Error during creation of azurerm_kubernetes_cluster resource: 400 BadRequest UnmarshalError #31580

@Timo-Weike

Description

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Terraform Version

1.10.6

AzureRM Provider Version

4.57.0

Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster

Terraform Plan Output

# module.<module-name>.module.aks.azurerm_kubernetes_cluster.this will be created
    resource "azurerm_kubernetes_cluster" "this" {
        ai_toolchain_operator_enabled       = false
        azure_policy_enabled                = true
        current_kubernetes_version          = (known after apply)
        dns_prefix                          = "<aks-name>"
        fqdn                                = (known after apply)
        http_application_routing_zone_name  = (known after apply)
        id                                  = (known after apply)
        image_cleaner_interval_hours        = 48
        kube_admin_config                   = (sensitive value)
        kube_admin_config_raw               = (sensitive value)
        kube_config                         = (sensitive value)
        kube_config_raw                     = (sensitive value)
        kubernetes_version                  = "1.34.1"
        location                            = "northcentralus"
        name                                = "<aks-name>"
        node_os_upgrade_channel             = "NodeImage"
        node_resource_group                 = (known after apply)
        node_resource_group_id              = (known after apply)
        oidc_issuer_enabled                 = true
        oidc_issuer_url                     = (known after apply)
        portal_fqdn                         = (known after apply)
        private_cluster_enabled             = true
        private_cluster_public_fqdn_enabled = true
        private_dns_zone_id                 = "None"
        private_fqdn                        = (known after apply)
        resource_group_name                 = "<some-rg>"
        role_based_access_control_enabled   = true
        run_command_enabled                 = true
        sku_tier                            = "Standard"
        support_plan                        = "KubernetesOfficial"
        tags                                = {
            ...
        }
        workload_identity_enabled           = true

        auto_scaler_profile (known after apply)

        azure_active_directory_role_based_access_control {
            admin_group_object_ids = [
                "<some-object-id>",
            ]
            azure_rbac_enabled     = true
            tenant_id              = (known after apply)
        }

        bootstrap_profile (known after apply)

        default_node_pool {
            host_encryption_enabled     = false
            kubelet_disk_type           = (known after apply)
            max_pods                    = 200
            name                        = "system"
            node_count                  = 3
            node_labels                 = (known after apply)
            orchestrator_version        = (known after apply)
            os_disk_size_gb             = (known after apply)
            os_disk_type                = "Managed"
            os_sku                      = (known after apply)
            scale_down_mode             = "Delete"
            temporary_name_for_rotation = "systemtemp"
            type                        = "VirtualMachineScaleSets"
            ultra_ssd_enabled           = false
            vm_size                     = "Standard_E4s_v5"
            vnet_subnet_id              = "/subscriptions/<subscription-id>/resourceGroups/<vnet-rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<snet-name>"
            workload_runtime            = (known after apply)

            upgrade_settings {
                drain_timeout_in_minutes      = 30
                max_surge                     = "10%"
                node_soak_duration_in_minutes = 0
            }
        }

        identity {
            principal_id = (known after apply)
            tenant_id    = (known after apply)
            type         = "SystemAssigned"
        }

        kubelet_identity (known after apply)

        maintenance_window_node_os {
            day_of_week = "Sunday"
            duration    = 4
            frequency   = "RelativeMonthly"
            interval    = 1
            start_date  = "2025-11-09T00:00:00Z"
            start_time  = "08:00"
            utc_offset  = "+00:00"
            week_index  = "Fourth"
        }

        network_profile {
            dns_service_ip      = "192.168.0.10"
            ip_versions         = (known after apply)
            load_balancer_sku   = "standard"
            network_data_plane  = "azure"
            network_mode        = (known after apply)
            network_plugin      = "azure"
            network_plugin_mode = "overlay"
            network_policy      = (known after apply)
            outbound_type       = "loadBalancer"
            pod_cidr            = "192.168.128.0/17"
            pod_cidrs           = (known after apply)
            service_cidr        = "192.168.0.0/17"
            service_cidrs       = (known after apply)

            load_balancer_profile {
                backend_pool_type           = "NodeIPConfiguration"
                effective_outbound_ips      = (known after apply)
                idle_timeout_in_minutes     = 30
                managed_outbound_ip_count   = (known after apply)
                managed_outbound_ipv6_count = (known after apply)
                outbound_ports_allocated    = 0
            }

            nat_gateway_profile (known after apply)
        }

        node_provisioning_profile (known after apply)

        upgrade_override {
            force_upgrade_enabled = false
        }

        windows_profile (known after apply)
    }

Debug Output/Panic Output

╷
│ Error: creating Kubernetes Cluster (Subscription: "1cfa2dc5-7ad0-42cd-a8a9-d411eee5afa9"
│ Resource Group Name: "prod-102-usnc-aks-2-rg"
│ Kubernetes Cluster Name: "aks-wcs-prod-usnc-2"): performing CreateOrUpdate: unexpected status 400 (400 Bad Request) with response: {
│   "code": "UnmarshalError",
│   "details": null,
│   "message": "Invalid request body. Converting request body to a managed cluster encountered error: parsing time \"\" as \"2006-01-02T15:04:05Z07:00\": cannot parse \"\" as \"2006\".",
│   "subcode": ""
│  }
│ 
│   with module.usnc_aks2.module.aks.azurerm_kubernetes_cluster.this,
│   on .terraform/modules/usnc_aks2.aks/cluster.tf line 29, in resource "azurerm_kubernetes_cluster" "this":
│   29: resource "azurerm_kubernetes_cluster" "this" {
│ 
╵

Expected Behaviour

The AKS cluster described in the plan is created successfully.

Actual Behaviour

The AzureRM provider sends a request body that the Azure management API rejects with 400 BadRequest (code UnmarshalError): an empty string appears where an RFC 3339 timestamp is expected, so the cluster is never created.
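The error does not name the offending field, but the only timestamp-bearing arguments in the plan are `maintenance_window_node_os.start_date` and the `upgrade_override` block, whose optional `effective_until` argument maps to an API-side timestamp. A speculative workaround, assuming the empty timestamp originates from `upgrade_override` (this is a guess; the provider schema names are from the azurerm 4.x documentation), would be to either remove the block or set an explicit value:

```hcl
# Hypothetical workaround sketch, not a confirmed fix: avoid serializing an
# empty effective_until timestamp by pinning an explicit RFC 3339 value
# (or by removing the upgrade_override block entirely).
resource "azurerm_kubernetes_cluster" "this" {
  # ... all other arguments unchanged from the plan above ...

  upgrade_override {
    force_upgrade_enabled = false
    effective_until       = "2026-01-01T00:00:00Z" # explicit instead of empty
  }
}
```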

Steps to Reproduce

No response

Important Factoids

No response

References

No response
