
Forced Replacement due to Change in Resource ID Casing #32031

@ozen10

Description


Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

When rerunning Terraform with no configuration changes, it forces a replacement of an Azure resource due to a change in the casing of the resource ID. Azure Resource Manager is case-insensitive, and Microsoft confirms that API responses may return different casings for the same resource name or ID. The Terraform AzureRM provider, however, treats the casing difference as a literal change, causing an unnecessary destroy/create cycle rather than normalizing the ID.

In our situation, even a small change in the casing of the private_dns_zone_id results in Terraform proposing a full replacement of the entire AKS cluster. This isn’t just an inconvenience — it triggers major disruption across our infrastructure and operations, especially in our production environment, where stability is absolutely critical.

We raised this with Microsoft Support, and here is their feedback:

What is happening?
You are seeing Terraform mark resources for replacement due to a difference in casing in the Azure Resource ID. For example:

  • Terraform state holds: .../resourcegroups/.../microsoft.containerregistry/...
  • Azure API returns: .../resourceGroups/.../Microsoft.ContainerRegistry/...

Although both refer to the exact same resource, Terraform treats them as different values and proposes a replacement, which is unexpected and disruptive.

Why is this happening?
This is a known behaviour in the Terraform AzureRM provider. The Azure ARM API is case-insensitive (meaning resourcegroups and resourceGroups point to the same resource), but it does not always return values in a consistent casing. The Terraform AzureRM provider, however, compares resource ID values as case-sensitive strings. When there is a mismatch in casing between what is stored in Terraform state and what the Azure API returns, Terraform incorrectly treats it as a change and forces a replacement.
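As a quick illustration (using the truncated example IDs from above, not real resource IDs), Terraform's own lower() function shows the two values are equivalent once normalized:

```hcl
locals {
  # Hypothetical, truncated IDs taken from the example above.
  state_id = "/subscriptions/.../resourcegroups/.../microsoft.containerregistry/..."
  api_id   = "/subscriptions/.../resourceGroups/.../Microsoft.ContainerRegistry/..."

  # Evaluates to true: the two IDs differ only in casing.
  ids_match = lower(local.state_id) == lower(local.api_id)
}
```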

What is recommended by Microsoft
We recommend raising a new issue directly with HashiCorp's Terraform team on GitHub.

Terraform Version

1.14.0

AzureRM Provider Version

3.54

Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster, azurerm_role_assignment, etc.

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "this" {

  name                                = 
  location                            = 
  resource_group_name                 = 
  dns_prefix                          = 
  private_dns_zone_id                 = 
  private_cluster_public_fqdn_enabled = 
  kubernetes_version                  = 
  private_cluster_enabled             = true
  role_based_access_control_enabled   = true
  run_command_enabled                 = false
  local_account_disabled              = true 
  node_resource_group                 = 
  sku_tier                            = 
  workload_identity_enabled           = true 
  oidc_issuer_enabled                 = true 
  image_cleaner_enabled               = true
  image_cleaner_interval_hours        = 24
  azure_active_directory_role_based_access_control {
    azure_rbac_enabled = true
    tenant_id          = 
  }
  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay"
    pod_cidr            =   # Avoid overlap with cidr of svc and nodes
    load_balancer_sku   = "standard"
    dns_service_ip      =   # Based on the service cidr
    service_cidr        =   # Avoid overlap with cidr of pods and nodes
    network_policy      = "cilium"
    network_data_plane  = "cilium"
  }
  web_app_routing {
    dns_zone_ids             = 
    default_nginx_controller = "None"
  }
  monitor_metrics {
    annotations_allowed = 
    labels_allowed      = 
  }
  identity {
    type         = "UserAssigned"
    identity_ids = 
  }

  kubelet_identity {
    client_id                 = 
    object_id                 = 
    user_assigned_identity_id = 
  }
  default_node_pool {
    name                         = "system"
    auto_scaling_enabled         = true
    min_count                    = 
    max_count                    = 
    vm_size                      = 
    zones                        = 
    os_sku                       = 
    type                         = "VirtualMachineScaleSets"
    max_pods                     = 
    node_labels                  = 
    vnet_subnet_id               = 
    orchestrator_version         = 
    only_critical_addons_enabled = true
    host_encryption_enabled      = true
    temporary_name_for_rotation  = "tempsystem"
    upgrade_settings {
      drain_timeout_in_minutes      = 0
      max_surge                     = "10%"
      node_soak_duration_in_minutes = 0
    }
  }
  http_proxy_config {
    http_proxy  = 
    https_proxy = 
    no_proxy    = 
    trusted_ca  = 
  }
  oms_agent {
    log_analytics_workspace_id      = 
    msi_auth_for_monitoring_enabled = true
  }
  microsoft_defender {
    log_analytics_workspace_id = 
  }
  key_vault_secrets_provider {
    secret_rotation_enabled = true # To enable the secret store CSI driver on the AKS cluster.
  }
  storage_profile {
    blob_driver_enabled         = true # To enable the Blob CSI driver on the AKS cluster.
    disk_driver_enabled         = true # To enable the Disk CSI driver on the AKS cluster.
    file_driver_enabled         = true # To enable the File CSI driver on the AKS cluster.
    snapshot_controller_enabled = true # To enable the Snapshot controller.
  }
  azure_policy_enabled = true # To enable and use Azure Policy with Kubernetes cluster
  
  automatic_upgrade_channel = "patch"     # for kubernetes upgrade
  node_os_upgrade_channel   = "NodeImage" # for node image upgrade
  timeouts {
    create = "60m"
  }
  maintenance_window_auto_upgrade {
    frequency   = 
    interval    = 
    duration    = 
    day_of_week = 
    week_index  = 
    start_time  = 
    utc_offset  = 
  }
  maintenance_window_node_os {
    frequency   = 
    interval    = 
    duration    = 
    day_of_week = 
    start_time  = 
    utc_offset  = 
  }
}
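
As a possible stopgap (not a fix, since it also masks genuine changes to the argument), diffs on the affected attribute can be suppressed with an ignore_changes lifecycle rule. This is only a sketch, assuming the casing fluctuation is the only change expected on private_dns_zone_id:

```hcl
resource "azurerm_kubernetes_cluster" "this" {
  # ... existing arguments as above ...

  lifecycle {
    # Workaround sketch: ignore diffs on the attribute whose casing
    # fluctuates between plans. Note this also hides real changes
    # to the value, so use with caution.
    ignore_changes = [private_dns_zone_id]
  }
}
```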

Debug Output/Panic Output

# module.aks_overlay.azurerm_kubernetes_cluster.this must be replaced
-/+ resource "azurerm_kubernetes_cluster" "this" {

~ private_dns_zone_id                 = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-C_000000-westeurope-P00000-00-0-000-0-NaC/providers/Microsoft.Network/privateDnsZones/00-0-000-0.privatelink.westeurope.azmk8s.io" -> "/subscriptions/0000000-0000-0000-0000-000000000000/resourceGroups/rg-c_00000-westeurope-p00000-00-0-000-0-nac/providers/Microsoft.Network/privateDnsZones/00-0-000-0.privatelink.westeurope.azmk8s.io" # forces replacement

}


#summarized version - showing only the cause of replacement

All uppercase or mixed-case letters were converted to lowercase:
C → c
P00000 → p00000
NaC → nac

Expected Behaviour

If Terraform is simply rerun and no argument values have changed, a difference only in the casing of a resource ID should not force replacement of Azure resources.

Actual Behaviour

Due to a small change in the casing of a resource ID, Terraform forces a replacement. In our situation, a casing change in private_dns_zone_id results in Terraform proposing a full replacement of the entire AKS cluster. This is not just an inconvenience: it triggers major disruption across our infrastructure and operations, especially in our production environment, where stability is critical.

Steps to Reproduce

No response

Important Factoids

No response

References
