
aws_msk_configuration failed during AWS MSK version upgrade #16

Closed as not planned
@ascpikmin

Description


When I update the Kafka version through the module, the apply fails on the aws_msk_configuration resource: the version change forces the resource to be destroyed and recreated, and the destroy is rejected because the configuration is still in use by the MSK cluster.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if your state is stored remotely, which is hopefully the best practice you are already following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 2.3.0

  • Terraform version: 1.6.3

  • Provider version(s):
    provider registry.terraform.io/hashicorp/aws v5.26.0
    provider registry.terraform.io/hashicorp/random v3.5.1

Reproduction Code [Required]

module "msk_cluster" {

  depends_on = [module.s3_bucket_for_logs, module.cluster_sg, module.kms]
  source     = "github.com/terraform-aws-modules/terraform-aws-msk-kafka-cluster?ref=v2.3.0"

  name                   = local.msk_cluster_name
  kafka_version          = var.kafka_version
  number_of_broker_nodes = var.number_of_broker_nodes
  enhanced_monitoring    = var.enhanced_monitoring

  broker_node_client_subnets  = var.broker_node_client_subnets
  broker_node_instance_type   = var.broker_node_instance_type
  broker_node_security_groups = concat(
    [for sg in module.cluster_sg : sg.security_group_id],
    var.extra_security_groups_ids
  )

  broker_node_storage_info = {
    ebs_storage_info = { volume_size = var.volume_size }
  }

  encryption_in_transit_client_broker = var.encryption_in_transit_client_broker
  encryption_in_transit_in_cluster    = var.encryption_in_transit_in_cluster
  encryption_at_rest_kms_key_arn      = module.kms.key_arn

  jmx_exporter_enabled                   = var.jmx_exporter_enabled
  node_exporter_enabled                  = var.node_exporter_enabled
  cloudwatch_logs_enabled                = var.cloudwatch_logs_enabled
  s3_logs_enabled                        = var.s3_logs_enabled
  s3_logs_bucket                         = module.s3_bucket_for_logs.s3_bucket_id
  s3_logs_prefix                         = var.s3_logs_prefix
  cloudwatch_log_group_retention_in_days = var.cloudwatch_log_group_retention_in_days
  cloudwatch_log_group_kms_key_id        = var.cloudwatch_log_group_kms_key_id
  configuration_server_properties        = var.configuration_server_properties
  configuration_name                     = "${local.msk_cluster_name}-${replace(var.kafka_version, ".", "-")}"
  configuration_description              = local.msk_cluster_name

  tags = merge(
    var.tags,
    {
      Name = local.msk_cluster_name
    }
  )
}
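
For context on why the version bump forces replacement: configuration_name embeds var.kafka_version, and name is a force-new argument on aws_msk_configuration, so renaming the configuration destroys and recreates it. A minimal sketch of the rename (version values are hypothetical):

locals {
  # Hypothetical before/after values to illustrate the rename
  msk_cluster_name = "example"
  kafka_version    = "3.5.1" # previously "3.4.0"
}

output "configuration_name" {
  # Same expression as in the module call above; bumping the version
  # changes the name, and a changed name forces a new aws_msk_configuration.
  value = "${local.msk_cluster_name}-${replace(local.kafka_version, ".", "-")}"
}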

Steps to reproduce the behavior:

  1. Apply the module with an initial kafka_version.
  2. Change var.kafka_version to a newer Kafka version.
  3. Run terraform apply again.

Expected behavior

The new aws_msk_configuration resource should be created before the old one is deleted.

Actual behavior

Terraform tries to delete the previous aws_msk_configuration resource before creating the new one, and the deletion fails because the configuration is still associated with the cluster.

Terminal Output Screenshot(s)

module.msk_cluster.aws_msk_configuration.this[0]: Destroying... [id=arn:aws:kafka:eu-west-1:xxxxxxxxxxxxxx:configuration/example/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]

Error: deleting MSK Configuration (arn:aws:kafka:eu-west-1:xxxxxxxxxxxxxx:configuration/example/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx): BadRequestException: Configuration is in use by one or more clusters. Dissociate the configuration from the clusters.
 {
   RespMetadata: {
     StatusCode: 400,
     RequestID: "0bb9fe5d-ee26-4dad-8a81-8c3fa6c06483"
   },
   InvalidParameter: "arn",
   Message_: "Configuration is in use by one or more clusters. Dissociate the configuration from the clusters."
 }

Additional context

If you set the configuration_name parameter to a dynamic name (as above, embedding the Kafka version) and manually add lifecycle { create_before_destroy = true } to the aws_msk_configuration resource, the upgrade applies successfully, so perhaps that is the fix.
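
A minimal sketch of that workaround as it might look inside the module (the resource shape and the server_properties value are assumptions for illustration, not the module's actual internals):

resource "aws_msk_configuration" "this" {
  # The name embeds the Kafka version so the old and new configurations
  # can coexist while the cluster is switched over.
  name           = "${local.msk_cluster_name}-${replace(var.kafka_version, ".", "-")}"
  kafka_versions = [var.kafka_version]
  description    = local.msk_cluster_name

  # Illustrative properties only; in practice these would come from
  # configuration_server_properties.
  server_properties = <<-PROPERTIES
    auto.create.topics.enable = false
  PROPERTIES

  lifecycle {
    # Create the replacement configuration first; the old one is deleted
    # only after the cluster points at the new ARN.
    create_before_destroy = true
  }
}

Note that create_before_destroy only works here because the name changes with the version: MSK configuration names must be unique, so the old and new configurations could not coexist under the same name.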
