
S3 module not accepting given lifecycle rule #313

Open
@alishah730

Description


Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: version = "4.6.0"

  • Terraform version: Terraform v1.10.5

  • Provider version(s): provider registry.terraform.io/hashicorp/aws v5.86.1

Reproduction Code [Required]

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      #checkov:skip=CKV_TF_1
      source  = "hashicorp/aws"
      version = "5.86.1"
    }
  }
}

# This provider is to deploy all regional resources like Lambda functions, VPCs etc.
provider "aws" {
  region = "us-east-1"
}

module "s3_provisioning_tfstate_bucket" {
  #checkov:skip=CKV_TF_1
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "4.6.0"
  bucket  = "ali-module-s3-test" # Namespace, AWS A/C Id & Region are added to the bucket name to make it unique
  acl     = "private"

  control_object_ownership = true
  object_ownership         = "ObjectWriter"
  force_destroy            = true

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  versioning = {
    enabled = true
  }

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        # kms_master_key_id = module.kms.key_arn
        sse_algorithm = "AES256" #"aws:kms"
      }
    }
  }

  lifecycle_rule = [
    {
      id      = "provisionSfnExecutionLogs"
      enabled = true

      filter = {
        prefix = "provisionSfnExecutionLogs/"
      }

      expiration = {
        days                         = 7
        expired_object_delete_marker = true
      }

      noncurrent_version_expiration = {
        newer_noncurrent_versions = 1
        days                      = 7
      }
    }
  ]
}


Steps to reproduce the behavior:

terraform init
terraform plan
terraform apply

Expected behavior

It should create an S3 bucket as per the given definition.

Actual behavior

It throws an error on the S3 lifecycle rule:

Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to module.s3_ali_state_bucket.aws_s3_bucket_lifecycle_configuration.this[0], provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value:
│ .rule[0].expiration[0].expired_object_delete_marker: was cty.True, but now cty.False.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Terminal Output Screenshot(s)

After terraform apply:


Additional context
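
A plausible cause (an assumption based on the S3 API's documented behavior, not confirmed by the maintainers here): S3 does not allow ExpiredObjectDeleteMarker to be combined with Days or Date in the same Expiration action, so after apply the service reports the flag as unset and the provider sees `expired_object_delete_marker` flip from true to false. Under that assumption, a sketch of a workaround is to drop the flag from the rule that sets `days` (rule id and prefix carried over from the reproduction above):

```hcl
  lifecycle_rule = [
    {
      id      = "provisionSfnExecutionLogs"
      enabled = true

      filter = {
        prefix = "provisionSfnExecutionLogs/"
      }

      # `expired_object_delete_marker` removed: S3 rejects (or silently
      # drops) it when `days` is set on the same expiration action.
      expiration = {
        days = 7
      }

      noncurrent_version_expiration = {
        newer_noncurrent_versions = 1
        days                      = 7
      }
    }
  ]
```

If expired delete markers also need cleaning up, that could go in a second lifecycle rule whose `expiration` block sets only `expired_object_delete_marker = true` and nothing else.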
