
Catching the correct state on failed deployment updates #502

@SimonKni


Code of Conduct

This project has a Code of Conduct that all participants are expected to understand and follow.

vRA Version

  • Aria Automation 8.12.2.31329 (21949837)

Terraform Version

  • v1.4.6

vRA Terraform Provider Version

  • v0.7.3

Affected Resource(s)

  • vra_deployment

Description

At the moment the vra provider reads the deployment's last request, rather than the last successful request, when comparing state against the provided inputs. After a failed update, applying the same plan again therefore does nothing, because the current Terraform plan and the inputs recorded on the deployment already match.

A solution would be to change the hardcoded API call used to query a deployment's inputs: instead of the current call, which returns the updated inputs whether or not the update succeeded, query the deployment events API, look up the last successful request, and take the inputs from there.

So, for example, switch from "GET /deployment/api/deployments/{id}?apiVersion=2020-08-25&expand=resources&expand=lastRequest" to "GET /deployment/api/deployments/{id}/userEvents?size=100&apiVersion=2020-08-25" and fetch the inputs of the last successful request.
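As a rough illustration, a minimal Go sketch of that lookup (the provider is written in Go) could look like the following. Note that the response field names (content, status, inputs), the "SUCCESSFUL" status value, the newest-first event ordering, and the bearer-token header are assumptions for illustration only and would need to be checked against the actual vRA API schema.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// userEvent models one entry of the userEvents response.
// Field names are assumptions, not taken from the vRA API schema.
type userEvent struct {
	Status string                 `json:"status"`
	Inputs map[string]interface{} `json:"inputs"`
}

// userEventsPage models the assumed paging envelope of the response.
type userEventsPage struct {
	Content []userEvent `json:"content"`
}

// lastSuccessfulInputs fetches a deployment's user events and returns the
// inputs of the most recent request whose status is "SUCCESSFUL".
func lastSuccessfulInputs(baseURL, token, deploymentID string) (map[string]interface{}, error) {
	url := fmt.Sprintf("%s/deployment/api/deployments/%s/userEvents?size=100&apiVersion=2020-08-25",
		baseURL, deploymentID)

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token) // auth scheme assumed

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var page userEventsPage
	if err := json.NewDecoder(resp.Body).Decode(&page); err != nil {
		return nil, err
	}

	// Events are assumed to be ordered newest-first, so the first match
	// is the last successful request.
	for _, ev := range page.Content {
		if ev.Status == "SUCCESSFUL" {
			return ev.Inputs, nil
		}
	}
	return nil, fmt.Errorf("no successful request found for deployment %s", deploymentID)
}

func main() {
	// Deployment ID taken from the debug output below; URL and token are placeholders.
	inputs, err := lastSuccessfulInputs("https://vra.example.com", "API-TOKEN",
		"19314130-ee64-4973-9beb-4a82374267e4")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(inputs)
}

The provider's read logic could then compare Terraform state against these inputs instead of the inputs returned by expand=lastRequest.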

Terraform Configuration Files

Config for creating the deployment:

resource "vra_deployment" "sk_tf_vm" {
  name        = "sk_tf_vm"
  description = "VM Deployment"

  catalog_item_id = var.vra_catalog_vm_id
  project_id   = var.vra_project_id

  inputs = {
    osImage             = "Ubuntu Server 20.04 LTS"
    cpuCount            = 2
    memoryMB            = 4096
    vmCount             = 1
  }

  timeouts {
    create = "30m"
    delete = "30m"
    update = "30m"
  }
}

Config for updating the deployment (invalid value to force a failed update):

resource "vra_deployment" "sk_tf_vm" {
  name        = "sk_tf_vm"
  description = "VM Deployment"

  catalog_item_id = var.vra_catalog_vm_id
  project_id   = var.vra_project_id

  inputs = {
    osImage             = "Ubuntu Server 20.04 LTS"
    cpuCount            = -1
    memoryMB            = 4096
    vmCount             = 1
  }

  timeouts {
    create = "30m"
    delete = "30m"
    update = "30m"
  }
}

Expected Behavior

After a failed update, the terraform plan command should again show "Plan: 0 to add, 1 to change, 0 to destroy."

Actual Behavior

After a failed update, the terraform plan command shows "No changes. Your infrastructure matches the configuration."

Steps to Reproduce

  1. Use the first config to create a valid VM deployment with terraform plan -out "vra.plan" and terraform apply "vra.plan".
  2. Use the second config to update the deployment with invalid values (again terraform plan -out "vra.plan" and terraform apply "vra.plan").
  3. After the reported failure, repeat step 2.

Screenshots

N/A

Debug Output

2023-09-01T10:58:15.694+0200 [INFO] provider.terraform-provider-vra_v0.7.3: 2023/09/01 10:58:15 GET /deployment/api/deployments/19314130-ee64-4973-9beb-4a82374267e4?apiVersion=2020-08-25&expand=resources&expand=lastRequest HTTP/1.1: timestamp=2023-09-01T10:58:15.692+0200

Panic Output

N/A

Important Factoids

N/A

References

VMware Support reference: SR23443396206

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
