Code of Conduct
- I have read and agree to the Code of Conduct.
- Vote on this issue by adding a 👍 reaction to the initial description of the issue to help the maintainers prioritize.
- Do not leave "+1" or other comments that do not add relevant information or questions.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Terraform
v1.12.2
Terraform Provider
v0.17.0
VMware Cloud Foundation
v9.0.0
Description
I am unable to create a new workload domain and receive this error message:
"Cluster life cycle management should be vSphere Lifecycle Manager Image"
I didn't find any mention of this in the documentation.
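Based on the error text, VCF 9.0 seems to require clusters managed with a vSphere Lifecycle Manager image, but I couldn't find any way to express that through the resource. Purely as an illustration of what I expected to be able to write (these attribute names are hypothetical and do not exist in the v0.17.0 schema):

cluster {
  name = "sfo-w01-cl01"

  # Hypothetical attributes, not in the current provider schema:
  # select image-based lifecycle management and the cluster image to use.
  lifecycle_management = "VLCM_IMAGE"
  cluster_image_id     = "my-esx-9.0-image"
}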
In addition, the vm_size and storage_size parameters in the nested vcenter_configuration schema are documented as optional, but omitting them results in an input error during terraform apply. Also, regarding these parameters, there is no option to choose a default storage size. Is this a new requirement of VCF 9.0?
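As a workaround on my side, I can at least move the failure to plan time with Terraform's native variable validation. A minimal sketch, assuming the values are passed in through variables (the variable names and the accepted value lists below are my own guesses, not taken from the provider documentation):

variable "vcenter_vm_size" {
  type        = string
  description = "Appliance size for the workload domain vCenter."

  # Fails during terraform plan when an invalid value is supplied.
  validation {
    condition     = contains(["tiny", "small", "medium", "large", "xlarge"], var.vcenter_vm_size)
    error_message = "vcenter_vm_size must be one of: tiny, small, medium, large, xlarge."
  }
}

variable "vcenter_storage_size" {
  type        = string
  description = "Storage size for the workload domain vCenter."

  validation {
    condition     = contains(["lstorage", "xlstorage"], var.vcenter_storage_size)
    error_message = "vcenter_storage_size must be one of: lstorage, xlstorage."
  }
}

It would still be better if the provider itself marked these attributes as required (or validated them) so a plain configuration fails during terraform plan.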
I also didn't see a parameter for configuring vSphere Supervisor. I don't know yet whether this is a blocker, but having it would be helpful.
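For completeness, this is roughly the shape I expected for Supervisor enablement. Again, this is only a hypothetical sketch; nothing like it appears to exist in v0.17.0:

cluster {
  name = "sfo-w01-cl01"

  # Hypothetical nested block, not part of the current provider schema.
  supervisor {
    enabled            = true
    management_network = "sfo-w01-cl01-vds01-pg-mgmt"
  }
}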
Affected Resources or Data Sources
resource/vcf_domain
Terraform Configuration
terraform {
  required_providers {
    vcf = {
      source = "vmware/vcf"
    }
  }
}

provider "vcf" {
  sddc_manager_host     = var.sddc_manager_host
  sddc_manager_username = var.sddc_manager_username
  sddc_manager_password = var.sddc_manager_password
  allow_unverified_tls  = true
}

resource "vcf_network_pool" "domain_pool" {
  name = "wld-pool"

  network {
    gateway = "192.168.10.1"
    mask    = "255.255.255.0"
    mtu     = 9000
    subnet  = "192.168.10.0"
    type    = "VSAN"
    vlan_id = 100

    ip_pools {
      start = "192.168.10.5"
      end   = "192.168.10.50"
    }
  }

  network {
    gateway = "192.168.11.1"
    mask    = "255.255.255.0"
    mtu     = 9000
    subnet  = "192.168.11.0"
    type    = "vMotion"
    vlan_id = 100

    ip_pools {
      start = "192.168.11.5"
      end   = "192.168.11.50"
    }
  }
}

resource "vcf_host" "host1" {
  fqdn            = var.esx_host1_fqdn
  username        = "root"
  password        = var.esx_host1_pass
  network_pool_id = vcf_network_pool.domain_pool.id
  storage_type    = "VSAN"
}

resource "vcf_host" "host2" {
  fqdn            = var.esx_host2_fqdn
  username        = "root"
  password        = var.esx_host2_pass
  network_pool_id = vcf_network_pool.domain_pool.id
  storage_type    = "VSAN"
}

resource "vcf_host" "host3" {
  fqdn            = var.esx_host3_fqdn
  username        = "root"
  password        = var.esx_host3_pass
  network_pool_id = vcf_network_pool.domain_pool.id
  storage_type    = "VSAN"
}

resource "vcf_domain" "domain1" {
  name = "sfo-w01-vc01"

  sso {
    domain_name     = "vsphere.local"
    domain_password = "****"
  }

  vcenter_configuration {
    name            = "test-vcenter"
    datacenter_name = "test-datacenter"
    root_password   = var.vcenter_root_password
    ip_address      = "172.16.134.59"
    subnet_mask     = "255.255.255.192"
    gateway         = "172.16.134.1"
    fqdn            = "sfo-w01-vc01.vcf.lab"
    vm_size         = "small"
    storage_size    = "lstorage"
  }

  nsx_configuration {
    vip                        = "172.16.134.58"
    vip_fqdn                   = "sfo-w01-nsx01.vcf.lab"
    nsx_manager_admin_password = var.nsx_manager_admin_password

    nsx_manager_node {
      name        = "sfo-w01-nsx01a"
      ip_address  = "172.16.134.57"
      fqdn        = "sfo-w01-nsx01a.vcf.lab"
      subnet_mask = "255.255.255.192"
      gateway     = "172.16.134.1"
    }
  }

  cluster {
    name = "sfo-w01-cl01"

    host {
      id = vcf_host.host1.id

      vmnic {
        id       = "vmnic0"
        vds_name = "sfo-w01-cl01-vds01"
      }

      vmnic {
        id       = "vmnic1"
        vds_name = "sfo-w01-cl01-vds01"
      }
    }

    host {
      id = vcf_host.host2.id

      vmnic {
        id       = "vmnic0"
        vds_name = "sfo-w01-cl01-vds01"
      }

      vmnic {
        id       = "vmnic1"
        vds_name = "sfo-w01-cl01-vds01"
      }
    }

    host {
      id = vcf_host.host3.id

      vmnic {
        id       = "vmnic0"
        vds_name = "sfo-w01-cl01-vds01"
      }

      vmnic {
        id       = "vmnic1"
        vds_name = "sfo-w01-cl01-vds01"
      }
    }

    vds {
      name = "sfo-w01-cl01-vds01"

      portgroup {
        name           = "sfo-w01-cl01-vds01-pg-mgmt"
        transport_type = "MANAGEMENT"
      }

      portgroup {
        name           = "sfo-w01-cl01-vds01-pg-vsan"
        transport_type = "VSAN"
      }

      portgroup {
        name           = "sfo-w01-cl01-vds01-pg-vmotion"
        transport_type = "VMOTION"
      }
    }

    vsan_datastore {
      datastore_name       = "sfo-w01-cl01-ds-vsan01"
      failures_to_tolerate = 0
    }

    geneve_vlan_id = 200
  }
}
Debug Output
2025-06-22T12:18:41.559Z [ERROR] provider.terraform-provider-vcf_v0.17.0: Response contains error diagnostic: diagnostic_detail="Cluster life cycle management should be vSphere Lifecycle Manager Image" diagnostic_severity=ERROR diagnostic_summary="Empty Summary: This is always a bug in the provider and should be reported to the provider developers." tf_proto_version=6.9 tf_provider_addr=registry.terraform.io/vmware/vcf tf_rpc=ApplyResourceChange @module=sdk.proto tf_req_id=7e95b963-1193-2450-2bd6-b6e96f9d41c6 tf_resource_type=vcf_domain @caller=github.com/hashicorp/[email protected]/tfprotov6/internal/diag/diagnostics.go:58 timestamp=2025-06-22T12:18:41.559Z
Panic Output
No response
Expected Behavior
To have the option to configure a vSphere Lifecycle Manager image for the cluster, so that the workload domain can actually be created.
To receive an error during the terraform plan step when vm_size and storage_size are missing, instead of a failure at apply time.
To have an option to configure vSphere Supervisor.
Actual Behavior
The terraform apply fails with the error above, so the vcf_domain resource is never created.
Steps to Reproduce
terraform apply
Environment Details
No response
Screenshots
No response
References
No response