Code of Conduct
- I have read and agree to the project's Code of Conduct.
- Vote on this issue by adding a 👍 reaction to the initial issue description to help the maintainers prioritize.
- Do not leave "+1" or other comments that do not add relevant information or questions.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Description
The workload domain Terraform code supports static pools for VTEPs only at domain creation time. When adding clusters to an existing domain, the cluster resource supports only DHCP for VTEP addressing. This is inconsistent with environments where DHCP is intentionally not used on the Geneve VLAN, and it prevents users from scaling domains to multiple clusters while keeping a uniform IP-assignment strategy.
Enabling pool-based VTEP assignment at cluster creation would align the provider with existing domain-level capabilities and with NSX-T operational best practices where Transport Nodes can be provisioned from IP Pools (via Transport Node Profiles / Host Switch Profiles).
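For comparison, a minimal sketch of the existing domain-level block (field names per the registry docs linked under References; the host, NSX, and vDS attributes the real resource requires are elided here):

# Existing capability at domain creation time, shown only for comparison.
resource "vcf_domain" "example" {
  name = "wld01"
  # ...NSX and other domain settings elided...
  cluster {
    name = "cl01"
    # ...host and vDS settings elided...
    ip_address_pool {
      name = "static-ip-pool-01"
      subnet {
        cidr    = "10.0.11.0/24"
        gateway = "10.0.11.250"
        ip_address_pool_range {
          start = "10.0.11.50"
          end   = "10.0.11.70"
        }
      }
    }
  }
}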
Use Case(s)
- DHCP-less Fabrics: Providers that prohibit DHCP on overlay VLANs need to add clusters to an existing domain but cannot, because the provider insists on DHCP for the new clusters' VTEPs.
- Consistency Across Lifecycle: A domain initially created with an NSX VTEP IP pool should be able to grow with additional clusters using the same or another pool, without switching to DHCP mid-lifecycle.
- Regulatory/Operational Constraints: In regulated or offline environments, DHCP services on the Geneve VLAN are disallowed; static pools are the only compliant option.
Potential Configuration
Below is a proposed, backward-compatible addition to the vcf_cluster (or equivalent) resource that mirrors how the domain resource already handles VTEP pools.
Reference an existing pool (back-compat friendly)
resource "vcf_cluster" "cl02" {
workload_domain_id = vcf_domain.example.id
name = "cl02"
nsx_vtep_ip_assignment {
mode = "IPPOOL" # default remains "DHCP"
ip_pool_id = "f1b1a6a0-1234-4c77-9f6a-abcde0123456"
# or:
# ip_pool_name = "static-ip-pool-01"
}
}
Define the pool inline (create-if-missing)
resource "vcf_cluster" "cl02" {
workload_domain_id = vcf_domain.example.id
name = "cl02"
nsx_vtep_ip_assignment {
mode = "IPPOOL"
# If ip_pool_id/name not provided, provider creates/ensures this pool in NSX:
ip_address_pool {
name = "static-ip-pool-01"
subnet {
cidr = "10.0.11.0/24"
gateway = "10.0.11.250"
ip_address_pool_range {
start = "10.0.11.50"
end = "10.0.11.70"
}
ip_address_pool_range {
start = "10.0.11.80"
end = "10.0.11.150"
}
}
}
}
}
Proposed semantics:
- mode defaults to "DHCP", so there is no breaking change (see the back-compat sketch below).
- If mode == "IPPOOL", accept exactly one of:
  - ip_pool_id / ip_pool_name (reference to an existing pool), or
  - ip_address_pool { … } (inline definition).
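For completeness, a cluster that simply omits the block would keep today's DHCP behavior, so existing configurations would not change:

# Back-compat sketch: no nsx_vtep_ip_assignment block, so VTEP
# addressing falls back to DHCP, exactly as the resource behaves today.
resource "vcf_cluster" "cl03" {
  workload_domain_id = vcf_domain.example.id
  name               = "cl03"
  # ...host and vDS settings elided...
}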
References
NSX-T Data Center: Transport Node VTEP addressing via IP Pools (Transport Node Profiles / Host Switch Profiles).
Docs section: https://techdocs.broadcom.com/us/en/vmware-tanzu/standalone-components/tanzu-kubernetes-grid-integrated-edition/1-20/tkgi/nsxt-install-vtep.html
VMware Cloud Foundation / SDDC Manager API: Cluster creation/expansion endpoints that accept network intent for VTEP assignment; parity with domain-level “static pool” options.
Docs section: Create Cluster in Workload Domain; Network settings for NSX-T VTEPs. (ipAddressPoolsSpec)
Terraform VCF Provider: Existing domain resource fields that already support static pool for VTEPs - this request proposes analogous fields at the cluster resource level for consistent UX.
Clusters part of the domain can be configured to use IP address pools to assign IP addresses for the TEP interfaces of the hosts by specifying IpAddressPoolSpec inside the NsxTClusterSpec. If the IpAddressPoolSpec is not specified in the input spec, IP addresses for the TEP interfaces of the host are assigned from DHCP.
https://registry.terraform.io/providers/vmware/vcf/latest/docs/resources/domain#nestedblock--cluster--ip_address_pool
When creating a cluster using JSON specs and the SDDC Manager API, we use this construct to point to an existing ipAddressPool:
"nsxClusterSpec": {
"nsxTClusterSpec": {
"geneveVlanId": 1234,
"ipAddressPoolsSpec": [
{
"name": "AZ1-SiteA-v3-03-tep"
}
],
"uplinkProfiles": [ {
"name": "AZ1-SiteA-v3-03-UplinkProfile",
"teamings": [ {
"name": "DEFAULT",
"activeUplinks": [ "uplink-1", "uplink-2" ],
"policy": "LOADBALANCE_SRCID",
"standByUplinks": [ ]
}
],
"transportVlan": 1234
} ]
}
},
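Under the proposal above, the same by-name reference would map naturally onto the proposed ip_pool_name attribute (a sketch; the attribute does not exist yet):

resource "vcf_cluster" "cl02" {
  workload_domain_id = vcf_domain.example.id
  name               = "cl02"

  # Parity with the JSON above: reference the pre-created NSX IP pool
  # by name via the proposed ip_pool_name attribute.
  nsx_vtep_ip_assignment {
    mode         = "IPPOOL"
    ip_pool_name = "AZ1-SiteA-v3-03-tep"
  }
}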