
Deploy control-plane across multiple vCenters (vCenter Enhanced Linked Mode) #3625


Description

@jisnardo

Hi, I was able to deploy worker nodes across 3 different vCenters, but I have not found a way to set a different server for the control-plane nodes.

Would this be possible?

https://cloud-provider-vsphere.sigs.k8s.io/tutorials/deploying_cpi_with_multi_dc_vc_aka_zones

Imagine this:

vsphere1 (location1)

master1
master3
worker-vsphere1
worker-vsphere1
worker-vsphere1

vsphere2 (location2)

master2 ---> (placed in location1, expected location2)
worker-vsphere2
worker-vsphere2
worker-vsphere2

vsphere3 (location3)

master4 ---> (placed in location1, expected location3)
master5 ---> (placed in location1, expected location3)
workers-vsphere3
workers-vsphere3
workers-vsphere3

The VSphereDeploymentZone and VSphereFailureDomain objects look like this:

- apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
  kind: VSphereFailureDomain
  metadata:
    name: location1
  spec:
    region:
      autoConfigure: false
      name: location1
      tagCategory: k8s-region
      type: Datacenter
    topology:
      computeCluster: LOCATION1
      datacenter: LOCATION1
      datastore: LOCATION1
      networks:
      - VLAN_NET
    zone:
      autoConfigure: false
      name: location1
      tagCategory: k8s-zone
      type: ComputeCluster

- apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
  kind: VSphereFailureDomain
  metadata:
    name: location2
  spec:
    region:
      autoConfigure: false
      name: location2
      tagCategory: k8s-region
      type: Datacenter
    topology:
      computeCluster: LOCATION2
      datacenter: LOCATION2
      datastore: LOCATION2
      networks:
      - VLAN_NET
    zone:
      autoConfigure: false
      name: location2
      tagCategory: k8s-zone
      type: ComputeCluster
...
- apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
  kind: VSphereDeploymentZone
  metadata:
    name: location1
  spec:
    controlPlane: true
    failureDomain: location1
    placementConstraint:
      resourcePool: LOCATION1/Resources
    server: location1.test.com

- apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
  kind: VSphereDeploymentZone
  metadata:
    name: location2
  spec:
    controlPlane: true
    failureDomain: location2
    placementConstraint:
      resourcePool: LOCATION2/Resources
    server: location2.test.com
...

The VSphereCluster has failureDomainSelector: {} set.
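For reference, a minimal VSphereCluster sketch with that selector (the name my-cluster and the namespace are placeholders, not from the original; an empty failureDomainSelector matches all VSphereDeploymentZones):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
  name: my-cluster          # placeholder name
  namespace: default        # placeholder namespace
spec:
  server: location1.test.com   # note: a single server field for the whole cluster
  failureDomainSelector: {}    # empty selector matches every VSphereDeploymentZone
```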

The vsphere.conf looks like this:

https://cloud-provider-vsphere.sigs.k8s.io/cloud_config

apiVersion: v1
data:
  vsphere.conf: |
    global:
      port: 443
      insecureFlag: true
    vcenter:
      location1.test.com:
        secretName: cloud-provider-credentials-location1
        secretNamespace: kube-system
        datacenters:
        - 'LOCATION1'
        server: 'location1.test.com'
      location2.test.com:
        secretName: cloud-provider-credentials-location2
        secretNamespace: kube-system
        datacenters:
        - 'LOCATION2'
        server: 'location2.test.com'
    labels:
      region: k8s-region
      zone: k8s-zone
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: kube-system

Regards.

Metadata

Labels

lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.