Description
Cluster API (which is used inside VMware Tanzu Kubernetes Grid) keeps track of clusters via a `Cluster` CRD and an associated `Secret` resource that contains a kubeconfig for the cluster. These resources are defined in (and reconciled from) a "management cluster" that is under administrator control.
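For reference, that pair of resources in the management cluster looks roughly like this (names are illustrative, and the exact `apiVersion` depends on the Cluster API release; by convention the kubeconfig lives in a `Secret` named `<cluster-name>-kubeconfig` with the kubeconfig under the `value` key):

```yaml
# A workload cluster tracked by the management cluster (illustrative names)
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cluster-a
  namespace: tenant-1
---
# The kubeconfig Secret that Cluster API creates alongside it
apiVersion: v1
kind: Secret
metadata:
  name: cluster-a-kubeconfig
  namespace: tenant-1
data:
  value: <base64-encoded kubeconfig>
```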
The `Cluster` resource can be labeled, which means you can define multi-cluster "things" (like a GSLB policy) in terms of label-selectors across those resources. As clusters come and go, your multi-cluster thing can be informed and can update dynamically.
So brainstorming on an integration, maybe it looks a bit like this:
```diff
 kind: GSLBConfig
 metadata:
   name: gslb-config
   namespace: avi-system
 spec:
   gslbLeader:
     credentials: gslb-avi-secret
     controllerVersion: 18.2.9
     controllerIP: 10.10.10.10
-  memberClusters:
-    - clusterContext: cluster1-admin
-    - clusterContext: cluster2-admin
+  clusterApiMembers:
+    clusterSelector:
+      gslb: enabled
```
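On the Cluster API side, a cluster would then opt in simply by carrying the label that the selector above matches (again, illustrative names):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cluster1
  namespace: default
  labels:
    gslb: enabled
```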
And similarly, allowing a label-selector within the `GlobalDeploymentPolicy`.
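For instance, a rough sketch (assuming the selector would sit alongside, or replace, the GDP's existing list of member clusters; the field names here are made up in the same spirit as `clusterApiMembers` above):

```yaml
kind: GlobalDeploymentPolicy
metadata:
  name: global-gdp
  namespace: avi-system
spec:
  # instead of (or in addition to) the existing matchClusters list:
  clusterApiMembers:
    clusterSelector:
      gslb: enabled
```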
Pushing that a bit further, there's a pattern in Cluster API deployments where an admin uses namespaces within the management cluster to provide isolation. So, for example, within the management cluster the namespace `tenant-1` can have `Cluster` resources named `cluster-a` and `cluster-b`, and a different management namespace `tenant-2` can also have `Cluster` resources named `cluster-a` and `cluster-b`. This can be used to provide network isolation between the tenants. Depending on the use-case, I could see the `GSLBConfig` being scoped within one of those management-cluster namespaces, or potentially being able to span them. For example:
```diff
 kind: GSLBConfig
 metadata:
   name: gslb-config
   namespace: avi-system
 spec:
   gslbLeader:
     credentials: gslb-avi-secret
     controllerVersion: 18.2.9
     controllerIP: 10.10.10.10
   clusterApiMembers:
     clusterSelector:
       gslb: enabled
+    managementNamespaceSelector:
+      env: prod
```
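To make that concrete, the intent is that the config above would match any `Cluster` labeled `gslb: enabled` whose management-cluster namespace is labeled `env: prod`, e.g.:

```yaml
# A tenant namespace in the management cluster, labeled for GSLB membership
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-1
  labels:
    env: prod
---
# A workload cluster in that namespace; matched by both selectors above
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cluster-a
  namespace: tenant-1
  labels:
    gslb: enabled
```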