Thanks for this great work and for welcoming new ideas in #25 (comment)!
In our project, we need to include ingresses from multiple k8s clusters in our homer dashboard, not only the cluster where the homer-operator is deployed. This enables using the homer features (such as search) on discovered ingresses/routes from multiple clusters. The alternative of deploying homer-operator in each k8s cluster would not enable such a global search across all ingresses/routes.
The suggestion would be that homer-operator accepts a list of clusters to perform the discovery on. This could be an array of secret references, each pointing to a secret containing a kubeconfig.
See https://fluxcd.io/flux/components/kustomize/kustomizations/#kubeconfig-remote-clusters for inspiration on how the fluxcd kustomize-controller supports specifying a remote cluster kubeconfig:

.spec.kubeConfig.secretRef: Secret-based authentication using a static kubeconfig stored in a Kubernetes Secret in the same namespace as the Kustomization.

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: prod-kubeconfig
type: Opaque
stringData:
  value.yaml: |
    apiVersion: v1
    kind: Config
    # ...omitted for brevity
```

Note: The KubeConfig should be self-contained and not rely on binaries, environment, or credential files from the kustomize-controller Pod.
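To make the proposal concrete, here is a rough sketch of what such a field could look like in the operator's Go API types. This is purely illustrative and not existing homer-operator API: the `RemoteClusterSpec` and `SecretKeyReference` names, the `remoteClusters` field, and the `value.yaml` key convention (borrowed from flux) are all assumptions.

```go
// Illustrative sketch only; these types do not exist in homer-operator today.
// A config/CRD spec could carry:
//
//	RemoteClusters []RemoteClusterSpec `json:"remoteClusters,omitempty"`
package v1alpha1

// RemoteClusterSpec points the operator at one remote cluster for
// ingress/httproute discovery, mirroring flux's .spec.kubeConfig.secretRef.
type RemoteClusterSpec struct {
	// Name identifies the cluster in logs and generated dashboard items.
	Name string `json:"name"`
	// SecretRef references a Secret (in the operator's namespace) that
	// holds a self-contained kubeconfig.
	SecretRef SecretKeyReference `json:"secretRef"`
}

// SecretKeyReference selects a key of a Secret.
type SecretKeyReference struct {
	// Name of the Secret.
	Name string `json:"name"`
	// Key within the Secret; could default to "value.yaml" as flux does.
	Key string `json:"key,omitempty"`
}
```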
Currently, the service discovery is performed with the projected token of a single service account, interacting only with the current cluster:
homer-operator/charts/homer-operator/values.yaml
Lines 51 to 55 in f61a096

```yaml
serviceAccount:
  create: true
  automount: true
  annotations: {}
  name: ""
```
```yaml
{{- if .Values.serviceAccount.create -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "homer-operator.serviceAccountName" . }}
  namespace: {{ include "homer-operator.namespace" . }}
  labels:
    {{- include "homer-operator.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}
```
The token of this service account is injected into the operator Pod using service account token projection (https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#launch-a-pod-using-service-account-token-projection):

```yaml
serviceAccountName: {{ include "homer-operator.serviceAccountName" . }}
```
The new feature would require creating, for each remote k8s cluster, a new k8s client and new ingress/httproute controllers for the service discovery, whereas the current code creates a single client and a single set of ingress/httproute controllers (see the sketch after the excerpt below):
Lines 116 to 134 in f61a096
```go
ingressController := &controller.GenericResourceReconciler{
	Client: mgr.GetClient(),
	Scheme: mgr.GetScheme(),
}
if err = ingressController.SetupIngressController(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "Ingress")
	os.Exit(1)
}
if enableGatewayAPI {
	httpRouteController := &controller.GenericResourceReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}
	if err = httpRouteController.SetupHTTPRouteController(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "HTTPRoute")
		os.Exit(1)
	}
}
```
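One possible direction, sketched below: controller-runtime's `cluster` package can build an additional `Cluster` (client + cache) per kubeconfig and register it with the existing manager, so all caches are started together. This is only a sketch under assumptions, not homer-operator code: the `addRemoteClusters` helper, the passed-in `refs`, and the flux-style `value.yaml` Secret key are hypothetical.

```go
// Hypothetical sketch: one controller-runtime Cluster per kubeconfig Secret,
// so the existing reconcilers could also discover resources on remote clusters.
package discovery

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cluster"
)

// addRemoteClusters resolves each kubeconfig Secret and registers a
// cluster.Cluster with the manager, which then starts and caches all of
// them alongside the local cluster.
func addRemoteClusters(ctx context.Context, mgr ctrl.Manager, refs []types.NamespacedName) ([]cluster.Cluster, error) {
	var clusters []cluster.Cluster
	for _, ref := range refs {
		var secret corev1.Secret
		// GetAPIReader avoids depending on the (not yet started) cache.
		if err := mgr.GetAPIReader().Get(ctx, ref, &secret); err != nil {
			return nil, err
		}
		// The "value.yaml" key follows the flux secretRef convention (assumption).
		cfg, err := clientcmd.RESTConfigFromKubeConfig(secret.Data["value.yaml"])
		if err != nil {
			return nil, err
		}
		remote, err := cluster.New(cfg)
		if err != nil {
			return nil, err
		}
		// The manager starts the remote cluster's cache alongside its own.
		if err := mgr.Add(remote); err != nil {
			return nil, err
		}
		clusters = append(clusters, remote)
	}
	return clusters, nil
}
```

Each remote cluster would then get its own `GenericResourceReconciler` built from `remote.GetClient()`, with watches wired against `remote.GetCache()`; I've left that wiring out since the exact watch API (e.g. `source.Kind`) varies across controller-runtime versions.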