feat: make Kubernetes webhook server port configurable #5769

audip wants to merge 1 commit into akuity:main
Conversation
The webhook server port was hardcoded to 9443 in the Go binary, Deployment template, and Service template. This made it impossible to change without forking when the default port conflicts with other services on the same nodes. Add a `WEBHOOK_SERVER_PORT` env var (default `"9443"`) read by the Go binary and a `webhooksServer.port` Helm value that flows through the ConfigMap, Deployment containerPort, and Service targetPort. Fully backward compatible: existing installations see no change.

Co-authored-by: Cursor <cursoragent@cursor.com>
Signed-off-by: Aditya Purandare <aditya.p1993@hotmail.com>
The motivation cited does not add up. It listens on 9443 in its own pod, separate from any other webhook servers you may have running. There is no competition for the port.
Hi, thanks for the quick review. You're absolutely right that under normal circumstances there's no port contention, since the webhook server runs in its own Pod network namespace. However, in our environment (EKS with Cilium CNI), we run the webhook server with `hostNetwork: true`. In that configuration, the container shares the node's network namespace, so ports must be unique across all host-networked Pods on the same node. Since 9443 is a commonly used webhook port (many operators default to it), we can encounter real port conflicts at the node level. In that setup, the port is no longer isolated per Pod; it becomes a host-level binding, which is why configurability becomes necessary. This PR makes that port configurable.
Even if `hostNetwork: true` is not a common deployment mode, making the port configurable keeps the chart and binary more flexible, with minimal complexity and no backward compatibility impact. Several other open-source projects likewise allow the webhook server port to be configured.
Happy to clarify further or adjust the approach if you'd prefer a different configuration mechanism.
Thanks for the additional context @audip. That clarifies a lot. I'm afraid I don't know much about Cilium, so excuse me if this question seems remedial. Is it common with Cilium to run webhook servers with host networking? Is this done for many other workloads as well? Or is it something you're doing specifically for webhook servers? Just trying to gather more information about how widespread an issue this may be and whether it might justify extending this same degree of port configurability to other Kargo components.
Great question @krancour. Cilium itself doesn't force host networking; that's a choice made in some environments. This mostly affects admission webhook servers specifically, since they're the components the API server must reach synchronously. Regular controllers and app workloads use normal Pod networking and don't have this issue. The port conflict problem is real and common: port 9443 is a popular webhook default shared by cert-manager, OPA Gatekeeper, Kyverno, and others. When multiple webhook servers run with `hostNetwork: true` on the same node, they compete for the same host port.
Fix for #5768
Summary
The webhook server port is hardcoded to `9443` in the Go binary, Deployment template, and Service template, making it impossible to change without forking. This PR makes it configurable via a new `webhooksServer.port` Helm value.

Changes

- `cmd/controlplane/kubernetes_webhooks.go`: Added a `WebhookServerPort` field to the options struct. Reads the `WEBHOOK_SERVER_PORT` env var (default `"9443"`) in `complete()`, consistent with how `KUBE_API_QPS`, `KUBE_API_BURST`, and `METRICS_BIND_ADDRESS` are handled. Uses the field in `webhook.Options{Port: ...}` instead of the hardcoded value.
- `charts/kargo/values.yaml`: Added `webhooksServer.port: 9443`.
- `charts/kargo/templates/kubernetes-webhooks-server/configmap.yaml`: Added a `WEBHOOK_SERVER_PORT` entry populated from the new Helm value.
- `charts/kargo/templates/kubernetes-webhooks-server/deployment.yaml`: Changed `containerPort: 9443` to `containerPort: {{ .Values.webhooksServer.port }}`.
- `charts/kargo/templates/kubernetes-webhooks-server/service.yaml`: Changed `targetPort: 9443` to `targetPort: {{ .Values.webhooksServer.port }}`.

Motivation
When running Kargo alongside other operators that also bind their webhook servers to port 9443, there is no way to resolve the conflict. This change lets operators set a custom port (e.g. `10288`) via Helm values while remaining fully backward compatible: existing installations default to `9443` with no change in behavior.

Test plan
- `go build ./cmd/controlplane/` compiles cleanly (confirmed locally)
- Deploy with `webhooksServer.port: 10288` and confirm the webhook server listens on 10288, the Service targetPort is 10288, and webhook admission calls succeed through the Service
- `helm template` with default values produces identical output to current main (except for the new ConfigMap entry)
- `helm template` with `webhooksServer.port: 10288` produces the correct containerPort and targetPort
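For reference, the override described in the test plan would look roughly like this in a user's values file. This is a sketch; only `webhooksServer.port` comes from this PR, and any surrounding keys shown are illustrative.

```yaml
# User values override (illustrative)
webhooksServer:
  port: 10288   # default is 9443
```

Rendering the chart with this value would flow through to the `WEBHOOK_SERVER_PORT` ConfigMap entry, the Deployment's containerPort, and the Service's targetPort, per the Changes list above.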