feat: make Kubernetes webhook server port configurable#5769

Open
audip wants to merge 1 commit into akuity:main from audip:feat/configurable-webhook-server-port
Conversation

audip commented Feb 20, 2026

Fix for #5768

Summary

The webhook server port is hardcoded to 9443 in the Go binary, Deployment template, and Service template, making it impossible to change without forking. This PR makes it configurable via a new webhooksServer.port Helm value.

Changes

  • cmd/controlplane/kubernetes_webhooks.go: Added WebhookServerPort field to the options struct. Reads WEBHOOK_SERVER_PORT env var (default "9443") in complete(), consistent with how KUBE_API_QPS, KUBE_API_BURST, and METRICS_BIND_ADDRESS are handled. Uses the field in webhook.Options{Port: ...} instead of the hardcoded value.
  • charts/kargo/values.yaml: Added webhooksServer.port: 9443.
  • charts/kargo/templates/kubernetes-webhooks-server/configmap.yaml: Added WEBHOOK_SERVER_PORT entry from the new Helm value.
  • charts/kargo/templates/kubernetes-webhooks-server/deployment.yaml: Changed containerPort: 9443 to containerPort: {{ .Values.webhooksServer.port }}.
  • charts/kargo/templates/kubernetes-webhooks-server/service.yaml: Changed targetPort: 9443 to targetPort: {{ .Values.webhooksServer.port }}.

Motivation

When running Kargo alongside other operators that also bind their webhook servers to port 9443, there is no way to resolve the conflict. This change lets operators set a custom port (e.g. 10288) via Helm values while remaining fully backward compatible — existing installations default to 9443 with no change in behavior.

Test plan

  • Verify go build ./cmd/controlplane/ compiles cleanly (confirmed locally)
  • Deploy with default values and confirm webhook server listens on 9443 (no behavioral change)
  • Deploy with webhooksServer.port: 10288 and confirm webhook server listens on 10288, Service targetPort is 10288, and webhook admission calls succeed through the Service
  • helm template with default values produces identical output to current main (except for the new ConfigMap entry)
  • helm template with webhooksServer.port: 10288 produces correct containerPort and targetPort

audip requested a review from a team as a code owner on February 20, 2026 20:01
netlify bot commented Feb 20, 2026

Deploy Preview for docs-kargo-io ready!

🔨 Latest commit: 9326d65
🔍 Latest deploy log: https://app.netlify.com/projects/docs-kargo-io/deploys/6998bed1defffa0008ec91d9
😎 Deploy Preview: https://deploy-preview-5769.docs.kargo.io

audip force-pushed the feat/configurable-webhook-server-port branch from 358a321 to 2fb2d4e on February 20, 2026 20:05
The webhook server port was hardcoded to 9443 in the Go binary,
Deployment template, and Service template. This makes it impossible to
change without forking when the default port conflicts with other
services on the same nodes.

Add a WEBHOOK_SERVER_PORT env var (default "9443") read by the Go binary
and a webhooksServer.port Helm value that flows through the ConfigMap,
Deployment containerPort, and Service targetPort. Fully backward
compatible — existing installations see no change.

Co-authored-by: Cursor <cursoragent@cursor.com>
Signed-off-by: Aditya Purandare <aditya.p1993@hotmail.com>
audip force-pushed the feat/configurable-webhook-server-port branch from 2fb2d4e to 9326d65 on February 20, 2026 20:06
krancour (Member) commented

The motivation cited does not add up. It listens on 9443 in its own pod separate from any other webhook servers you may have running. There is no competition for the port.

audip (Author) commented Feb 23, 2026

> The motivation cited does not add up. It listens on 9443 in its own pod separate from any other webhook servers you may have running. There is no competition for the port.

Hi, thanks for the quick review.

You're absolutely right that under normal circumstances there’s no port contention since the webhook server runs in its own Pod network namespace.

However, in our environment (EKS with Cilium CNI), we run the webhook server with hostNetwork: true. In that configuration, the container shares the node’s network namespace, so ports must be unique across all host-networked Pods on the same node. Since 9443 is a commonly used webhook port (many operators default to it), we can encounter real port conflicts at the node level.

In that setup, the port is no longer isolated per Pod — it becomes a host-level binding — which is why configurability becomes necessary.

This PR:

  • Does not change the default behavior (still 9443).
  • Follows the existing pattern used for other configurable server settings (e.g., METRICS_BIND_ADDRESS).
  • Only introduces flexibility for environments that require non-default networking modes.

Even if hostNetwork is not a common deployment mode, making the port configurable keeps the chart and binary more flexible, with minimal added complexity and no backward-compatibility impact. Here are some open-source projects that make their webhook server port configurable:

  • github-action-runners helm chart
  • trust-manager helm chart
  • among many others

Happy to clarify further or adjust the approach if you'd prefer a different configuration mechanism.

krancour (Member) commented
Thanks for the additional context, @audip. That clarifies a lot.

I'm afraid I don't know much about Cilium, so excuse me if this question seems remedial. Is it common with Cilium to run webhook servers with host networking?

Is this done for many other workloads as well? Or is it something you're doing specifically for webhook servers?

Just trying to gather more information about how widespread an issue this may be and whether it might justify extending this same degree of port configurability to other Kargo components.

audip (Author) commented Feb 24, 2026

Great question, @krancour. Cilium itself doesn't force hostNetwork on other workloads, but when running Cilium in kube-proxy replacement mode (a popular mode for EKS with Cilium), admission webhooks often need hostNetwork: true to be reachable during node bootstrap, before Cilium has fully initialized pod networking. It's a chicken-and-egg problem: the API server needs to call webhooks for pod admission, but pod networking isn't ready yet.

This mostly affects admission webhook servers specifically, since they're the components the API server must reach synchronously. Regular controllers and app workloads use normal pod networking and don't have this issue.

The port conflict problem is real and common: port 9443 is a popular webhook default shared by cert-manager, OPA Gatekeeper, Kyverno, and others. When multiple webhook servers run with hostNetwork on the same node, they collide. This is well-documented across the ecosystem — for example, cert-manager and Kyverno both make their webhook port configurable for exactly this reason.

