Codecov Report ❌ Patch coverage is … Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main   #1670    +/-   ##
==========================================
- Coverage   80.40%  74.30%   -6.11%
==========================================
  Files         127     170      +43
  Lines       16411   21979    +5568
==========================================
+ Hits        13196   16332    +3136
- Misses       3215    5433    +2218
- Partials        0     214     +214
```
Flags with carried forward coverage won't be shown.
Force-pushed from f7a9e7d to 9c35298 (compare).
Add three optional port fields to PolicyServerSpec: WebhookPort, ReadinessProbePort, and MetricsPort. These use *int32 pointer types so that nil means "use the default" and omitempty keeps the serialized CRD backward-compatible with existing clusters. Helper methods on PolicyServerSpec resolve each field to its default (8443, 8081, 8080 respectively) when not explicitly set by the user.

Admission-time validation rejects any PolicyServer whose ports collide with each other (e.g. webhookPort == metricsPort), catching configuration mistakes before they reach the controller.

The test factories are extended with builder methods (WithWebhookPort, WithReadinessProbePort, WithMetricsPort) so that both webhook and controller tests can easily construct PolicyServer objects with custom port combinations.

Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Assisted-by: GitHub Copilot
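The defaulting and validation logic described in this commit can be sketched in a few lines. This is a minimal, self-contained approximation: the field names and defaults (8443, 8081, 8080) come from the commit message, but the struct and the `validatePorts` body are simplified stand-ins, not the real CRD types or webhook code.

```go
package main

import "fmt"

// PolicyServerSpec is a simplified stand-in for the real CRD type: nil
// pointers mean "use the default", mirroring the *int32/omitempty fields.
type PolicyServerSpec struct {
	WebhookPort        *int32
	ReadinessProbePort *int32
	MetricsPort        *int32
}

// resolve returns the user-set value, or the given default when nil.
func resolve(p *int32, def int32) int32 {
	if p != nil {
		return *p
	}
	return def
}

func (s *PolicyServerSpec) EffectiveWebhookPort() int32 { return resolve(s.WebhookPort, 8443) }
func (s *PolicyServerSpec) EffectiveReadinessProbePort() int32 {
	return resolve(s.ReadinessProbePort, 8081)
}
func (s *PolicyServerSpec) EffectiveMetricsPort() int32 { return resolve(s.MetricsPort, 8080) }

// validatePorts rejects specs whose effective ports collide with each
// other, in the spirit of the admission-time validation described above.
func validatePorts(s *PolicyServerSpec) error {
	seen := map[int32]string{}
	for name, port := range map[string]int32{
		"webhookPort":        s.EffectiveWebhookPort(),
		"readinessProbePort": s.EffectiveReadinessProbePort(),
		"metricsPort":        s.EffectiveMetricsPort(),
	} {
		if other, dup := seen[port]; dup {
			return fmt.Errorf("%s and %s both use port %d", other, name, port)
		}
		seen[port] = name
	}
	return nil
}

func main() {
	// Defaults (8443/8081/8080) never collide.
	fmt.Println(validatePorts(&PolicyServerSpec{})) // → <nil>

	// webhookPort == metricsPort is rejected.
	q := int32(9000)
	fmt.Println(validatePorts(&PolicyServerSpec{WebhookPort: &q, MetricsPort: &q}))
}
```

Because nil means "unset", an upgraded cluster with existing PolicyServer objects keeps its current behavior without any CRD migration.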
When the --host-network flag is enabled, the controller now sets hostNetwork and dnsPolicy on every PolicyServer Deployment and injects targeted pod anti-affinity rules to prevent port conflicts. Three layers of anti-affinity are computed: replicas of the same PolicyServer are forbidden from sharing a node, the controller pod is kept apart from PolicyServers whose ports overlap with the controller's webhook, probe, or metrics ports, and distinct PolicyServer instances that share any port are separated from each other. User-provided affinity on the PolicyServer spec fully replaces the auto-generated rules so that operators retain full control.

New CLI flags (--webhook-server-port, --health-probe-bind-address, --metrics-bind-address) expose the controller's own ports and are forwarded to the reconciler so it can compare them against each PolicyServer's port set. The deployment builder now reads the CRD port fields added in the previous commit and passes them through to the policy-server container as environment variables, service targetPorts, and readiness probe configuration.

A convenience Makefile target (test-all) chains unit, Helm, and E2E tests for local development.

Assisted-by: GitHub Copilot
Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
The kubewarden-controller chart gains a hostNetwork toggle and a ports block (webhook, healthProbe, metrics) that feed into both the Deployment template and the Service definitions, so every port is configurable from values.yaml rather than hardcoded. A new "kubewarden-controller.effectiveAffinity" helper injects a required podAntiAffinity rule that prevents two controller replicas from landing on the same node when hostNetwork is enabled, avoiding port collisions on the host interface. When the user supplies their own affinity, the helper uses it verbatim, preserving full operator control.

The kubewarden-defaults chart is extended in the same way: the default PolicyServer template now passes webhookPort, readinessProbePort, and metricsPort through to the CRD when set, and its values.yaml documents the fields with commented-out examples and their defaults.

Both charts include JSON schema additions and Helm unit tests covering the new values, hostNetwork toggling, anti-affinity injection, and service port rendering.

Assisted-by: GitHub Copilot
Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Force-pushed from 9c35298 to 6089afc (compare).
```go
// mergeAffinityWithHostNetworkAntiAffinity builds the effective Affinity for a
// PolicyServer Deployment when hostNetwork is enabled. It injects targeted
// podAntiAffinity rules to prevent host-port conflicts:
//
//   - A required rule always prevents replicas of the same PolicyServer from
//     landing on the same node (matched by kubewarden/policy-server=<name>).
//   - A required rule prevents co-location with the controller pod only when
//     the PolicyServer's effective ports overlap with the controller's ports.
//   - A required rule prevents co-location with each other PolicyServer whose
//     effective ports overlap with this one.
//
// The user-supplied podAntiAffinity (from spec.affinity) takes full precedence:
// if it is non-nil it replaces all auto-generated rules. Other affinity sections
// (podAffinity, nodeAffinity) are carried over unchanged so they remain additive.
func mergeAffinityWithHostNetworkAntiAffinity(
	userAffinity *corev1.Affinity,
	policyServerName string,
	psPorts []int32,
	controllerPorts []int32,
	conflictingPolicyServerNames []string,
) *corev1.Affinity {
	// Always prevent same-PS replicas from landing on the same node.
	antiAffinityTerms := []corev1.PodAffinityTerm{
		{
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					constants.PolicyServerLabelKey: policyServerName,
				},
			},
			TopologyKey: "kubernetes.io/hostname",
		},
	}

	// Prevent co-location with the controller when ports overlap.
	if portsOverlap(psPorts, controllerPorts) {
		antiAffinityTerms = append(antiAffinityTerms, corev1.PodAffinityTerm{
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					constants.ComponentLabelKey: "controller",
					constants.PartOfLabelKey:    constants.PartOfLabelValue,
				},
			},
			TopologyKey: "kubernetes.io/hostname",
		})
	}

	// Prevent co-location with each conflicting PolicyServer.
	for _, name := range conflictingPolicyServerNames {
		antiAffinityTerms = append(antiAffinityTerms, corev1.PodAffinityTerm{
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					constants.PolicyServerLabelKey: name,
				},
			},
			TopologyKey: "kubernetes.io/hostname",
		})
	}

	effective := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: antiAffinityTerms,
		},
	}

	if userAffinity == nil {
		return effective
	}

	if userAffinity.PodAntiAffinity != nil {
		effective.PodAntiAffinity = userAffinity.PodAntiAffinity
	}
	if userAffinity.PodAffinity != nil {
		effective.PodAffinity = userAffinity.PodAffinity
	}
	if userAffinity.NodeAffinity != nil {
		effective.NodeAffinity = userAffinity.NodeAffinity
	}

	return effective
}
```
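`portsOverlap` is called above but not shown in this hunk. A plausible stdlib-only implementation, assuming the signature implied by the call sites (the body is a guess, not necessarily the PR's actual code):

```go
package main

import "fmt"

// portsOverlap reports whether the two port sets share at least one value.
// It builds a set from the first slice and probes it with the second, so it
// runs in O(len(a)+len(b)) regardless of ordering or duplicates.
func portsOverlap(a, b []int32) bool {
	set := make(map[int32]struct{}, len(a))
	for _, p := range a {
		set[p] = struct{}{}
	}
	for _, p := range b {
		if _, ok := set[p]; ok {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(portsOverlap([]int32{8443, 8080}, []int32{9443, 8080})) // → true
	fmt.Println(portsOverlap([]int32{8443}, []int32{9443}))             // → false
}
```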
I've decided to change the controller to do this instead of asking users to review all their policy servers, because I believe it is a better UX for them. Furthermore, it will simplify upgrades as well. Let me know what you think about it.
@kubewarden/kubewarden-developers I'll fix the e2e test asap. But I would like to ask you for a first round of review to get your feedback about the controller changes that I made to improve the UX. Please read the description to get the context of the changes.
Pull request overview
Enables running Kubewarden components in hostNetwork mode and adds configurable ports to avoid node-level port collisions, with scheduling safeguards via injected pod anti-affinity.
Changes:
- Add controller/PolicyServer hostNetwork support (including dnsPolicy: ClusterFirstWithHostNet) and injected required podAntiAffinity rules to prevent host-port collisions.
- Add optional webhookPort, readinessProbePort, metricsPort fields to the PolicyServer API/CRD plus Helm values/templates to configure them.
- Extend Helm charts and tests to support hostNetwork/port customization; add webhook validation for port conflicts.
Reviewed changes
Copilot reviewed 23 out of 23 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| internal/controller/policyserver_controller_test.go | Extends controller tests to assert default (non-hostNetwork) pod spec fields. |
| internal/controller/policyserver_controller_service.go | Switches Service target ports to PolicyServer “effective” ports; removes env-var-based metrics port override. |
| internal/controller/policyserver_controller_deployment.go | Adds hostNetwork pod spec handling, anti-affinity injection, conflict computation, and uses effective ports for webhook/readiness env/probes. |
| internal/controller/policyserver_controller.go | Adds reconciler fields for hostNetwork and controller port values used in conflict detection. |
| cmd/controller/main.go | Adds --host-network and --webhook-server-port, parses/validates controller ports for conflict detection, wires webhook server port into manager options. |
| charts/kubewarden-defaults/values.yaml | Documents new PolicyServer port override values. |
| charts/kubewarden-defaults/values.schema.json | Adds JSON-schema validation for PolicyServer port override values. |
| charts/kubewarden-defaults/tests/host_network_test.yaml | Adds helm-unittest coverage for default/overridden PolicyServer port fields in rendered CRs. |
| charts/kubewarden-defaults/templates/policyserver-default.yaml | Conditionally renders PolicyServer webhookPort/readinessProbePort/metricsPort fields. |
| charts/kubewarden-crds/templates/policies.kubewarden.io_policyservers.yaml | Adds CRD schema for the new PolicyServer port fields. |
| charts/kubewarden-controller/values.yaml | Adds controller hostNetwork and configurable ports values. |
| charts/kubewarden-controller/values.schema.json | Adds schema validation for hostNetwork and controller ports. |
| charts/kubewarden-controller/tests/service_ports_test.yaml | Adds helm-unittest coverage for service targetPort wiring from ports.*. |
| charts/kubewarden-controller/tests/host_network_test.yaml | Adds helm-unittest coverage for controller hostNetwork/dnsPolicy/anti-affinity and custom ports args/ports. |
| charts/kubewarden-controller/templates/service.yaml | Uses configurable controller ports for service targetPorts. |
| charts/kubewarden-controller/templates/deployment.yaml | Adds hostNetwork/dnsPolicy support; passes configurable ports via args and uses effective affinity helper. |
| charts/kubewarden-controller/templates/_helpers.tpl | Adds helper to inject required controller podAntiAffinity when hostNetwork is enabled. |
| api/policies/v1/zz_generated.deepcopy.go | Regenerates deepcopy to include new PolicyServer builder/spec port fields. |
| api/policies/v1/policyserver_webhook_test.go | Adds unit tests for port-conflict validation. |
| api/policies/v1/policyserver_webhook.go | Adds validatePorts call to PolicyServer validation webhook. |
| api/policies/v1/policyserver_types.go | Adds new port fields and “effective port” helpers on PolicyServer. |
| api/policies/v1/factories.go | Extends PolicyServer test factory to set new port fields. |
| Makefile | Adds a test-all target aggregating unit, helm-unittest, and e2e tests. |
Comments suppressed due to low confidence (1)
internal/controller/policyserver_controller_test.go:201
- There's no controller-level test coverage for the new port fields and hostNetwork behavior. In particular, a test that sets spec.webhookPort/spec.readinessProbePort/spec.metricsPort should assert that the Deployment and Service use the effective ports (and that hostNetwork triggers the expected dnsPolicy/anti-affinity injection). This would also catch regressions where a configurable port is applied to the Service but not to the container process.
```go
By("checking the deployment spec")
Expect(deployment.Spec.Template.Spec).To(MatchFields(IgnoreExtras, Fields{
	"Tolerations": BeEmpty(),
	"HostNetwork": BeFalse(),
	"DNSPolicy":   Not(Equal(corev1.DNSClusterFirstWithHostNet)),
	"SecurityContext": PointTo(MatchFields(IgnoreExtras, Fields{
		"SELinuxOptions":      BeNil(),
		"WindowsOptions":      BeNil(),
		"RunAsUser":           BeNil(),
		"RunAsGroup":          BeNil(),
		"RunAsNonRoot":        BeNil(),
		"SupplementalGroups":  BeNil(),
		"FSGroup":             BeNil(),
		"Sysctls":             BeNil(),
		"FSGroupChangePolicy": BeNil(),
		"SeccompProfile": PointTo(MatchAllFields(Fields{
			"Type":             Equal(corev1.SeccompProfileTypeRuntimeDefault),
			"LocalhostProfile": BeNil(),
		})),
	})),
	"Affinity": PointTo(MatchAllFields(Fields{
		"NodeAffinity":    BeNil(),
		"PodAffinity":     BeNil(),
		"PodAntiAffinity": BeNil(),
	})),
}))
```
```diff
 if r.MetricsEnabled {
 	svc.Spec.Ports = append(
 		svc.Spec.Ports,
 		corev1.ServicePort{
 			Name:     "metrics",
-			Port:     getMetricsPort(),
+			Port:     policyServer.EffectiveMetricsPort(),
 			Protocol: corev1.ProtocolTCP,
 		},
 	)
```
metricsPort is now configurable via policyServer.EffectiveMetricsPort(), but the Service does not set targetPort for the metrics entry. Because targetPort defaults to port, any non-default metricsPort will route to that same port on the pod, even though the PolicyServer Deployment currently doesn't configure the policy-server container to listen on the effective metrics port. This will break metrics scraping (and makes hostNetwork conflict detection inaccurate). Set an explicit targetPort and ensure the Deployment configures the policy-server metrics listener to bind to the same effective port.
```diff
-	Value: strconv.Itoa(constants.PolicyServerReadinessProbePort),
+	Value: strconv.Itoa(int(policyServer.EffectiveReadinessProbePort())),
 },
 {
```
The Deployment container env sets webhook and readiness ports, but there is no configuration passed to the policy-server container for the (now configurable) metrics port. If a user sets spec.metricsPort, the controller will update the Service/conflict detection to use that port, but the container will still expose metrics on its built-in default (8080), causing mismatched routing and ineffective hostNetwork anti-affinity. Add wiring so the policy-server process actually binds its metrics endpoint to policyServer.EffectiveMetricsPort() (for example via an existing supported env var/arg).
Suggested change:

```go
{
	Name:  "KUBEWARDEN_METRICS_PORT",
	Value: strconv.Itoa(int(policyServer.EffectiveMetricsPort())),
},
{
```
```go
import (
	"context"
	"errors"
	"fmt"
	"os"
	"strconv"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	policiesv1 "github.com/kubewarden/kubewarden-controller/api/policies/v1"
	"github.com/kubewarden/kubewarden-controller/internal/constants"
)
```
This change removes support for overriding the PolicyServer metrics Service port via the KUBEWARDEN_POLICY_SERVER_SERVICES_METRICS_PORT environment variable. The controller Helm chart still sets that env var in some telemetry configurations; after this change it becomes a no-op. Update the chart/docs accordingly (or keep backwards-compat support if that knob is still expected to work).
JFYI: I'm aware of that. This is one of my TODO list item
```go
config.ControllerHealthProbePort = parsePortFromAddress(mgrOpts.ProbeAddr)
config.ControllerMetricsPort = parsePortFromAddress(mgrOpts.MetricsAddr)
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

if config.ControllerHealthProbePort == 0 {
	setupLog.Error(fmt.Errorf("cannot extract port from %q", mgrOpts.ProbeAddr), "invalid --health-probe-bind-address")
	retcode = 1
	return
}
if config.ControllerMetricsPort == 0 {
	setupLog.Error(fmt.Errorf("cannot extract port from %q", mgrOpts.MetricsAddr), "invalid --metrics-bind-address")
	retcode = 1
	return
}
```
The new parsePortFromAddress validation rejects --metrics-bind-address / --health-probe-bind-address values that controller-runtime commonly accepts (e.g. using "0" to disable a listener, or other non host:port forms). This is a behavior change that can prevent users from disabling metrics/probes or from using previously-valid bind addresses. Consider explicitly allowing the documented disable value (and/or keeping validation conditional on --host-network), and treating a disabled listener as "no port" for conflict detection.
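A sketch of what this suggests, assuming a stdlib-only `parsePortFromAddress` (the function name comes from the diff; this body and the error-returning signature are hypothetical): treat controller-runtime's documented disable value `"0"` as "no port" rather than an error, so disabled listeners are simply excluded from conflict detection.

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// parsePortFromAddress extracts the TCP port from a bind address such as
// ":8081" or "127.0.0.1:8080". It returns 0 with no error for the
// controller-runtime disable value "0" (meaning "listener disabled, no port
// to check for conflicts"), and an error only for addresses it cannot parse.
func parsePortFromAddress(addr string) (int32, error) {
	if addr == "" || addr == "0" {
		return 0, nil // listener disabled: exclude from conflict detection
	}
	_, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		return 0, fmt.Errorf("cannot extract port from %q: %w", addr, err)
	}
	port, err := strconv.ParseUint(portStr, 10, 16)
	if err != nil {
		return 0, fmt.Errorf("cannot extract port from %q: %w", addr, err)
	}
	return int32(port), nil
}

func main() {
	for _, addr := range []string{":8081", "127.0.0.1:8080", "0", "bogus"} {
		port, err := parsePortFromAddress(addr)
		fmt.Println(addr, port, err != nil)
	}
}
```

With this shape, a disabled listener yields port 0 and no error, so the caller can skip it when comparing against PolicyServer ports instead of refusing to start.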
```yaml
# Important when hostNetwork is enabled or when running multiple PolicyServers
# on the same node, as they must not conflict.
```
The comment suggests port conflicts are also important "when running multiple PolicyServers on the same node". With pod networking (hostNetwork disabled), multiple pods can bind the same container ports on the same node without conflicts; port collisions only happen when hostNetwork (or hostPorts) are used. Consider rewording to avoid implying conflicts in the default (pod-network) case.
Suggested change:

```diff
-# Important when hostNetwork is enabled or when running multiple PolicyServers
-# on the same node, as they must not conflict.
+# Important when using hostNetwork or hostPorts, because the selected
+# ports are exposed on the node and must not conflict.
```
viccuad left a comment:
> Three new optional fields (webhookPort, readinessProbePort, metricsPort) are added to the PolicyServer CRD
I would prefer to have env vars and keep CRD changes to the minimum, as they are more difficult to maintain. In addition, we expect only a small percentage of users to use this feature, hence we can keep the CRDs simpler for other, more generalist features.
@kubewarden/kubewarden-developers As I said during the daily, while I was testing this change I started to think that all this logic to inject the anti-affinity rules is too much. I don't think the controller should do that. I expect that users running with host network are aware of the additional configuration needed to avoid port conflicts. Therefore, I'll change the approach and rely on user configuration to ensure that the ports will not conflict. Because of that, I'm marking this as blocked while I prepare the simplified version of this feature.
Closing in favor of #1682
Description
This PR enables hostNetwork support for Kubewarden. When the hostNetwork toggle is enabled in the Helm chart, both the controller and PolicyServer pods bind directly to the host's network interfaces instead of using the pod network. Three new optional fields (webhookPort, readinessProbePort, metricsPort) are added to the PolicyServer CRD. The controller Helm chart gains a corresponding ports block for its own webhook, health probe, and metrics endpoints.
Configurable ports are required because with hostNetwork enabled, every port is exposed on the node's IP address rather than on an isolated pod IP. If another workload on the cluster already occupies a port on that host, the pod will fail to start. By allowing users to customize each port, they can resolve conflicts with other host-network workloads running in their cluster that Kubewarden has no control over.
Automatic anti-affinity injection is necessary because configurable ports alone cannot prevent Kubewarden's own components from colliding with each other. Two replicas of the same PolicyServer use identical ports, so they must never land on the same node. The controller reconciler computes three layers of required podAntiAffinity rules when hostNetwork is active: same-PolicyServer replicas are forbidden from co-locating, the controller and PolicyServer pods are separated when their ports overlap, and different PolicyServer instances are kept apart when they share any port value. These rules use requiredDuringSchedulingIgnoredDuringExecution to guarantee the scheduler will never place conflicting pods together. If a user provides their own affinity configuration on the PolicyServer spec, it takes full precedence.
Fix #1597