chore(charts): controller-gen output file into charts #1520
jvanz wants to merge 11 commits into kubewarden:main
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.
@@ Coverage Diff @@
## main #1520 +/- ##
==========================================
- Coverage 80.02% 75.01% -5.01%
==========================================
Files 127 171 +44
Lines 16573 21075 +4502
==========================================
+ Hits 13262 15810 +2548
- Misses 3311 5051 +1740
- Partials 0 214 +214
Force-pushed from 91c845d to 4e819cd
charts/kubewarden-controller/templates/controller-rbac-extras.yaml
Also, I got a bit confused by the names. I would like, if possible, to end up with something like sbomscanner. In our case we would have 4 files:

I'm fine with splitting the roles and role bindings. I actually did that while I was working on this PR, but I changed it because I would like to avoid two role files.
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  - validatingwebhookconfigurations
  verbs:
  - create
  - delete
  - list
  - patch
  - watch
- apiGroups:
Good catch from Copilot: we have to update the annotations.
We also have to understand why all the tests are green despite this change.
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  - validatingwebhookconfigurations
  verbs:
  - create
  - delete
  - list
  - patch
  - watch
- apiGroups:
I've tasked an AI agent to find out why the tests are green. This is the answer I got, which seems reasonable to me:
Unit Tests — system:masters Bypasses All RBAC
envtest starts a real kube-apiserver with --authorization-mode=RBAC, so RBAC is technically enabled. However, controller-runtime's testEnv.Start() unconditionally returns a restConfig authenticated as a member of the system:masters group (envtest/server.go):
adminInfo := User{Name: "admin", Groups: []string{"system:masters"}}
system:masters is a hardcoded Kubernetes superuser group that bypasses the RBAC authorizer entirely — not via a ClusterRoleBinding, but at the authorizer level itself. No RBAC policy, no matter how restrictive, is evaluated for this identity. The manager and all reconcilers inherit these credentials, so every r.Get(...) call silently succeeds regardless of what the RBAC manifest says.
Additionally, policy_subreconciler_webhook_test.go doesn't test the reconcile methods at all — it only tests the namespaceSelector() helper in isolation with no client involved, so even in principle there's no API call to be blocked.
E2E Tests — Test Harness Also Runs as system:masters
The e2e tests use a Kind cluster provisioned by sigs.k8s.io/e2e-framework. Kind's generated kubeconfig identifies the user as kubernetes-admin, which is also a member of system:masters. The entire test harness — namespace creation, Helm installs, assertion calls — runs under this identity, bypassing RBAC.
The controller's ServiceAccount does get the restricted Helm chart ClusterRole applied (the one missing get), but the tests only observe effects (e.g., "does a ValidatingWebhookConfiguration with this name now exist?"), not whether the controller reconciled without errors. They don't:
- Inspect the controller's logs for 403 errors
- Check the policy's .status conditions for reconcile failures
- Impersonate the controller's ServiceAccount to validate its permissions directly
I do agree we have to extend the e2e tests, but I do not agree with the remediation paths being proposed. I think we should understand how the GET operation is used, and check for changes that depend on it. I would not grep the logs.
This commit splits the rbac.yaml file into multiple files. That single file contained all the Role and ClusterRole definitions, as well as the bindings required by the controller and audit-scanner, which makes it difficult to automatically generate the manifest using controller-gen. Therefore, this commit splits it into three files: one storing all the RBAC configuration used by audit-scanner, and two keeping the controller configuration. `controller-rbac-roles` has the controller roles and `controller-rbac-bindings` stores the bindings of those roles. This facilitates a future change where the roles will be automatically generated by controller-gen. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Updates the controller-gen command used to generate the RBAC manifests into the charts/kubewarden-controller directory. This commit also adds the kubebuilder markers for the permissions that were missing from the controller roles, which had previously been added through manually created role definitions. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Add scripts/post-generate-manifests.sh to post-process controller-gen output. It renames CRD files from the default policies.kubewarden.io_RESOURCE.yaml format to RESOURCE.yaml directly in charts/kubewarden-crds/templates/. It also injects Helm template labels and annotations into each resource in controller-rbac-roles.yaml to match the conventions used by the rest of the chart. Update the Makefile manifests target to output CRDs directly into charts/kubewarden-crds/templates/ and to invoke the post-processing script automatically. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
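The renaming rule described above amounts to stripping controller-gen's group prefix. A minimal Go sketch (the real post-processing lives in a shell script; the function name and exact prefix handling are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// crdFileName maps controller-gen's default output name, e.g.
// "policies.kubewarden.io_policyservers.yaml", to the chart convention
// "policyservers.yaml" by dropping the API-group prefix.
func crdFileName(generated string) string {
	return strings.TrimPrefix(generated, "policies.kubewarden.io_")
}

func main() {
	fmt.Println(crdFileName("policies.kubewarden.io_policyservers.yaml"))
}
```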
Add charts/kubewarden-controller/tests/rbac_roles_test.yaml to verify that the controller-gen-generated controller-rbac-roles.yaml contains the expected Helm labels and annotations on both the ClusterRole and Role resources, and that additionalLabels and additionalAnnotations values are propagated correctly. Add charts/kubewarden-crds/tests/crds_test.yaml to verify that each of the five Kubewarden CRD files renders as a CustomResourceDefinition with the correct metadata.name and spec.group. Update the helm-unittest Makefile target to include the kubewarden-crds chart. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
This commit splits the audit-scanner-rbac.yaml file, which held all the audit-scanner RBAC configuration, into two files: one for the roles and another for the role bindings. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Force-pushed from 78bf871 to 930f204
})
Expect(err).NotTo(HaveOccurred())

// Create RBAC resources that mirror the production Helm chart permissions.
Here we are hardcoding the RBAC instead of making use of our RBAC files.
Instead of defining the structs by hand (which may drift over time), can we read from the file, do a decoder.Decode() into an Unstructured struct, and then apply it with k8sClient.Create(ctx, obj)?
In my first try I did load the resources from the Helm chart files, but that requires removing the template calls from the files, or rendering the files first. If we need to do that, I would add e2e tests instead of the ginkgo ones, but those are slower as well. Thus, I think hardcoding the role here can be a good balance between speed and the convenience of detecting issues early in the development process.
Argh, true, I forgot that our output is templated.
I'm ok not doing this, but maybe we should track it as tech debt; I'm scared of forgetting that the tests work this way and slipping regressions in the future.
I'm concerned about these structs going out of sync with the actual RBAC managed by the Helm chart.
Can't we leave things as they were and write a test inside the tests done with the e2e-framework? I don't care about them being slow; I'm more concerned about having out-of-sync data.
I'm convinced now on moving this test to <root>/e2e/. We can call helm template with the default values, to be able to consume the templated YAML.
Rename controller-rbac-extras.yaml to controller-rbac-rolebinding.yaml and extract the metrics-reader ClusterRole into controller-rbac-metrics-role.yaml. This makes each file's purpose immediately clear from its name. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Fix a little typo in the PolicyServer CRD description. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
The controller calls r.Get() on ValidatingWebhookConfiguration and MutatingWebhookConfiguration during reconciliation and deletion, but the kubebuilder RBAC markers were missing the 'get' verb. This would cause RBAC 'forbidden' errors at runtime. Add 'get' to the markers and regenerate the chart RBAC output. Assisted-by: Github Copilot Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
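The fixed marker would look roughly like this (a sketch; the exact reconciler file and marker layout in the repository are not shown in this PR):

```go
// controller-gen reads these marker comments to generate the ClusterRole
// rules, so `get` must now be listed alongside the other verbs.
//+kubebuilder:rbac:groups=admissionregistration.k8s.io,resources=mutatingwebhookconfigurations;validatingwebhookconfigurations,verbs=get;create;delete;list;patch;watch
```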
Helm 3 automatically deletes resources that were part of a previous release but are no longer in the new chart templates. The manual kubectl delete instructions were therefore unnecessary. Assisted-by: Github Copilot Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Adds a new CI step to verify that all the manifest files are up to date with the output of the controller-gen command. This ensures that if a user changes the markers or the Helm chart files, CI will detect the divergence before an invalid file is committed. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com> Assisted-by: GitHub Copilot
This commit changes the ginkgo tests to use a more restricted RBAC role in the client used by the manager where the reconcilers run. Therefore, we ensure that the credential in use is not the default one, which gives total access to the resources in the testing environment. Now the tests have two clients in use: one to perform the assertions, which still uses the system:masters credentials with full access to all resources, and another to be used by the reconcilers, which authenticates with limited-access credentials that should replicate the credentials available in a production environment. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com> Assisted-by: GitHub Copilot
jvanz
left a comment
@kubewarden/kubewarden-developers I've added the metrics-reader role again. While I was updating the telemetry documentation, removing the references to the Nginx ingress, I realized that it could be used by the sidecar to give access to the metrics endpoint via kube-rbac-proxy. Therefore, if we remove it, it may break the integration.
I'll test telemetry a little bit more with these changes.
As far as I can see, the metrics-reader role can be used by operators to give service accounts the permissions to read the metrics from the controller. The binding is missing by design. Just to ensure that we do not break the upgrade for any of our users: if we want to remove the role, I would mark it as deprecated and remove it in a later release.
viccuad
left a comment
Approving! Looking forward to this landing, great work.
I would like to track the tech debt on the hardcoding of the RBACs in the tests.
Description
Updates the controller-gen command used to generate the RBAC manifests into the charts/kubewarden-controller directory. This commit also adds the kubebuilder markers for the permissions that were missing from the controller roles, which had previously been added through manually created role definitions.
Fix #1483