Add Helm chart for kubernetes-nmstate operator #1456
sanjay7178 wants to merge 11 commits into nmstate:main from
Conversation
…CRDs, roles, service accounts, and updated documentation examples and node setup scripts. Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it.
Hi @sanjay7178. Thanks for your PR. PRs from untrusted users cannot be marked as trusted.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has not yet been approved; approval is needed from an approver in each of the affected files.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the deployability of the kubernetes-nmstate operator by introducing a comprehensive Helm chart. The chart streamlines the installation process, allowing users to easily deploy and configure the operator and its associated components, including CRDs, through standard Helm practices.

Highlights
Changelog
Activity
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Code Review
This pull request introduces a Helm chart for deploying the kubernetes-nmstate operator, which is a great addition for simplifying deployment. The overall structure of the chart is good and follows many Helm best practices. However, I've identified a few critical issues that need to be addressed before this can be merged. The most significant issue is with how Custom Resource Definitions (CRDs) are included, which will prevent the chart from being packaged or installed. Additionally, there are some security concerns related to overly permissive RBAC roles that should be tightened. I've provided specific comments and suggestions for each of these points in the review.
```yaml
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - events
  - namespaces
  - persistentvolumeclaims
  - pods
  - secrets
  - serviceaccounts
  - services
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  - nmstate.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - console.openshift.io
  resources:
  - consoleplugins
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  verbs:
  - create
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - '*'
- apiGroups:
  - operator.openshift.io
  resources:
  - consoles
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - rolebindings
  - roles
  verbs:
  - '*'
```
The ClusterRole grants overly permissive access by using wildcards (*) for verbs on many resources. This violates the principle of least privilege and introduces a security risk. The roles should only grant the specific permissions required by the operator.
Additionally, this ClusterRole appears to be missing permissions for validatingadmissionpolicies and validatingadmissionpolicybindings that are present in the source deploy/operator/role.yaml. This discrepancy could lead to runtime errors.
Please replace the wildcards with a specific list of required verbs for each resource.
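For example, the wildcard rule for the apps group could be tightened to an explicit verb list like the one below (a sketch only; the authoritative list of verbs the operator needs should be taken from deploy/operator/role.yaml):

```yaml
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
```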
```yaml
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/warn: privileged
```
Applying pod-security.kubernetes.io/enforce: privileged at the namespace level allows any pod in the namespace to run as privileged. This includes the operator deployment, which is designed to be unprivileged. While the handler daemonset likely requires privileged access, a more secure approach would be to apply Pod Security Standards at the pod level for the handler and use a more restrictive profile (like baseline) for the namespace by default. Since the handler is deployed by the operator, this might require changes in the operator logic to set the appropriate securityContext on the handler pods it creates. If that's not feasible, this is an acceptable compromise, but it's a security risk worth noting.
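If the handler's privileged requirements could be handled at the pod level, the namespace labels could default to the more restrictive profile, along these lines (a sketch under that assumption):

```yaml
metadata:
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
```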
```yaml
rules:
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - '*'
```
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
…ource Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
/gemini review
Code Review
This pull request introduces a Helm chart for the kubernetes-nmstate operator. The chart is well-structured and follows many Helm best practices, including configurable values, helper templates, and conditional resource creation. However, I've found a critical issue where a required CRD file is missing, which will cause installation to fail. I've also identified several areas for improvement, such as removing redundant RBAC permissions and pinning image versions to ensure deterministic deployments. Please see my detailed comments for suggestions.
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
/gemini review |
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Code Review
This pull request introduces a Helm chart for deploying the kubernetes-nmstate operator, which is a significant enhancement for users who prefer Helm for managing applications. The chart is well-structured, following common Helm best practices like using helper templates, providing a NOTES.txt, and making CRD installation optional. My review focuses on improving security by adhering to the principle of least privilege in RBAC resources, and enhancing maintainability and correctness in the chart's values and test configurations. I've suggested changes to pin image versions, correct the typing of values for environment variables, and restrict overly permissive RBAC rules.
```yaml
verbs:
- '*'
```
The use of a wildcard * for verbs grants excessive permissions to the operator. According to the principle of least privilege, permissions should be as restrictive as possible. Please replace the wildcard with a specific list of required verbs (e.g., get, list, watch, create, update, patch, delete). This should be applied to all rules in this ClusterRole that use a wildcard.
```yaml
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
```

```yaml
value: {{ .Values.operator.env.enableProfiler | quote }}
- name: PROFILER_PORT
  value: {{ .Values.operator.env.profilerPort | quote }}
```
The values for enableProfiler and profilerPort are defined as strings in values.yaml. However, the deployment template uses the quote function, which will result in double-quoted values in the final YAML (e.g., value: '"False"'). This is likely incorrect. To ensure the correct values are passed to the container, these should be typed as a boolean and a number in values.yaml.
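One way to fix this is to type the values natively in values.yaml so that the quote in the template produces a plain quoted string rather than a double-quoted one. A sketch, assuming the value names from the snippet above (the port number here is an illustrative placeholder, not the operator's actual default):

```yaml
# values.yaml
operator:
  env:
    enableProfiler: false   # boolean, not the string "False"
    profilerPort: 6060      # number, not a string
```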
```yaml
verbs:
- '*'
```
```yaml
restartPolicy: Never
containers:
- name: check-operator-running
  image: bitnami/kubectl:latest
```
```yaml
# IMAGE_REGISTRY / IMAGE_REPO / OPERATOR_IMAGE_NAME / OPERATOR_IMAGE_TAG
registry: quay.io
repository: nmstate/kubernetes-nmstate-operator
tag: latest
```
Using the :latest tag for default image versions is discouraged as it makes deployments non-deterministic and can cause unexpected upgrades. It's better to tie the default image tag to the chart's appVersion. Please remove the tag: latest line for both the operator (line 10) and handler (line 54) images. This will cause the image tags to default to the appVersion from Chart.yaml, as configured in _helpers.tpl.
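The fallback-to-appVersion pattern mentioned above is commonly expressed as a helper template along these lines (a sketch; the helper name is hypothetical, and the actual definition would live in the chart's _helpers.tpl):

```yaml
{{/* _helpers.tpl: fall back to the chart appVersion when no tag is set */}}
{{- define "kubernetes-nmstate.operatorImage" -}}
{{ .Values.operator.image.registry }}/{{ .Values.operator.image.repository }}:{{ .Values.operator.image.tag | default .Chart.AppVersion }}
{{- end }}
```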
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
mkowalski left a comment
I need to see some additional things before this can fly, but the general idea is good
- There is no mechanism to keep deployment YAMLs in sync. If I now go and modify deploy/handler/operator.yaml (which is the standard workflow in the project), there is nothing that will ensure the change also makes it to charts/kubernetes-nmstate/templates/deployment.yaml introduced by this PR.
This one is a big no. We have manifests which are the source of truth (e.g. deploy/handler/operator.yaml), and if there is some copy-paste to another place, it has to go through some orchestration and not be manual.
- I need to see at least some basic verification that the Helm chart in the repo is correct. If I now go and delete, for example, the file charts/kubernetes-nmstate/charts/crds/crds/nmstate.io_nmstates.yaml, the Helm chart will be very incorrect, but there is nothing here that will catch this.
Look at what we do for OLM manifests. We have a Makefile target which is responsible for trying to build a bundle from what's in the repository; if that fails, it means someone is trying to merge an invalid change. Something similar is needed here.
- Once those are solved, we will need to think about the mechanism of pushing the chart itself to some upstream repo. It's good that we can have it in-repo, but that alone does not help people consume the project. For OLM we have a GitHub Action which can push the bundle to OperatorHub. For Helm, a similar approach may be used.
At the current stage it looks like a one-off rather than something production ready.
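A minimal chart-verification target along the lines of the OLM bundle check could look like this (a sketch; the target name and chart path are assumptions, and it requires helm on the CI runner):

```makefile
# Makefile: fail CI if the chart no longer lints or renders
.PHONY: check-helm-chart
check-helm-chart:
	helm lint charts/kubernetes-nmstate
	helm template charts/kubernetes-nmstate > /dev/null
```

Wiring this target into the existing CI would catch a deleted or broken CRD file before merge.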
Currently we could have some symlink for CRDs. But symlinks pose risks (https://helm.sh/blog/2019/10/30/helm-symlink-security-notice/), so we could do it the other way around. A make target could verify the chart. Also we could have a GitHub Action using chart-releaser to publish to GitHub Pages / ArtifactHub.
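The chart-releaser workflow mentioned here is typically set up with helm/chart-releaser-action; a sketch follows (the workflow file name and trigger branch are assumptions):

```yaml
# .github/workflows/release-chart.yaml (hypothetical)
name: Release Helm chart
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # chart-releaser needs full history for tags
      - uses: helm/chart-releaser-action@v1
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
        with:
          charts_dir: charts
```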
Is this a BUG FIX or a FEATURE?:
/kind enhancement
What this PR does / why we need it:
This PR adds a Helm chart for the kubernetes-nmstate operator. Helm helps with deploying, updating, and managing the operator.
This Helm chart can be published to artifacthub.io for seamless usage.
Special notes for your reviewer:
This Issue #1410 proposes using a Helm configuration similar to the metallb repository's. CRDs would live in charts/crds/crds/; Helm's special crds/ directory guarantees they are installed and registered before any templates run, avoiding the NMState CR race condition on fresh installs. CRDs are skippable via --set crds.enabled=false for GitOps / cluster-admin managed environments. The NMState CR by default deploys the handler DaemonSet on all nodes automatically.

Release note:
Closes: #1410