Add support for optional node selector labels for kube-ovn OVS/OVN pods #5803
saeed-mcu wants to merge 1 commit into kubeovn:master
Conversation
Summary of Changes: Hello @saeed-mcu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request introduces a new Helm configuration option, `ovsNodesLabels`.
Code Review
This pull request adds a new Helm value ovsNodesLabels to allow specifying node selector labels for OVS/OVN pods. While the feature is useful, the implementation has a few critical issues. The default value for ovsNodesLabels in values.yaml is an empty list ([]), which will cause Helm to generate invalid Kubernetes manifests. Additionally, the way the labels are rendered in the DaemonSet templates is not robust and will also fail when the value is not provided or is an empty map. I've provided suggestions to fix these issues by changing the default value to an empty map ({}), updating the misleading comments, and using a with block in the templates for safer rendering.
```yaml
      {{- end }}
      nodeSelector:
        kubernetes.io/os: "linux"
        {{ .Values.ovsNodesLabels | toYaml | nindent 8 }}
```

The current implementation for adding `ovsNodesLabels` is not robust and will generate invalid YAML, causing deployment failure. If `ovsNodesLabels` is not provided, it will be `nil`, and `toYaml` will render `null`, which is invalid in this context. Using a `with` block ensures that labels are only rendered if the `ovsNodesLabels` map is not empty, preventing this error.

Suggested change:

```yaml
        {{- with .Values.ovsNodesLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
```

The same comment and suggested fix apply to the two other DaemonSet templates that render `{{ .Values.ovsNodesLabels | toYaml | nindent 8 }}` under their `nodeSelector` blocks.
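To make the failure mode concrete, here is a sketch (not actual chart output) of what the two variants render when `ovsNodesLabels` is unset. Sprig's `toYaml` serializes a nil value as the scalar `null`, which lands as a bare line inside the `nodeSelector` mapping:

```yaml
# Rendered with `{{ .Values.ovsNodesLabels | toYaml | nindent 8 }}` when the value is nil (sketch):
nodeSelector:
  kubernetes.io/os: "linux"
  null          # bare scalar inside a block mapping -- the manifest fails to parse

# Rendered with the `with` guard when the value is nil (sketch):
nodeSelector:
  kubernetes.io/os: "linux"
```

The `with` block skips its body entirely for an empty or nil value, so the manifest stays valid in both cases.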
charts/kube-ovn-v2/values.yaml
```yaml
# -- Label used to auto-identify ovs node.
# Any node that has any of these labels will be considered a ovs node.
# Note: This feature uses Helm "lookup" function, which is not compatible with tools such as ArgoCD.
# @section -- Global parameters
ovsNodesLabels: []
#   kube-ovn/role: ovs
```
The new `ovsNodesLabels` value has an incorrect default and a misleading comment, which will cause the chart deployment to fail.

- The default value is `[]` (an empty list), but it should be a map of labels. Using a list will cause template rendering to fail. The default should be `{}` (an empty map).
- The comment is incorrect. It states that this feature uses Helm's `lookup` function, which is not the case. The labels are directly added to the `nodeSelector`. The comment should be updated to accurately describe its purpose.
```yaml
# -- Additional node selector labels for the OVS/OVN pods.
# This allows scheduling pods like ovs-ovn, kube-ovn-cni, and kube-ovn-pinger on specific nodes.
# See https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# @section -- Global parameters
ovsNodesLabels: {}
#   kube-ovn/role: ovs
```
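With the corrected `{}` default, operators could opt in via a values override. A minimal sketch (the file name `my-values.yaml` is hypothetical, not part of the chart):

```yaml
# my-values.yaml -- hypothetical override file
ovsNodesLabels:
  kube-ovn/role: ovs
```

Passing this file to the chart (e.g. `helm upgrade --install ... -f my-values.yaml`) would restrict the OVS/OVN DaemonSets to nodes carrying the `kube-ovn/role=ovs` label, while omitting it keeps the default cluster-wide scheduling.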
This change introduces a new Helm value `ovsNodesLabels` that allows adding custom node selector labels to ovs-ovn pods. For example:

```yaml
ovsNodesLabels:
  kube-ovn/role: ovs
```

This is useful in environments such as OpenStack-Helm, where operators may want to schedule OVS/OVN pods only on specific nodes (e.g., compute nodes) instead of deploying them cluster-wide. When `ovsNodesLabels` is empty, only the default selector `kubernetes.io/os: linux` is applied.

Signed-off-by: Saeed Padari <sam137115@gmail.com>
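For completeness, nodes must carry the matching label for the pods to schedule on them. A sketch of the expected rendered selector under the example value above:

```yaml
# Expected nodeSelector in the rendered DaemonSets when
# ovsNodesLabels is {kube-ovn/role: ovs} (sketch):
nodeSelector:
  kubernetes.io/os: "linux"
  kube-ovn/role: ovs
# Nodes are opted in with, e.g.:
#   kubectl label node <node-name> kube-ovn/role=ovs
```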