
Add support for optional node selector labels for kube-ovn OVS/OVN pods#5803

Closed
saeed-mcu wants to merge 1 commit into kubeovn:master from saeed-mcu:master

Conversation

@saeed-mcu

This change introduces a new Helm value ovsNodesLabels that allows adding custom node selector labels to ovs-ovn pods. For example:

ovsNodesLabels:
  kube-ovn/role: ovs

This is useful in environments such as OpenStack-Helm, where operators may want to schedule OVS/OVN pods only on specific nodes (e.g., compute nodes) instead of deploying them cluster-wide. When ovsNodesLabels is empty, only the default selector kubernetes.io/os: linux is applied.
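The effective selector can be thought of as the default OS selector merged with the user-supplied labels. A minimal Python sketch of that merge (illustrative only; the real composition happens in the Helm templates, and `ovs_nodes_labels` stands in for the `ovsNodesLabels` value):

```python
def effective_node_selector(ovs_nodes_labels=None):
    """Merge the default OS selector with optional custom labels."""
    selector = {"kubernetes.io/os": "linux"}
    selector.update(ovs_nodes_labels or {})
    return selector

# With no custom labels, only the default applies.
print(effective_node_selector())
# With a custom label, pods are restricted to matching nodes.
print(effective_node_selector({"kube-ovn/role": "ovs"}))
```

For the label to select any nodes, operators would also label the target nodes, e.g. `kubectl label node <node-name> kube-ovn/role=ovs`.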

@gemini-code-assist
Contributor

Summary of Changes

Hello @saeed-mcu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new Helm configuration option, ovsNodesLabels, which provides greater control over where Kube-OVN's OVS/OVN pods are scheduled within a Kubernetes cluster. This enhancement allows users to define custom node selector labels, facilitating more precise deployment strategies, especially in environments requiring specific pod placement on designated node types.

Highlights

  • Custom Node Selection: Introduces a new Helm value ovsNodesLabels to allow custom node selector labels for ovs-ovn pods, enhancing deployment flexibility.
  • Targeted Pod Scheduling: Enables operators to schedule OVS/OVN pods on specific nodes, which is particularly useful in environments like OpenStack-Helm for placing pods on designated compute nodes.
  • Default Behavior Maintained: When the ovsNodesLabels value is empty, the default kubernetes.io/os: linux selector is still applied, ensuring basic functionality.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a new Helm value ovsNodesLabels to allow specifying node selector labels for OVS/OVN pods. While the feature is useful, the implementation has a few critical issues. The default value for ovsNodesLabels in values.yaml is an empty list ([]), which will cause Helm to generate invalid Kubernetes manifests. Additionally, the way the labels are rendered in the DaemonSet templates is not robust and will also fail when the value is not provided or is an empty map. I've provided suggestions to fix these issues by changing the default value to an empty map ({}), updating the misleading comments, and using a with block in the templates for safer rendering.

{{- end }}
      nodeSelector:
        kubernetes.io/os: "linux"
{{ .Values.ovsNodesLabels | toYaml | nindent 8 }}


critical

The current implementation for adding ovsNodesLabels is not robust and will generate invalid YAML, causing deployment failure. If ovsNodesLabels is not provided, it will be nil, and toYaml will render null, which is invalid in this context. Using a with block ensures that labels are only rendered if the ovsNodesLabels map is not empty, preventing this error.

        {{- with .Values.ovsNodesLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
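The failure mode can be sketched outside of Helm: serializing a missing (nil) value yields the literal `null`, while guarding on truthiness (the effect of the `with` block) emits nothing at all. A hedged Python sketch using `json.dumps` as a stand-in for `toYaml`:

```python
import json

def render_extra_labels(labels):
    """Mimic `{{ .Values.ovsNodesLabels | toYaml }}`: serialize whatever is given."""
    return json.dumps(labels)

def render_extra_labels_guarded(labels):
    """Mimic the `with` guard: emit nothing unless the map is non-empty."""
    return json.dumps(labels) if labels else ""

print(render_extra_labels(None))          # serializes to the literal "null"
print(render_extra_labels_guarded(None))  # empty string, nothing rendered
print(render_extra_labels_guarded({"kube-ovn/role": "ovs"}))
```

A `null` under `nodeSelector` is rejected by Kubernetes, which is why the guarded form is the safer rendering.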

{{- end }}
      nodeSelector:
        kubernetes.io/os: "linux"
{{ .Values.ovsNodesLabels | toYaml | nindent 8 }}


critical

The current implementation for adding ovsNodesLabels is not robust and will generate invalid YAML, causing deployment failure. If ovsNodesLabels is not provided, it will be nil, and toYaml will render null, which is invalid in this context. Using a with block ensures that labels are only rendered if the ovsNodesLabels map is not empty, preventing this error.

        {{- with .Values.ovsNodesLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}

          periodSeconds: 10
      nodeSelector:
        kubernetes.io/os: "linux"
{{ .Values.ovsNodesLabels | toYaml | nindent 8 }}


critical

The current implementation for adding ovsNodesLabels is not robust and will generate invalid YAML, causing deployment failure. If ovsNodesLabels is not provided, it will be nil, and toYaml will render null, which is invalid in this context. Using a with block ensures that labels are only rendered if the ovsNodesLabels map is not empty, preventing this error.

        {{- with .Values.ovsNodesLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}

Comment on lines +50 to +56
# -- Label used to auto-identify ovs node.
# Any node that has any of these labels will be considered a ovs node.
# Note: This feature uses Helm "lookup" function, which is not compatible with tools such as ArgoCD.
# @section -- Global parameters
ovsNodesLabels: []
# kube-ovn/role: ovs



critical

The new ovsNodesLabels value has an incorrect default and a misleading comment, which will cause the chart deployment to fail.

  1. The default value is [] (an empty list), but it should be a map of labels. Using a list will cause template rendering to fail. The default should be {} (an empty map).
  2. The comment is incorrect. It states that this feature uses Helm's lookup function, which is not the case. The labels are directly added to the nodeSelector. The comment should be updated to accurately describe its purpose.
# -- Additional node selector labels for the OVS/OVN pods.
# This allows scheduling pods like ovs-ovn, kube-ovn-cni, and kube-ovn-pinger on specific nodes.
# See https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# @section -- Global parameters
ovsNodesLabels: {}
  # kube-ovn/role: ovs
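The type distinction matters because `nodeSelector` must be a map of string keys to string values; a list default cannot be merged into it. A small Python sketch of the check a chart author might apply (illustrative, not part of the chart):

```python
def valid_node_selector_value(value):
    """nodeSelector entries must form a string-to-string map.
    An empty map (the proposed {} default) is fine; a list (the old [] default) is not."""
    if value is None or value == {}:
        return True  # nothing extra to render
    return (isinstance(value, dict)
            and all(isinstance(k, str) and isinstance(v, str)
                    for k, v in value.items()))

print(valid_node_selector_value({}))                        # empty-map default: accepted
print(valid_node_selector_value({"kube-ovn/role": "ovs"}))  # label map: accepted
print(valid_node_selector_value([]))                        # old list default: rejected
```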

This change introduces a new Helm value ovsNodesLabels that allows
adding custom node selector labels to ovs-ovn pods. For example:
 ovsNodesLabels:
  kube-ovn/role: ovs
This is useful in environments such as OpenStack-Helm, where operators
may want to schedule OVS/OVN pods only on specific nodes (e.g., compute
nodes) instead of deploying them cluster-wide. When ovsNodesLabels is
empty, only the default selector kubernetes.io/os: linux is applied.

Signed-off-by: Saeed Padari <sam137115@gmail.com>

Labels

chart Helm Chart size:S This PR changes 10-29 lines, ignoring generated files.
