
fix: add retry logic to kubectl-ko getOvnCentralPod for leader election#6351

Merged
oilbeater merged 1 commit into master from fix/kubectl-ko-leader-election-retry on Feb 26, 2026

Conversation

@oilbeater
Collaborator

Summary

  • getOvnCentralPod() in kubectl-ko script crashed silently during OVN leader election transitions, causing kubectl ko trace and other subcommands to fail intermittently in e2e tests
  • Root cause: under set -euo pipefail, when no pod had the leader label (e.g. ovn-nb-leader=true), grep ovn-central returned exit code 1, causing the script to exit immediately — the error handling code (if [ -z "$NB_POD" ]) was unreachable dead code
  • Extract getLeaderPod() helper with retry logic (up to 10 attempts, 1s interval), set +o pipefail protection, and stderr suppression
  • Fix NORTHD_POD query to use $KUBE_OVN_NS instead of hardcoded kube-system
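The root-cause bullet above can be reproduced without a cluster. Below is a minimal sketch, where the `pods` variable stands in for `kubectl get pod` output with no leader-labeled pod; it shows why the `-z` check was unreachable unless the pipeline failure is caught (here with `|| true`):

```shell
#!/usr/bin/env bash
# Same shell options as the kubectl-ko script.
set -euo pipefail

# Stand-in for `kubectl get pod` output when no pod carries the
# leader label: there is no ovn-central line for grep to match.
pods="ovn-controller-abc   1/1   Running"

# grep exits 1 on no match; with pipefail the whole pipeline fails,
# and set -e would terminate the script on this assignment. Appending
# `|| true` is one way to keep control and actually reach the check.
NB_POD=$(echo "$pods" | grep ovn-central | awk '{print $1}') || true

if [ -z "${NB_POD}" ]; then
  # prints: no leader pod found, retrying would go here
  echo "no leader pod found, retrying would go here"
fi
```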

Test plan

  • Verify kubectl ko trace works correctly when leader labels are stable
  • Verify kubectl ko trace recovers during transient leader election (e.g., after kubectl delete pod of ovn-central leader)
  • Run kubectl-ko e2e test suite multiple times to confirm no flaky failures

🤖 Generated with Claude Code

getOvnCentralPod() would crash silently during OVN leader election
transitions. Under `set -euo pipefail`, when no pod had the leader
label (e.g. ovn-nb-leader=true), `grep ovn-central` in the pipeline
returned exit code 1, causing the script to exit immediately without
any error message. This made kubectl-ko trace and other subcommands
fail intermittently in e2e tests.

Extract a getLeaderPod() helper that retries up to 10 times with 1s
intervals, protecting the pipeline with `set +o pipefail` and
suppressing kubectl stderr noise. Also fix NORTHD_POD query to use
$KUBE_OVN_NS instead of hardcoded kube-system.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
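The retry pattern described in the commit message can be sketched as below. This is a self-contained illustration, not the exact helper from `dist/images/kubectl-ko`: `kubectl` is mocked so it runs without a cluster, the pod name is a stand-in, and the awk column positions ($2 READY, $3 STATUS) assume the default `kubectl get pod` output layout:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Mock kubectl so the sketch runs without a cluster; the real helper
# calls the actual kubectl binary.
kubectl() {
  echo "ovn-central-7f9b   1/1   Running"
}

KUBE_OVN_NS=kube-system

getLeaderPod() {
  local label=$1 result=
  for i in $(seq 1 10); do
    # Temporarily disable pipefail so a no-match grep does not
    # terminate the script under set -e.
    set +o pipefail
    result=$(kubectl get pod -n "$KUBE_OVN_NS" -l "$label"=true 2>/dev/null \
      | grep ovn-central \
      | awk '{if($2=="1/1" && $3=="Running") print $1}' \
      | head -n 1)
    set -o pipefail
    if [ -n "$result" ]; then
      echo "$result"
      return 0
    fi
    sleep 1
  done
  echo "no ready leader pod found for label $label" >&2
  return 1
}

NB_POD=$(getLeaderPod ovn-nb-leader)
echo "$NB_POD"
```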
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. bug Something isn't working labels Feb 26, 2026
@gemini-code-assist
Contributor

Summary of Changes


This pull request enhances the robustness of the kubectl-ko script by addressing intermittent failures related to OVN leader election. It introduces a new helper function with retry logic to reliably identify leader pods and corrects a namespace hardcoding issue, ensuring the script functions correctly even during transient cluster states.

Highlights

  • Robustness Improvement: Implemented retry logic for OVN leader pod detection in kubectl-ko to prevent silent failures during leader election transitions.
  • Root Cause Addressed: Fixed an issue where grep failing with set -euo pipefail caused scripts to exit prematurely when no leader pod was found.
  • Code Refactoring: Introduced a new helper function getLeaderPod() to encapsulate leader detection logic, including retries, pipefail handling, and error suppression.
  • Namespace Correction: Updated the NORTHD_POD and image queries to correctly use the KUBE_OVN_NS variable instead of a hardcoded kube-system namespace.
Changelog
  • dist/images/kubectl-ko
    • Added a new getLeaderPod helper function that includes retry logic (10 attempts with 1-second intervals), handles pipefail gracefully, and suppresses stderr for kubectl commands.
    • Refactored getOvnCentralPod to utilize the new getLeaderPod function for retrieving OVN NB, SB, and Northd leader pods.
    • Corrected the namespace used for querying OVN_NORTHD_POD and the kube-ovn-cni image, changing it from a hardcoded kube-system to the dynamic $KUBE_OVN_NS.
Activity
  • The author outlined a test plan to verify the fix under stable and transient leader election scenarios, and by running the e2e test suite multiple times.
  • The pull request description was generated using Claude Code.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a robust retry mechanism for finding OVN leader pods in the kubectl-ko script, addressing intermittent failures during leader elections. The changes include a new getLeaderPod helper function that handles transient states gracefully. The hardcoded kube-system namespace has also been correctly replaced with the $KUBE_OVN_NS variable. My review includes a suggestion to improve the pod selection logic for better robustness.

local result=
for i in $(seq 1 10); do
  set +o pipefail
  result=$(kubectl get pod -n $KUBE_OVN_NS -l "$label"=true 2>/dev/null | grep ovn-central | awk '{if($2=="1/1" && $3=="Running") print $1}' | head -n 1)
Contributor


Severity: medium

Using grep to filter pods by name can be brittle. It's better to use a more specific label selector with kubectl to ensure you're selecting the correct pods. The ovn-central pods have the app=ovn-central label, which can be used for more precise selection.

Suggested change
result=$(kubectl get pod -n $KUBE_OVN_NS -l "$label"=true 2>/dev/null | grep ovn-central | awk '{if($2=="1/1" && $3=="Running") print $1}' | head -n 1)
result=$(kubectl get pod -n $KUBE_OVN_NS -l "app=ovn-central,$label=true" 2>/dev/null | awk '{if($2=="1/1" && $3=="Running") print $1}' | head -n 1)

@oilbeater oilbeater merged commit 6240894 into master Feb 26, 2026
146 of 147 checks passed
@oilbeater oilbeater deleted the fix/kubectl-ko-leader-election-retry branch February 26, 2026 15:52
