
fix(e2e): wait for endpoints and retry service trace in kubectl-ko test #6358

Merged
oilbeater merged 1 commit into master from fix/e2e-kubectl-ko-trace-service-race on Feb 27, 2026

Conversation

@oilbeater
Collaborator

Summary

  • The "kubectl ko trace ... should work with network policy" test flakes because ovn-trace is executed against a Service ClusterIP before the OVN load balancer rules are fully programmed
  • The test only waited for the ClusterIP to be assigned, but the async chain (EndpointSlice creation → kube-ovn LB VIP sync → OVN northd propagation) may not complete in time
  • Without LB rules, the trace routes the packet via default routing to a node instead of DNAT-ing to the target pod

Root Cause

When the Service is created, serviceClient.CreateSync only waits for len(s.Spec.ClusterIPs) != 0, which is assigned synchronously by kube-apiserver. However, OVN LB rules require three async steps:

  1. Kubernetes EndpointSlice controller detects Service + matching pod → creates EndpointSlice
  2. kube-ovn controller detects EndpointSlice → calls LoadBalancerAddVip() to write OVN NB LB rules
  3. OVN northd processes LB rules → updates southbound DB

If ovn-trace runs before step 3 completes, the Service ClusterIP is not DNAT'd and the packet follows the default route to the node.

Fix

Two layers of protection:

  1. Wait for Endpoints readiness: After Service creation, poll until the Endpoints object has at least one ready address (consistent with the pattern used in switch_lb_rule tests)
  2. Retry Service ClusterIP trace: Wrap the Service trace in framework.WaitUntil with a 30s timeout to cover the window between Endpoints readiness and OVN LB rule propagation

Test plan

  • Verify the kubectl-ko e2e test passes consistently in CI
  • Confirm no new lint issues (make lint passes with 0 issues)

🤖 Generated with Claude Code

The "kubectl ko trace should work with network policy" test flakes
because ovn-trace is executed against a Service ClusterIP before the
OVN load balancer rules are fully programmed. The test only waited for
the ClusterIP to be assigned, but the async chain (EndpointSlice
creation → kube-ovn LB VIP sync → OVN northd propagation) may not
complete in time, causing the trace to route to the node instead of
the target pod.

Fix with two layers of protection:
1. Wait for the Service Endpoints to have at least one ready address
   before proceeding.
2. Retry the Service ClusterIP trace (30s timeout) to cover the window
   between Endpoints readiness and OVN LB rule propagation.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. test automation tests labels Feb 27, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a flakiness issue in the kubectl ko trace e2e test, specifically when testing network policies with Service ClusterIPs. The root cause was identified as a race condition where ovn-trace was executed before OVN load balancer rules were fully propagated, leading to incorrect trace results. The changes introduce more robust waiting mechanisms, first for Kubernetes Endpoints readiness and then for the ovn-trace command itself, to ensure the OVN network state is stable before verification.

Highlights

  • Enhanced Service Readiness Check: Implemented a polling mechanism to wait for a Service's Endpoints object to have at least one ready address, addressing a timing issue where OVN load balancer rules might not be fully programmed.
  • Service Trace Retry Logic: Introduced a retry loop with a 30-second timeout for kubectl ko trace commands targeting Service ClusterIPs, ensuring that OVN LB rule propagation has sufficient time to complete before the trace is evaluated.
  • Refactored Output Verification: Extracted the logic for checking ovn-trace command output into a dedicated helper function for improved readability and maintainability.
Changelog
  • test/e2e/kube-ovn/kubectl-ko/kubectl-ko.go
    • Imported time and k8serrors packages.
    • Added a framework.WaitUntil block to poll for Service Endpoints readiness, ensuring at least one ready address.
    • Introduced a new helper function checkOutput to encapsulate the logic for verifying ovn-trace command output.
    • Replaced direct framework.ExpectContainSubstring calls with the new checkOutput function within the checkFunc.
    • Wrapped the kubectl ko trace execution for Service ClusterIPs in a framework.WaitUntil loop, retrying the trace for up to 30 seconds until the expected output is observed.
Activity
  • The pull request was created to fix a flaky e2e test related to kubectl ko trace and network policies.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request effectively addresses the flakiness in the kubectl ko trace e2e test by implementing two key improvements. First, it introduces a wait condition to ensure that service endpoints are ready before proceeding with the trace. Second, it wraps the service ClusterIP trace execution in a retry mechanism with a 30-second timeout, accounting for potential delays in OVN LB rule propagation. The refactoring of the output checking logic into a dedicated checkOutput function also improves code readability and maintainability. These changes directly tackle the root cause of the test instability, making the e2e tests more robust.

@oilbeater oilbeater merged commit 334e6c4 into master Feb 27, 2026
75 of 76 checks passed
@oilbeater oilbeater deleted the fix/e2e-kubectl-ko-trace-service-race branch February 27, 2026 06:46
oilbeater added a commit that referenced this pull request Feb 27, 2026
…ling

Three E2E tests had flaky patterns identified during analysis of recent
fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no
  retry; OVN join subnet routes may not yet be installed on the host
  network stack when the pod turns Ready. Replace with WaitUntil (30s)
  and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no
  endpoint readiness wait before the curl. Add endpoint WaitUntil (1m)
  then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s)
  calls used fixed waits instead of condition-based polling. Replace
  the first sleep with WaitUntil waiting for conflictVlan1 to be
  processed (non-conflicting), and the second with WaitUntil waiting
  for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a
  hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval)
  using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a
  one-shot event list is too short on a loaded cluster. Replace with
  WaitUntil (500ms interval, 15s timeout) polling for the
  AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
oilbeater added a commit that referenced this pull request Feb 27, 2026
…ling (#6362)

* fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling

Three E2E tests had flaky patterns identified during analysis of recent
fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no
  retry; OVN join subnet routes may not yet be installed on the host
  network stack when the pod turns Ready. Replace with WaitUntil (30s)
  and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no
  endpoint readiness wait before the curl. Add endpoint WaitUntil (1m)
  then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s)
  calls used fixed waits instead of condition-based polling. Replace
  the first sleep with WaitUntil waiting for conflictVlan1 to be
  processed (non-conflicting), and the second with WaitUntil waiting
  for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a
  hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval)
  using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a
  one-shot event list is too short on a loaded cluster. Replace with
  WaitUntil (500ms interval, 15s timeout) polling for the
  AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(e2e): replace per-node sleep with WaitUntil in vlan_subinterfaces test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false,
the test verified existing subinterfaces were not deleted by sleeping
5 seconds inside the per-node loop (N nodes × 5s wasted time) and then
doing an immediate assertion.

This has two problems:
1. The sleep runs once per node, wasting N*5s even when the daemon
   reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window,
   ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check
passes on the first poll if subinterfaces are stable (common case), and
retries up to 30s if there is any transient disruption, eliminating
both issues.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(e2e): eliminate race between VIP finalizer addition and WaitToBeReady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set
Status.V4ip (triggering WaitToBeReady), but the finalizer was only
added later in handleUpdateVirtualIP (triggered by a subsequent
update event). This created a race window where CreateSync could
return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the
finalizer atomically in the same Update() call that sets spec/status,
so the VIP is fully initialized in one API operation.

Fix 2 (test framework): Update WaitToBeReady to require both an IP
address AND the controller finalizer before declaring a VIP ready,
ensuring tests only proceed with a fully-initialized VIP.

Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the
It block so the VIP is deleted even on test failure, preventing
the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>

---------

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
