fix(e2e): wait for endpoints and retry service trace in kubectl-ko test #6358
Conversation
The "kubectl ko trace should work with network policy" test flakes because ovn-trace is executed against a Service ClusterIP before the OVN load balancer rules are fully programmed. The test only waited for the ClusterIP to be assigned, but the async chain (EndpointSlice creation → kube-ovn LB VIP sync → OVN northd propagation) may not complete in time, causing the trace to route to the node instead of the target pod.

Fix with two layers of protection:

1. Wait for the Service Endpoints to have at least one ready address before proceeding.
2. Retry the Service ClusterIP trace (30s timeout) to cover the window between Endpoints readiness and OVN LB rule propagation.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Activity
Code Review
The pull request effectively addresses the flakiness in the kubectl ko trace e2e test by implementing two key improvements. First, it introduces a wait condition to ensure that service endpoints are ready before proceeding with the trace. Second, it wraps the service ClusterIP trace execution in a retry mechanism with a 30-second timeout, accounting for potential delays in OVN LB rule propagation. The refactoring of the output checking logic into a dedicated checkOutput function also improves code readability and maintainability. These changes directly tackle the root cause of the test instability, making the e2e tests more robust.
fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling

Three E2E tests had flaky patterns identified during analysis of recent fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no retry; OVN join subnet routes may not yet be installed on the host network stack when the pod turns Ready. Replace with WaitUntil (30s) and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no endpoint readiness wait before the curl. Add endpoint WaitUntil (1m), then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s) calls used fixed waits instead of condition-based polling. Replace the first sleep with WaitUntil waiting for conflictVlan1 to be processed (non-conflicting), and the second with WaitUntil waiting for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval) using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a one-shot event list is too short on a loaded cluster. Replace with WaitUntil (500ms interval, 15s timeout) polling for the AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling (#6362)

* fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling

* fix(e2e): replace per-node sleep with WaitUntil in vlan_subinterfaces test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false, the test verified existing subinterfaces were not deleted by sleeping 5 seconds inside the per-node loop (N nodes × 5s wasted time) and then doing an immediate assertion. This has two problems:

1. The sleep runs once per node, wasting N*5s even when the daemon reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window, ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check passes on the first poll if subinterfaces are stable (common case), and retries up to 30s if there is any transient disruption, eliminating both issues.

* fix(e2e): eliminate race between VIP finalizer addition and WaitToBeReady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set Status.V4ip (triggering WaitToBeReady), but the finalizer was only added later in handleUpdateVirtualIP (triggered by a subsequent update event). This created a race window where CreateSync could return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the finalizer atomically in the same Update() call that sets spec/status, so the VIP is fully initialized in one API operation.
Fix 2 (test framework): Update WaitToBeReady to require both an IP address AND the controller finalizer before declaring a VIP ready, ensuring tests only proceed with a fully-initialized VIP.
Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the It block so the VIP is deleted even on test failure, preventing the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
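The stricter readiness predicate from Fix 2 can be sketched like this. The `Vip` struct and the finalizer name are illustrative stand-ins, not the actual kube-ovn types:

```go
package main

import (
	"fmt"
	"slices"
)

// Vip is an illustrative stand-in for the kube-ovn VIP custom resource.
type Vip struct {
	Finalizers []string
	V4ip       string
}

// controllerFinalizer is a hypothetical finalizer name for illustration.
const controllerFinalizer = "kube-ovn-controller"

// isReady requires both an assigned IP and the controller finalizer,
// so callers never observe a half-initialized VIP as ready.
func isReady(v *Vip) bool {
	return v.V4ip != "" && slices.Contains(v.Finalizers, controllerFinalizer)
}

func main() {
	halfInit := &Vip{V4ip: "10.16.0.5"} // IP set, finalizer not yet added
	full := &Vip{V4ip: "10.16.0.5", Finalizers: []string{controllerFinalizer}}
	fmt.Println(isReady(halfInit), isReady(full)) // half-initialized VIP is not ready
}
```

Requiring both conditions closes the race window: even if the status update lands before the finalizer, the readiness poll simply keeps waiting instead of returning early.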
Summary
The "kubectl ko trace ... should work with network policy" test flakes because ovn-trace is executed against a Service ClusterIP before the OVN load balancer rules are fully programmed.

Root Cause
When the Service is created, serviceClient.CreateSync only waits for len(s.Spec.ClusterIPs) != 0, which is assigned synchronously by kube-apiserver. However, OVN LB rules require three async steps:

1. EndpointSlice creation
2. kube-ovn LB VIP sync, which calls LoadBalancerAddVip() to write OVN NB LB rules
3. OVN northd propagation

If ovn-trace runs before step 3 completes, the Service ClusterIP is not DNAT'd and the packet follows the default route to the node.

Fix
Two layers of protection:
1. Wait for the Service Endpoints to have at least one ready address before tracing (the same pattern used in the switch_lb_rule tests).
2. Retry the Service ClusterIP trace via framework.WaitUntil with a 30s timeout to cover the window between Endpoints readiness and OVN LB rule propagation.

Test plan
- make lint passes with 0 issues

🤖 Generated with Claude Code