fix: add retry logic to SLR e2e connectivity test to fix flaky failure#6343
Conversation
The "SLR with default provider" e2e test intermittently fails with a curl timeout (exit code 28) when testing IPv6 connectivity. This happens because the test checks connectivity immediately after endpoints are created, without waiting for OVN LB rules to propagate to the data plane.

Replace the single-shot RunHostCmdOrDie with a WaitUntil retry loop (2s interval, 30s timeout) using RunHostCmd, and increase the curl timeout from 2s to 5s. This aligns with the retry pattern used in other network connectivity tests (network-policy, service, lb-svc).

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Code Review
This pull request addresses a flaky e2e test by introducing a retry mechanism in the curlSvc function. The implementation replaces a single command execution with a framework.WaitUntil loop, which will attempt to connect to the service for up to 30 seconds. Additionally, the curl command's timeout is increased to 5 seconds. These changes are designed to handle delays in OVN load balancer rule propagation. The implementation appears correct for the stated purpose.
fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling

Three E2E tests had flaky patterns identified during analysis of recent fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no retry; OVN join subnet routes may not yet be installed on the host network stack when the pod turns Ready. Replace with WaitUntil (30s) and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue, plus no endpoint readiness wait before the curl. Add an endpoint WaitUntil (1m), then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s) calls used fixed waits instead of condition-based polling. Replace the first sleep with WaitUntil waiting for conflictVlan1 to be processed (non-conflicting), and the second with WaitUntil waiting for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval) using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a one-shot event list is too short on a loaded cluster. Replace with WaitUntil (500ms interval, 15s timeout) polling for the AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling (#6362)

* fix(e2e): replace per-node sleep with WaitUntil in vlan_subinterfaces test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false, the test verified existing subinterfaces were not deleted by sleeping 5 seconds inside the per-node loop (N nodes × 5s wasted time) and then doing an immediate assertion. This has two problems:
1. The sleep runs once per node, wasting N*5s even when the daemon reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window, ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check passes on the first poll if subinterfaces are stable (the common case), and retries for up to 30s through any transient disruption, eliminating both issues.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(e2e): eliminate race between VIP finalizer addition and WaitToBeReady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set Status.V4ip (triggering WaitToBeReady), but the finalizer was only added later in handleUpdateVirtualIP (triggered by a subsequent update event). This created a race window where CreateSync could return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the finalizer atomically in the same Update() call that sets spec/status, so the VIP is fully initialized in one API operation.

Fix 2 (test framework): Update WaitToBeReady to require both an IP address AND the controller finalizer before declaring a VIP ready, ensuring tests only proceed with a fully-initialized VIP.

Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the It block so the VIP is deleted even on test failure, preventing the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>

---------

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
- Replace RunHostCmdOrDie with a WaitUntil retry loop (2s interval, 30s timeout) in curlSvc()

Root Cause
The test checks connectivity immediately after Kubernetes Endpoints are created, but OVN LB rules need additional time to propagate through the data plane (NB → SB → ovn-controller → OVS datapath). The third test phase (Endpoint SLR, port 8092) is most affected due to higher rule complexity and accumulated system load from previous phases. IPv6 exacerbates the issue due to NDP and longer rule processing times.
Test plan
🤖 Generated with Claude Code