fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling#6362
Conversation
fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling

Three E2E tests had flaky patterns identified during analysis of recent fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no retry; OVN join subnet routes may not yet be installed on the host network stack when the pod turns Ready. Replace with WaitUntil (30s) and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no endpoint readiness wait before the curl. Add endpoint WaitUntil (1m), then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s) calls used fixed waits instead of condition-based polling. Replace the first sleep with WaitUntil waiting for conflictVlan1 to be processed (non-conflicting), and the second with WaitUntil waiting for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval) using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a one-shot event list is too short on a loaded cluster. Replace with WaitUntil (500ms interval, 15s timeout) polling for the AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false, the test verified existing subinterfaces were not deleted by sleeping 5 seconds inside the per-node loop (N nodes × 5s wasted time) and then doing an immediate assertion. This has two problems:

1. The sleep runs once per node, wasting N*5s even when the daemon reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window, ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check passes on the first poll if subinterfaces are stable (the common case), and retries up to 30s if there is any transient disruption, eliminating both issues.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the reliability and efficiency of several end-to-end tests by replacing brittle, time-based waits and single-shot assertions with dynamic, polling-based checks.
Code Review
This pull request is a good step towards improving the E2E test stability by replacing hard-coded sleeps and one-shot checks with robust polling using WaitUntil. The changes across node.go, subnet.go, underlay.go, and vlan_subinterfaces.go are well-aligned with this goal.
I've identified a couple of areas for improvement related to the usage of the framework.WaitUntil helper. One is a high-severity issue where the polling interval is hardcoded and ignored, which could lead to incorrect test timing. The other is a medium-severity issue regarding context propagation best practices. Addressing these will make the new polling logic even more correct and maintainable.
```go
framework.WaitUntil(500*time.Millisecond, 15*time.Second, func(_ context.Context) (bool, error) {
	events, err := f.EventClient().List(context.Background(), metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s,involvedObject.namespace=%s", podName, namespaceName),
	})
	if err != nil {
		return false, err
	}
	for _, event := range events.Items {
		if event.Type == corev1.EventTypeWarning && strings.Contains(event.Message, "AddressConflict") {
			framework.Logf("Found conflict event: %s", event.Message)
			return true, nil
		}
	}
	return false, nil
}, fmt.Sprintf("pod %s should have AddressConflict warning event", podName))
```
The framework.WaitUntil function appears to have a hardcoded polling interval of 2 seconds, and it ignores the first argument (500*time.Millisecond in this case). This means the polling will happen every 2 seconds, not every 500ms as intended, which could affect test timing and reliability.
Additionally, the context passed to the polling function is ignored in favor of context.Background(). It's better practice to use the provided context for API calls within the polling function.
To address this, you might need to modify framework.WaitUntil to accept and use the interval parameter. A revised implementation could look like this:
```go
// In test/e2e/framework/wait.go
func WaitUntil(interval, timeout time.Duration, cond func(context.Context) (bool, error), condDesc string) {
	ginkgo.GinkgoHelper()
	if err := wait.PollUntilContextTimeout(context.Background(), interval, timeout, false, cond); err != nil {
		// ... error handling
	}
}
```

Then, you can update this call to use the provided context:
Suggested change:

```diff
-framework.WaitUntil(500*time.Millisecond, 15*time.Second, func(_ context.Context) (bool, error) {
-	events, err := f.EventClient().List(context.Background(), metav1.ListOptions{
+framework.WaitUntil(500*time.Millisecond, 15*time.Second, func(ctx context.Context) (bool, error) {
+	events, err := f.EventClient().List(ctx, metav1.ListOptions{
 		FieldSelector: fmt.Sprintf("involvedObject.name=%s,involvedObject.namespace=%s", podName, namespaceName),
 	})
 	if err != nil {
 		return false, err
 	}
 	for _, event := range events.Items {
 		if event.Type == corev1.EventTypeWarning && strings.Contains(event.Message, "AddressConflict") {
 			framework.Logf("Found conflict event: %s", event.Message)
 			return true, nil
 		}
 	}
 	return false, nil
 }, fmt.Sprintf("pod %s should have AddressConflict warning event", podName))
```
```go
framework.WaitUntil(2*time.Second, time.Minute, func(_ context.Context) (bool, error) {
	eps, err := cs.CoreV1().Endpoints(namespaceName).Get(context.TODO(), serviceName, metav1.GetOptions{})
	if err == nil {
		for _, subset := range eps.Subsets {
			if len(subset.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	}
	if k8serrors.IsNotFound(err) {
		return false, nil
	}
	return false, err
}, fmt.Sprintf("endpoints %s has at least one ready address", serviceName))
```
The context provided by WaitUntil is being ignored. It's better practice to use this context for the API call inside the polling function instead of context.TODO().
Suggested change:

```diff
-framework.WaitUntil(2*time.Second, time.Minute, func(_ context.Context) (bool, error) {
-	eps, err := cs.CoreV1().Endpoints(namespaceName).Get(context.TODO(), serviceName, metav1.GetOptions{})
+framework.WaitUntil(2*time.Second, time.Minute, func(ctx context.Context) (bool, error) {
+	eps, err := cs.CoreV1().Endpoints(namespaceName).Get(ctx, serviceName, metav1.GetOptions{})
 	if err == nil {
 		for _, subset := range eps.Subsets {
 			if len(subset.Addresses) > 0 {
 				return true, nil
 			}
 		}
 		return false, nil
 	}
 	if k8serrors.IsNotFound(err) {
 		return false, nil
 	}
 	return false, err
 }, fmt.Sprintf("endpoints %s has at least one ready address", serviceName))
```
…eady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set Status.V4ip (triggering WaitToBeReady), but the finalizer was only added later in handleUpdateVirtualIP (triggered by a subsequent update event). This created a race window where CreateSync could return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the finalizer atomically in the same Update() call that sets spec/status, so the VIP is fully initialized in one API operation.

Fix 2 (test framework): Update WaitToBeReady to require both an IP address AND the controller finalizer before declaring a VIP ready, ensuring tests only proceed with a fully-initialized VIP.

Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the It block so the VIP is deleted even on test failure, preventing the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Summary
Follow-up to the recent series of E2E flakiness fixes (#6343, #6345, #6347, #6349, #6355, #6358, #6359). After auditing the remaining test code for the same failure patterns, four more sites were found:
- `node/node.go` – two tests used `RunHostCmdOrDie` with no retry for overlay pod/service connectivity. OVN join-subnet routes and LB rules may not yet be installed on the host network stack when the pod/service turns Ready. The service test also had no endpoint-readiness wait before the curl.
- `underlay/underlay.go` – the "conflict vlan subnet" test used two unconditional `time.Sleep(10s)` calls followed by immediate assertions on `Status.Conflict`. The `checkU2OFilterOpenFlowExist` helper used a manual `for range 3` retry loop with a hard `time.Sleep(5s)`, giving only 15 s of retry budget.
- `subnet/subnet.go` – the MAC-conflict test used `time.Sleep(2s)` before a one-shot event list query, which is too short on a loaded cluster.
- `underlay/vlan_subinterfaces.go` – `time.Sleep(5s)` ran inside a per-node loop (N nodes × 5 s wasted), followed by an immediate `ExpectTrue`. This also had a false-positive window: if the controller deleted the subinterface after the 5 s mark, the assertion would pass incorrectly.

Changes
| File | Before | After |
| --- | --- | --- |
| `node/node.go` (pod test) | `RunHostCmdOrDie`, curl 2 s timeout | `WaitUntil(30s)`, curl 5 s timeout |
| `node/node.go` (service test) | `RunHostCmdOrDie` | endpoint `WaitUntil(1m)` + connectivity `WaitUntil(30s)` |
| `underlay/underlay.go` (conflict vlan) | `time.Sleep(10s)` × 2 + instant assert | `WaitUntil(30s)` polling `Status.Conflict` |
| `underlay/underlay.go` (`checkU2OFilterOpenFlowExist`) | `for range 3` + `sleep 5s` (15 s max) | deadline-based loop (30 s, 2 s interval) |
| `subnet/subnet.go` (MAC conflict) | `time.Sleep(2s)` + one-shot event query | `WaitUntil(15s, 500ms)` polling for Warning event |
| `underlay/vlan_subinterfaces.go` | `time.Sleep(5s)` per node in loop | `WaitUntil(30s)` per node (passes immediately when stable) |

Test plan
- `[group:node]` – should access overlay pods/services using node ip
- `[group:underlay]` – should be able to detect conflict vlan subnet
- `[group:underlay]` – U2O OpenFlow filter check (version < 1.13)
- `[group:subnet]` – should detect MAC address conflict
- `[group:underlay]` – VLAN subinterfaces `autoCreateVlanSubinterfaces=false`

🤖 Generated with Claude Code