
fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling #6362

Merged: oilbeater merged 3 commits into master from fix/e2e-flaky-wait-retry on Feb 27, 2026


Conversation

@oilbeater
Collaborator

Summary

Follow-up to the recent series of E2E flakiness fixes (#6343, #6345, #6347, #6349, #6355, #6358, #6359). After auditing the remaining test code for the same failure patterns, four more sites were found:

  • node/node.go – two tests used RunHostCmdOrDie with no retry for overlay pod/service connectivity. OVN join-subnet routes and LB rules may not yet be installed on the host network stack when the pod/service turns Ready. The service test also had no endpoint-readiness wait before the curl.
  • underlay/underlay.go – the "conflict vlan subnet" test used two unconditional time.Sleep(10s) calls followed by immediate assertions on Status.Conflict. The checkU2OFilterOpenFlowExist helper used a manual for range 3 retry loop with a hard time.Sleep(5s), giving only 15 s of retry budget.
  • subnet/subnet.go – the MAC-conflict test used time.Sleep(2s) before a one-shot event list query, which is too short on a loaded cluster.
  • underlay/vlan_subinterfaces.go – a time.Sleep(5s) ran inside a per-node loop (N nodes × 5 s wasted), followed by an immediate ExpectTrue. This also had a false-positive window: if the controller deleted the subinterface after the 5 s mark, the assertion would pass incorrectly.

Changes

| File | Before | After |
| --- | --- | --- |
| node/node.go (pod test) | RunHostCmdOrDie, curl 2 s timeout | WaitUntil(30s), curl 5 s timeout |
| node/node.go (service test) | no endpoint wait + RunHostCmdOrDie | endpoint WaitUntil(1m) + connectivity WaitUntil(30s) |
| underlay/underlay.go (conflict vlan) | time.Sleep(10s) × 2 + instant assert | WaitUntil(30s) polling Status.Conflict |
| underlay/underlay.go (checkU2OFilterOpenFlowExist) | for range 3 + sleep 5s (15 s max) | deadline loop, 30 s / 2 s interval |
| subnet/subnet.go (MAC conflict) | time.Sleep(2s) + one-shot event query | WaitUntil(15s, 500ms) polling for Warning event |
| underlay/vlan_subinterfaces.go | time.Sleep(5s) per node in loop | WaitUntil(30s) per node (passes immediately when stable) |

Test plan

  • E2E: [group:node] – should access overlay pods/services using node ip
  • E2E: [group:underlay] – should be able to detect conflict vlan subnet
  • E2E: [group:underlay] – U2O OpenFlow filter check (version < 1.13)
  • E2E: [group:subnet] – should detect MAC address conflict
  • E2E: [group:underlay] – VLAN subinterfaces autoCreateVlanSubinterfaces=false

🤖 Generated with Claude Code

oilbeater and others added 2 commits February 27, 2026 07:42
…ling

Three E2E tests had flaky patterns identified during analysis of recent
fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no
  retry; OVN join subnet routes may not yet be installed on the host
  network stack when the pod turns Ready. Replace with WaitUntil (30s)
  and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no
  endpoint readiness wait before the curl. Add endpoint WaitUntil (1m)
  then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s)
  calls used fixed waits instead of condition-based polling. Replace
  the first sleep with WaitUntil waiting for conflictVlan1 to be
  processed (non-conflicting), and the second with WaitUntil waiting
  for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a
  hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval)
  using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a
  one-shot event list is too short on a loaded cluster. Replace with
  WaitUntil (500ms interval, 15s timeout) polling for the
  AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false,
the test verified existing subinterfaces were not deleted by sleeping
5 seconds inside the per-node loop (N nodes × 5s wasted time) and then
doing an immediate assertion.

This has two problems:
1. The sleep runs once per node, wasting N*5s even when the daemon
   reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window,
   ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check
passes on the first poll if subinterfaces are stable (common case), and
retries up to 30s if there is any transient disruption, eliminating
both issues.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. test automation tests labels Feb 27, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the reliability and efficiency of several end-to-end tests by replacing brittle, time-based waits and single-shot assertions with dynamic, polling-based WaitUntil mechanisms. These changes address identified flakiness patterns, ensuring that tests correctly account for asynchronous system states and reducing overall test execution time by avoiding unnecessary delays.

Highlights

  • Enhanced E2E Test Stability: Replaced hardcoded time.Sleep calls and unretried checks with WaitUntil polling mechanisms across several E2E tests to improve reliability.
  • Improved Pod/Service Connectivity Checks: Updated node/node.go to use WaitUntil for more reliable pod and service connectivity assertions, including an explicit wait for service endpoints.
  • Robust VLAN Conflict Detection: Modified underlay/underlay.go to use WaitUntil for detecting VLAN conflicts and improved the OpenFlow filter existence check with a deadline-based retry.
  • Reliable MAC Conflict Event Polling: Changed subnet/subnet.go to poll for AddressConflict warning events, eliminating a fixed time.Sleep.
  • Accurate VLAN Subinterface Verification: Updated underlay/vlan_subinterfaces.go to use WaitUntil for verifying VLAN subinterface existence, preventing false positives and reducing test duration.
Changelog
  • test/e2e/kube-ovn/node/node.go
    • Imported k8serrors and metav1 for Kubernetes API error handling and metadata.
    • Increased curl command timeouts from 2 seconds to 5 seconds for connectivity checks.
    • Replaced e2epodoutput.RunHostCmdOrDie with framework.WaitUntil for pod connectivity, allowing retries.
    • Introduced a framework.WaitUntil block to ensure service endpoints are ready before proceeding with service connectivity tests.
    • Replaced e2epodoutput.RunHostCmdOrDie with framework.WaitUntil for service connectivity, allowing retries.
  • test/e2e/kube-ovn/subnet/subnet.go
    • Replaced a fixed time.Sleep(2 * time.Second) and a single event list query with a framework.WaitUntil loop that polls for AddressConflict warning events.
  • test/e2e/kube-ovn/underlay/underlay.go
    • Replaced time.Sleep(10 * time.Second) with framework.WaitUntil to poll for the initial VLAN to be non-conflicting before creating a second, conflicting VLAN.
    • Replaced another time.Sleep(10 * time.Second) with framework.WaitUntil to poll for the second VLAN to be detected as conflicting.
    • Refactored the checkU2OFilterOpenFlowExist function to use a deadline-based retry loop (30s total, 2s interval) instead of a fixed 3-attempt loop with 5-second sleeps, and updated associated error messages.
  • test/e2e/kube-ovn/underlay/vlan_subinterfaces.go
    • Replaced time.Sleep(5 * time.Second) within a node loop with framework.WaitUntil to poll for the existence of VLAN subinterfaces, improving efficiency and accuracy.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request is a good step towards improving the E2E test stability by replacing hard-coded sleeps and one-shot checks with robust polling using WaitUntil. The changes across node.go, subnet.go, underlay.go, and vlan_subinterfaces.go are well-aligned with this goal.

I've identified a couple of areas for improvement related to the usage of the framework.WaitUntil helper. One is a high-severity issue where the polling interval is hardcoded and ignored, which could lead to incorrect test timing. The other is a medium-severity issue regarding context propagation best practices. Addressing these will make the new polling logic even more correct and maintainable.

Comment on lines +1485 to +1499

    framework.WaitUntil(500*time.Millisecond, 15*time.Second, func(_ context.Context) (bool, error) {
        events, err := f.EventClient().List(context.Background(), metav1.ListOptions{
            FieldSelector: fmt.Sprintf("involvedObject.name=%s,involvedObject.namespace=%s", podName, namespaceName),
        })
        if err != nil {
            return false, err
        }
        for _, event := range events.Items {
            if event.Type == corev1.EventTypeWarning && strings.Contains(event.Message, "AddressConflict") {
                framework.Logf("Found conflict event: %s", event.Message)
                return true, nil
            }
        }
        return false, nil
    }, fmt.Sprintf("pod %s should have AddressConflict warning event", podName))

Severity: high

The framework.WaitUntil function appears to have a hardcoded polling interval of 2 seconds, and it ignores the first argument (500*time.Millisecond in this case). This means the polling will happen every 2 seconds, not every 500ms as intended, which could affect test timing and reliability.

Additionally, the context passed to the polling function is ignored in favor of context.Background(). It's better practice to use the provided context for API calls within the polling function.

To address this, you might need to modify framework.WaitUntil to accept and use the interval parameter. A revised implementation could look like this:

    // In test/e2e/framework/wait.go
    func WaitUntil(interval, timeout time.Duration, cond func(context.Context) (bool, error), condDesc string) {
        ginkgo.GinkgoHelper()
        if err := wait.PollUntilContextTimeout(context.Background(), interval, timeout, false, cond); err != nil {
            // ... error handling
        }
    }

Then, you can update this call to use the provided context:

Suggested change (use the provided context instead of context.Background()):

    framework.WaitUntil(500*time.Millisecond, 15*time.Second, func(ctx context.Context) (bool, error) {
        events, err := f.EventClient().List(ctx, metav1.ListOptions{
            FieldSelector: fmt.Sprintf("involvedObject.name=%s,involvedObject.namespace=%s", podName, namespaceName),
        })
        if err != nil {
            return false, err
        }
        for _, event := range events.Items {
            if event.Type == corev1.EventTypeWarning && strings.Contains(event.Message, "AddressConflict") {
                framework.Logf("Found conflict event: %s", event.Message)
                return true, nil
            }
        }
        return false, nil
    }, fmt.Sprintf("pod %s should have AddressConflict warning event", podName))

Comment on lines +181 to +195

    framework.WaitUntil(2*time.Second, time.Minute, func(_ context.Context) (bool, error) {
        eps, err := cs.CoreV1().Endpoints(namespaceName).Get(context.TODO(), serviceName, metav1.GetOptions{})
        if err == nil {
            for _, subset := range eps.Subsets {
                if len(subset.Addresses) > 0 {
                    return true, nil
                }
            }
            return false, nil
        }
        if k8serrors.IsNotFound(err) {
            return false, nil
        }
        return false, err
    }, fmt.Sprintf("endpoints %s has at least one ready address", serviceName))

Severity: medium

The context provided by WaitUntil is being ignored. It's better practice to use this context for the API call inside the polling function instead of context.TODO().

Suggested change (use the provided context instead of context.TODO()):

    framework.WaitUntil(2*time.Second, time.Minute, func(ctx context.Context) (bool, error) {
        eps, err := cs.CoreV1().Endpoints(namespaceName).Get(ctx, serviceName, metav1.GetOptions{})
        if err == nil {
            for _, subset := range eps.Subsets {
                if len(subset.Addresses) > 0 {
                    return true, nil
                }
            }
            return false, nil
        }
        if k8serrors.IsNotFound(err) {
            return false, nil
        }
        return false, err
    }, fmt.Sprintf("endpoints %s has at least one ready address", serviceName))

…eady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set
Status.V4ip (triggering WaitToBeReady), but the finalizer was only
added later in handleUpdateVirtualIP (triggered by a subsequent
update event). This created a race window where CreateSync could
return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the
finalizer atomically in the same Update() call that sets spec/status,
so the VIP is fully initialized in one API operation.

Fix 2 (test framework): Update WaitToBeReady to require both an IP
address AND the controller finalizer before declaring a VIP ready,
ensuring tests only proceed with a fully-initialized VIP.

Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the
It block so the VIP is deleted even on test failure, preventing
the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
@oilbeater oilbeater merged commit c4eeb1d into master Feb 27, 2026
76 checks passed
@oilbeater oilbeater deleted the fix/e2e-flaky-wait-retry branch February 27, 2026 09:22
