
fix(e2e): wait for DNSNameResolver ready before CNP connectivity checks#6349

Merged
oilbeater merged 2 commits into master from fix/cnp-domain-e2e-flaky on Feb 26, 2026

Conversation

@oilbeater
Collaborator

Summary

  • Add waitForDNSResolversReady() helper to explicitly wait for DNSNameResolver CRs to have resolved addresses before running connectivity checks, fixing flaky CNP domain e2e test
  • Increase default retry parameters from 20×2s to 30×3s as a safety net for slow DNS resolution
  • Root cause: when CNP rules are created/updated, OVN ACL address sets may be empty initially since the external dnsnameresolver component needs time to complete the async DNS resolution chain

Test plan

  • Run CNP domain e2e test suite multiple times to verify no flaky failures
  • Verify waitForDNSResolversReady correctly waits for DNS resolution before connectivity checks
  • Confirm existing test behavior is preserved (deny/allow rules still work correctly)

🤖 Generated with Claude Code

The CNP domain-based e2e test was flaky because connectivity checks
started before the async DNS resolution chain completed. When CNP rules
are created/updated, the OVN ACL address sets may be empty initially
since the external dnsnameresolver component needs time to resolve
domains and populate DNSNameResolver CR Status.

Add waitForDNSResolversReady() to explicitly wait for all DNSNameResolver
CRs to have resolved addresses before running connectivity checks.
Also increase default retry parameters from 20×2s to 30×3s as a safety
net for slow DNS resolution.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Feb 26, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @oilbeater, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an issue with flaky end-to-end tests related to Cluster Network Policies (CNP) and domain-based rules. The core problem stemmed from network connectivity checks occurring before the underlying DNS resolution for domain names was complete, leading to inconsistent test results. The solution ensures that all necessary DNS name resolutions are finalized and address sets are populated before any connectivity assertions are made, thereby stabilizing the test suite and improving its reliability.

Highlights

  • Flaky E2E Test Fix: Addressed a flaky CNP domain e2e test by explicitly waiting for DNSNameResolver CRs to resolve addresses before performing connectivity checks.
  • DNS Resolution Wait Helper: Introduced a new helper function, waitForDNSResolversReady(), to ensure OVN ACL address sets are populated before network connectivity validations.
  • Increased Retry Parameters: Adjusted default retry parameters for connectivity checks from 20×2s to 30×3s, providing a safety net for potentially slow DNS resolution.
  • Root Cause Identification: Identified the root cause of flakiness as initially empty OVN ACL address sets when CNP rules are created/updated, due to the asynchronous nature of external DNSNameResolver component's resolution.
Changelog
  • test/e2e/cnp-domain/e2e_test.go
    • Imported the k8s.io/apimachinery/pkg/util/wait package.
    • Modified the testNetworkConnectivity function to increase retry attempts and duration for connectivity checks.
    • Added a new helper function waitForDNSResolversReady to poll for DNSNameResolver CRs to have resolved addresses.
    • Integrated calls to waitForDNSResolversReady into various CNP domain test cases to ensure DNS resolution completion before connectivity tests.
Activity
  • The pull request description indicates it was generated with Claude Code.

@dosubot dosubot bot added the test automation tests label Feb 26, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request effectively addresses the flakiness in CNP domain e2e tests by introducing a waitForDNSResolversReady helper function. This function ensures that DNSNameResolver CRs have resolved addresses before proceeding with connectivity checks, which is crucial for the OVN ACL address sets to be correctly populated. The increase in default retry parameters for network connectivity checks also provides a valuable safety net for scenarios involving slow DNS resolution. These changes significantly improve the reliability and stability of the e2e test suite.

dnsClient := f.DNSNameResolverClient()
labelSelector := fmt.Sprintf("anp=%s", name)

err := wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 60*time.Second, true, func(_ context.Context) (bool, error) {
medium

Using context.TODO() in wait.PollUntilContextTimeout is a common pattern, but it's generally better practice to use a cancellable context derived from a parent context (e.g., f.Context). This ensures proper context propagation and allows for earlier cancellation if the parent context is cancelled, leading to more robust test behavior.

Suggested change:

- err := wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 60*time.Second, true, func(_ context.Context) (bool, error) {
+ err := wait.PollUntilContextTimeout(f.Context, 2*time.Second, 60*time.Second, true, func(_ context.Context) (bool, error) {

…lation

The dnsnameresolver CoreDNS plugin populates Status.ResolvedNames
reactively (only when actual DNS queries flow through CoreDNS), not
proactively. The previous waitForDNSResolversReady checked for non-empty
Status.ResolvedNames before any connectivity test, creating a deadlock:
no DNS query happens → Status never populated → 60s timeout → test fails.

Replace with waitForDNSResolversCreated that only verifies the CR exists,
ensuring the controller has processed the CNP. The connectivity retry
logic (30 attempts × 3s) naturally handles the async chain:
  DNS query → plugin intercepts → Status updated → address set updated → ACL applied.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
@oilbeater oilbeater merged commit 3dd3fff into master Feb 26, 2026
74 of 76 checks passed
@oilbeater oilbeater deleted the fix/cnp-domain-e2e-flaky branch February 26, 2026 11:53
oilbeater added a commit that referenced this pull request Feb 27, 2026
…ling

Three E2E tests had flaky patterns identified during analysis of recent
fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no
  retry; OVN join subnet routes may not yet be installed on the host
  network stack when the pod turns Ready. Replace with WaitUntil (30s)
  and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no
  endpoint readiness wait before the curl. Add endpoint WaitUntil (1m)
  then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s)
  calls used fixed waits instead of condition-based polling. Replace
  the first sleep with WaitUntil waiting for conflictVlan1 to be
  processed (non-conflicting), and the second with WaitUntil waiting
  for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a
  hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval)
  using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a
  one-shot event list is too short on a loaded cluster. Replace with
  WaitUntil (500ms interval, 15s timeout) polling for the
  AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
oilbeater added a commit that referenced this pull request Feb 27, 2026
…ling (#6362)

* fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling


* fix(e2e): replace per-node sleep with WaitUntil in vlan_subinterfaces test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false,
the test verified existing subinterfaces were not deleted by sleeping
5 seconds inside the per-node loop (N nodes × 5s wasted time) and then
doing an immediate assertion.

This has two problems:
1. The sleep runs once per node, wasting N*5s even when the daemon
   reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window,
   ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check
passes on the first poll if subinterfaces are stable (common case), and
retries up to 30s if there is any transient disruption, eliminating
both issues.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(e2e): eliminate race between VIP finalizer addition and WaitToBeReady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set
Status.V4ip (triggering WaitToBeReady), but the finalizer was only
added later in handleUpdateVirtualIP (triggered by a subsequent
update event). This created a race window where CreateSync could
return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the
finalizer atomically in the same Update() call that sets spec/status,
so the VIP is fully initialized in one API operation.

Fix 2 (test framework): Update WaitToBeReady to require both an IP
address AND the controller finalizer before declaring a VIP ready,
ensuring tests only proceed with a fully-initialized VIP.

Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the
It block so the VIP is deleted even on test failure, preventing
the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>

---------

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>