
fix: fix SLR e2e flaky failure caused by VIP not being cleaned up #6347

Merged
oilbeater merged 2 commits into master from fix/slr-e2e-vip-cleanup
Feb 26, 2026
Conversation

@oilbeater
Collaborator

Root Cause

The SLR e2e test intermittently fails with the subnet stuck in the Terminating state (v4usingIPs: 1). The root cause is a race condition between the stsSvc deletion and the SLR deletion:

  1. In the e2e AfterEach, stsSvc was deleted before SLRs. This triggers the endpoint_slice controller to call LoadBalancerDeleteVip, which removes the LBHC from lb.HealthCheck.
  2. OVSDB's garbage collection then deletes the orphaned LBHC object and asynchronously propagates the change to the client cache.
  3. When handleDelSwitchLBRule later runs and queries ListLoadBalancerHealthChecks (via in-memory WhereCache), the LBHC is already gone from cache.
  4. As a result, the vips map is empty → the health-check VIP CRD is never deleted → the VIP holds an IP in the subnet → v4usingIPs = 1 → the subnet finalizer is never removed → subnet stuck in Terminating.

This race was confirmed by controller logs: the health-check cleanup call (removing the LBHC from the LB) happens 12ms before the SLR deletion is even enqueued.
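The race above can be reproduced with a self-contained sketch (the types and helper names here are illustrative stand-ins, not kube-ovn's actual structures): once the service deletion has emptied the LBHC "cache", the later SLR deletion has nothing to look up and the VIP CRD silently leaks.

```go
package main

// cluster is a toy model of the state involved in the race.
type cluster struct {
	lbHealthChecks map[string]string // LBHC name -> health-check VIP CRD name
	vipCRDs        map[string]bool   // VIP CRD name -> exists (each holds one subnet IP)
}

// deleteService mimics the endpoint_slice controller path: deleting the
// service removes the LBHC, which OVSDB GC then purges from the client cache.
func (c *cluster) deleteService(lbhc string) {
	delete(c.lbHealthChecks, lbhc)
}

// handleDelSwitchLBRule mimics the original controller logic: it only
// deletes VIP CRDs it can still find via the LBHC cache. If the LBHC is
// already gone, the VIP CRD silently leaks.
func (c *cluster) handleDelSwitchLBRule(lbhc string) {
	if vip, ok := c.lbHealthChecks[lbhc]; ok {
		delete(c.vipCRDs, vip)
	}
}

// v4UsingIPs counts IPs still held by VIP CRDs (the subnet finalizer only
// clears when this reaches zero).
func (c *cluster) v4UsingIPs() int { return len(c.vipCRDs) }
```

Running the two deletion orders against this model shows the leak: service-first leaves v4UsingIPs at 1, SLR-first drains it to 0.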

Fixes

1. e2e test: fix AfterEach cleanup order (test/e2e/kube-ovn/switch_lb_rule/switch_lb_rule.go)

Delete SLRs before stsSvc to ensure handleDelSwitchLBRule runs while the LBHC is still in the OVN NB DB.

2. Controller: add fallback VIP cleanup (pkg/controller/switch_lb_rule.go)

Before deleting the Service, read its ovn.kubernetes.io/logical_switch annotation. If handleDelSwitchLBRule finds no LBHC in cache (i.e. LBHC was already GC'd), use the saved subnet name as a fallback to identify and delete the health-check VIP CRD. This makes the cleanup robust even if the race occurs.
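The fallback decision can be sketched as a small pure function; the VIP naming helper below is a hypothetical placeholder for whatever scheme the controller actually uses, and the annotation plumbing is omitted:

```go
package main

// healthCheckVipName derives a health-check VIP CRD name from a subnet.
// The naming scheme here is an assumption for illustration only.
func healthCheckVipName(subnet string) string {
	return "vip-" + subnet
}

// vipsToDelete returns the VIP CRD names to remove. lbhcVips is what the
// LBHC cache query returned; savedSubnet is the ovn.kubernetes.io/logical_switch
// annotation value captured from the Service before it was deleted.
func vipsToDelete(lbhcVips []string, savedSubnet string) []string {
	if len(lbhcVips) > 0 {
		return lbhcVips // normal path: LBHC still in the cache
	}
	if savedSubnet == "" {
		return nil // no fallback information available
	}
	// Fallback path: the LBHC was already GC'd, so derive the VIP name
	// from the subnet annotation saved before the Service deletion.
	return []string{healthCheckVipName(savedSubnet)}
}
```

The key property is that the fallback only engages when the cache query comes back empty, so the normal path is unchanged.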

3. Controller: handle IsNotFound on VIP deletion (pkg/controller/switch_lb_rule.go)

When two SLRs share the same subnet and both fall through to the fallback path, both attempt to delete the same VIP CRD. The second attempt receives IsNotFound, which should be ignored rather than treated as an error.
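The idempotent delete can be sketched as follows, with a local errNotFound standing in for Kubernetes' apierrors.IsNotFound check (the store type is illustrative):

```go
package main

import "errors"

// errNotFound stands in for a Kubernetes NotFound API error.
var errNotFound = errors.New("not found")

// vipStore is a toy stand-in for the VIP CRD API.
type vipStore map[string]bool

func (s vipStore) delete(name string) error {
	if !s[name] {
		return errNotFound
	}
	delete(s, name)
	return nil
}

// deleteVip ignores not-found so that two SLRs racing to delete the same
// subnet's health-check VIP both succeed; any other error is propagated.
func deleteVip(s vipStore, name string) error {
	if err := s.delete(name); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}
```

With this shape, the second concurrent deletion is a no-op instead of triggering an error-and-requeue loop.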

Test plan

  • Existing SLR e2e tests pass without flakiness
  • Verify subnet is fully cleaned up after SLR e2e AfterEach

🤖 Generated with Claude Code

oilbeater and others added 2 commits February 26, 2026 06:57
When the e2e test's AfterEach cleanup deleted stsSvc before the SLRs,
the endpoint_slice controller would remove the LoadBalancerHealthCheck
(LBHC) for the now-empty service. Consequently, handleDelSwitchLBRule()
could not find the LBHC, left the health-check VIP CR in place, and the
VIP kept its IP reserved. This prevented the subnet finalizer from being
removed (v4usingIPs remained 1), causing the subnet to be stuck in
Terminating state and the test to time out.

Two complementary fixes:
1. e2e: reorder AfterEach so both SLRs are deleted before stsSvc, ensuring
   the LBHC is still present when handleDelSwitchLBRule runs.
2. controller: in handleDelSwitchLBRule, read the service's subnet
   annotation before deleting it and use it as a fallback when no LBHC
   is found, so the health-check VIP is always cleaned up even if the
   LBHC was already gone.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…BRule

When multiple SLR controllers process deletion for SLRs sharing the same
subnet (via the fallback path), both may attempt to delete the same VIP
CRD. The second deletion gets an IsNotFound error, which previously caused
an unnecessary error and retry loop.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@gemini-code-assist
Contributor

Summary of Changes


This pull request addresses a flaky e2e test failure related to Switch Load Balancer Rules (SLR) where subnets would get stuck in a Terminating state. The core issue was a race condition during cleanup, leading to health-check VIPs not being properly deleted. The changes introduce a more robust cleanup order in e2e tests and implement a fallback mechanism in the controller to ensure VIPs are always cleaned up, along with improved error handling for VIP deletion.

Highlights

  • e2e Test Cleanup Order: The e2e test cleanup order was adjusted to delete Switch Load Balancer Rules (SLRs) before the associated service, preventing a race condition where the service's health check might be removed prematurely.
  • Controller Fallback VIP Cleanup: A fallback mechanism was implemented in the controller to ensure health-check VIP CRDs are cleaned up even if the Load Balancer Health Check (LBHC) is already gone due to a race condition. This is achieved by pre-reading the subnet annotation from the service.
  • Robust VIP Deletion: The controller's VIP deletion logic was enhanced to gracefully handle IsNotFound errors, preventing failures when multiple SLRs attempt to delete the same VIP CRD.
Changelog
  • pkg/controller/switch_lb_rule.go
    • Added logic to retrieve and store the logical switch annotation from the service before its deletion, to be used as a fallback for VIP cleanup.
    • Implemented a fallback mechanism to identify and delete health-check VIPs using the stored subnet annotation if no Load Balancer Health Checks are found.
    • Modified the VIP deletion process to ignore IsNotFound errors, making it resilient to concurrent deletion attempts.
  • test/e2e/kube-ovn/switch_lb_rule/switch_lb_rule.go
    • Reordered the cleanup steps in the AfterEach block to ensure Switch Load Balancer Rules are deleted before the associated service.

@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. bug Something isn't working test automation tests labels Feb 26, 2026
@gemini-code-assist gemini-code-assist bot left a comment
Code Review

This pull request addresses a flaky e2e test failure caused by a race condition during resource cleanup by correcting the deletion order in the e2e test's AfterEach block and making the handleDelSwitchLBRule controller logic more robust with a fallback mechanism. However, a critical vulnerability has been identified: the fallback logic trusts the ovn.kubernetes.io/logical_switch annotation on the backing service, which can be manipulated by users with access to the namespace. This allows an attacker to cause the deletion of arbitrary cluster-scoped Vip resources by modifying the service annotation before the SwitchLBRule is deleted. Additionally, enhancing logging for better observability in case of unexpected errors is suggested.

    // was already removed because the service was deleted before the SLR).
    subnetForVip := ""
    if svc, e := c.servicesLister.Services(info.Namespace).Get(name); e == nil {
        subnetForVip = svc.Annotations[util.LogicalSwitchAnnotation]
Contributor
security-high

The fallback logic for identifying the VIP to delete relies on the ovn.kubernetes.io/logical_switch annotation of the service. Since this service lives in a user-controlled namespace, an attacker with edit permissions in that namespace could modify the annotation to point to an arbitrary Vip resource name. When the SwitchLBRule is subsequently deleted, the controller will delete the specified Vip resource without verifying its ownership or type, leading to unauthorized deletion of cluster-scoped resources (e.g. VM VIPs). To remediate this, the controller should verify that the Vip being deleted is indeed a health-check VIP associated with the current SwitchLBRule, for example by checking its labels or owner references.

Additionally, while the current logic correctly handles the case where the service is not found, it silently ignores other errors from c.servicesLister.Services().Get(). Logging a warning on unexpected errors would give better visibility into potential issues.
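The suggested ownership check could look roughly like this; the label keys and values are assumptions for illustration, not kube-ovn's actual labels:

```go
package main

// vip is a toy stand-in for the cluster-scoped Vip CRD.
type vip struct {
	name   string
	labels map[string]string
}

// mayDelete returns true only when the Vip carries the expected
// health-check marker and references the SLR's subnet, so a spoofed
// service annotation cannot redirect the delete to an arbitrary Vip
// (e.g. a VM VIP).
func mayDelete(v vip, subnet string) bool {
	return v.labels["purpose"] == "health-check" &&
		v.labels["subnet"] == subnet
}
```

Owner references on the Vip pointing back at the SwitchLBRule would be an even stronger check, since labels alone can be copied.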

@oilbeater oilbeater merged commit 53b0946 into master Feb 26, 2026
73 of 76 checks passed
@oilbeater oilbeater deleted the fix/slr-e2e-vip-cleanup branch February 26, 2026 08:59
oilbeater added a commit that referenced this pull request Feb 27, 2026
…ling

Three E2E tests had flaky patterns identified during analysis of recent
fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no
  retry; OVN join subnet routes may not yet be installed on the host
  network stack when the pod turns Ready. Replace with WaitUntil (30s)
  and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no
  endpoint readiness wait before the curl. Add endpoint WaitUntil (1m)
  then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s)
  calls used fixed waits instead of condition-based polling. Replace
  the first sleep with WaitUntil waiting for conflictVlan1 to be
  processed (non-conflicting), and the second with WaitUntil waiting
  for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a
  hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval)
  using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a
  one-shot event list is too short on a loaded cluster. Replace with
  WaitUntil (500ms interval, 15s timeout) polling for the
  AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
oilbeater added a commit that referenced this pull request Feb 27, 2026
…ling (#6362)

* fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling


* fix(e2e): replace per-node sleep with WaitUntil in vlan_subinterfaces test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false,
the test verified existing subinterfaces were not deleted by sleeping
5 seconds inside the per-node loop (N nodes × 5s wasted time) and then
doing an immediate assertion.

This has two problems:
1. The sleep runs once per node, wasting N*5s even when the daemon
   reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window,
   ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check
passes on the first poll if subinterfaces are stable (common case), and
retries up to 30s if there is any transient disruption, eliminating
both issues.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(e2e): eliminate race between VIP finalizer addition and WaitToBeReady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set
Status.V4ip (triggering WaitToBeReady), but the finalizer was only
added later in handleUpdateVirtualIP (triggered by a subsequent
update event). This created a race window where CreateSync could
return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the
finalizer atomically in the same Update() call that sets spec/status,
so the VIP is fully initialized in one API operation.

Fix 2 (test framework): Update WaitToBeReady to require both an IP
address AND the controller finalizer before declaring a VIP ready,
ensuring tests only proceed with a fully-initialized VIP.

Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the
It block so the VIP is deleted even on test failure, preventing
the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>

---------

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
