fix: fix SLR e2e flaky failure caused by VIP not being cleaned up #6347
Conversation
When the e2e test's AfterEach cleanup deleted stsSvc before the SLRs, the endpoint_slice controller would remove the LoadBalancerHealthCheck (LBHC) for the now-empty service. Consequently, handleDelSwitchLBRule() could not find the LBHC, left the health-check VIP CR in place, and the VIP kept its IP reserved. This prevented the subnet finalizer from being removed (v4usingIPs remained 1), causing the subnet to be stuck in Terminating state and the test to time out.

Two complementary fixes:

1. e2e: reorder AfterEach so both SLRs are deleted before stsSvc, ensuring the LBHC is still present when handleDelSwitchLBRule runs.
2. controller: in handleDelSwitchLBRule, read the service's subnet annotation before deleting it and use it as a fallback when no LBHC is found, so the health-check VIP is always cleaned up even if the LBHC was already gone.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
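For context, a minimal sketch of fix 2 (the annotation fallback), assuming the general shape of `handleDelSwitchLBRule`; `deleteHealthCheckVip` is a hypothetical helper for illustration, not the merged code:

```go
// Inside handleDelSwitchLBRule, schematically: remember the backing
// service's subnet annotation before the service is deleted.
subnetForVip := ""
if svc, err := c.servicesLister.Services(info.Namespace).Get(name); err == nil {
	// util.LogicalSwitchAnnotation is "ovn.kubernetes.io/logical_switch".
	subnetForVip = svc.Annotations[util.LogicalSwitchAnnotation]
}

// ... delete the service, then look up the LBHCs for this SLR ...

if len(vips) == 0 && subnetForVip != "" {
	// Fallback: the LBHC was already garbage-collected, so derive the
	// health-check VIP from the saved subnet name and delete it directly,
	// ensuring no IP stays reserved in the subnet.
	if err := c.deleteHealthCheckVip(subnetForVip); err != nil { // hypothetical helper
		return err
	}
}
```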
…BRule

When multiple SLR controllers process deletion for SLRs sharing the same subnet (via the fallback path), both may attempt to delete the same VIP CRD. The second deletion gets an IsNotFound error, which previously caused an unnecessary error and retry loop.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
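A sketch of the tolerant deletion this commit describes, assuming kube-ovn's generated clientset for the cluster-scoped `Vip` CRD (`KubeOvnClient.KubeovnV1().Vips()`); the function name is illustrative:

```go
import (
	"context"

	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deleteVipIgnoreNotFound treats IsNotFound as success: if a second worker
// loses the race, the VIP is already gone and there is nothing to retry.
func (c *Controller) deleteVipIgnoreNotFound(ctx context.Context, name string) error {
	err := c.config.KubeOvnClient.KubeovnV1().Vips().Delete(ctx, name, metav1.DeleteOptions{})
	if err != nil && !k8serrors.IsNotFound(err) {
		return err // a real failure: let the workqueue requeue the SLR
	}
	return nil // deleted, or already deleted by the other SLR's worker
}
```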
Summary of Changes

Hello @oilbeater, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a flaky e2e test failure related to Switch Load Balancer Rules (SLR) where subnets would get stuck in a Terminating state. The core issue was a race condition during cleanup, leading to health-check VIPs not being properly deleted. The changes introduce a more robust cleanup order in e2e tests and implement a fallback mechanism in the controller to ensure VIPs are always cleaned up, along with improved error handling for VIP deletion.

Highlights
Changelog
Code Review
This pull request addresses a flaky e2e test failure caused by a race condition during resource cleanup, by correcting the deletion order in the e2e test's AfterEach block and making the handleDelSwitchLBRule controller logic more robust with a fallback mechanism. However, a critical vulnerability has been identified: the fallback logic trusts the ovn.kubernetes.io/logical_switch annotation on the backing service, which can be manipulated by users with access to the namespace. This allows an attacker to cause the deletion of arbitrary cluster-scoped Vip resources by modifying the service annotation before the SwitchLBRule is deleted. The review also suggests enhanced logging for better observability when unexpected errors occur.
// was already removed because the service was deleted before the SLR).
subnetForVip := ""
if svc, e := c.servicesLister.Services(info.Namespace).Get(name); e == nil {
	subnetForVip = svc.Annotations[util.LogicalSwitchAnnotation]
}
The fallback logic for identifying the VIP to delete relies on the ovn.kubernetes.io/logical_switch annotation of the service. Since this service is located in a user-controlled namespace, an attacker with edit permissions in that namespace could modify this annotation to point to an arbitrary Vip resource name. When the SwitchLBRule is subsequently deleted, the controller will delete the specified Vip resource without verifying its ownership or type, leading to unauthorized deletion of cluster-scoped resources (e.g., VM VIPs). To remediate this, the controller should verify that the Vip resource being deleted is indeed a health-check VIP associated with the current SwitchLBRule, for example by checking its labels or owner references.

Additionally, while the current logic correctly handles the case where the service is not found, it silently ignores other potential errors from c.servicesLister.Services().Get(). It would be beneficial for debugging to log a warning when an unexpected error occurs, providing better visibility into potential issues.
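A sketch of the suggested hardening, covering both review points; the lister name, the `Spec.Type` field, and the `util.SwitchLBRuleVip` constant are assumptions for illustration, not necessarily kube-ovn's actual identifiers:

```go
// Before deleting by the annotation-derived name, confirm the Vip is really
// a health-check VIP created for switch LB rules, not some other VIP.
vip, err := c.virtualIpsLister.Get(vipName)
if err != nil {
	if k8serrors.IsNotFound(err) {
		return nil // already cleaned up
	}
	// Surface unexpected lister errors instead of silently ignoring them.
	return err
}
if vip.Spec.Type != util.SwitchLBRuleVip {
	// Refuse to delete VIPs the SLR controller does not own; log for visibility.
	klog.Warningf("skip deleting vip %s: not a switch-lb-rule health-check vip", vipName)
	return nil
}
```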
…ling

Three E2E tests had flaky patterns identified during analysis of recent fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no retry; OVN join subnet routes may not yet be installed on the host network stack when the pod turns Ready. Replace with WaitUntil (30s) and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no endpoint readiness wait before the curl. Add endpoint WaitUntil (1m) then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s) calls used fixed waits instead of condition-based polling. Replace the first sleep with WaitUntil waiting for conflictVlan1 to be processed (non-conflicting), and the second with WaitUntil waiting for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval) using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a one-shot event list is too short on a loaded cluster. Replace with WaitUntil (500ms interval, 15s timeout) polling for the AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
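The generic shape of the condition-based polling, sketched here with apimachinery's `wait` package rather than the e2e framework's `WaitUntil` helper; `vlanClient`, `conflictVlan2Name`, and the condition are illustrative:

```go
import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// Instead of time.Sleep(10 * time.Second), poll every 2s until the
// controller has actually marked the second vlan as conflicting,
// up to a 30s deadline.
err := wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 30*time.Second, true,
	func(ctx context.Context) (bool, error) {
		vlan := vlanClient.Get(ctx, conflictVlan2Name) // illustrative e2e client call
		return vlan.Status.Conflict, nil               // done once the conflict is detected
	})
framework.ExpectNoError(err)
```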
…ling (#6362)

* fix(e2e): replace hard sleeps and unretried checks with WaitUntil polling

Three E2E tests had flaky patterns identified during analysis of recent fixes (#6343, #6345, #6347, #6349, #6355, #6358):

node/node.go:
- "should access overlay pods using node ip": RunHostCmdOrDie with no retry; OVN join subnet routes may not yet be installed on the host network stack when the pod turns Ready. Replace with WaitUntil (30s) and increase curl timeout from 2s to 5s.
- "should access overlay services using node ip": same issue plus no endpoint readiness wait before the curl. Add endpoint WaitUntil (1m) then wrap the connectivity check with WaitUntil (30s).

underlay/underlay.go:
- "should be able to detect conflict vlan subnet": two time.Sleep(10s) calls used fixed waits instead of condition-based polling. Replace the first sleep with WaitUntil waiting for conflictVlan1 to be processed (non-conflicting), and the second with WaitUntil waiting for conflictVlan2.Status.Conflict to become true.
- checkU2OFilterOpenFlowExist: manual "for range 3" retry loop with a hard 5s sleep. Replace with a deadline-based loop (30s, 2s interval) using time.Now().After for a clean timeout boundary.

subnet/subnet.go:
- "should detect MAC address conflict": time.Sleep(2s) before a one-shot event list is too short on a loaded cluster. Replace with WaitUntil (500ms interval, 15s timeout) polling for the AddressConflict Warning event.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(e2e): replace per-node sleep with WaitUntil in vlan_subinterfaces test

After patching ProviderNetwork with AutoCreateVlanSubinterfaces=false, the test verified existing subinterfaces were not deleted by sleeping 5 seconds inside the per-node loop (N nodes × 5s wasted time) and then doing an immediate assertion. This has two problems:

1. The sleep runs once per node, wasting N*5s even when the daemon reconciles quickly.
2. If the controller deletes a subinterface after the 5s sleep window, ExpectTrue produces a false-positive pass.

Replace with WaitUntil (2s interval, 30s timeout) per node: the check passes on the first poll if subinterfaces are stable (common case), and retries up to 30s if there is any transient disruption, eliminating both issues.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(e2e): eliminate race between VIP finalizer addition and WaitToBeReady

Root cause: handleAddVirtualIP called createOrUpdateVipCR to set Status.V4ip (triggering WaitToBeReady), but the finalizer was only added later in handleUpdateVirtualIP (triggered by a subsequent update event). This created a race window where CreateSync could return a VIP object with V4ip set but without the finalizer.

Fix 1 (controller): In createOrUpdateVipCR's else branch, add the finalizer atomically in the same Update() call that sets spec/status, so the VIP is fully initialized in one API operation.
Fix 2 (test framework): Update WaitToBeReady to require both an IP address AND the controller finalizer before declaring a VIP ready, ensuring tests only proceed with a fully-initialized VIP.
Fix 3 (test): Add ginkgo.DeferCleanup for testVipName inside the It block so the VIP is deleted even on test failure, preventing the AfterEach subnetClient.DeleteSync from timing out.

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>

---------

Signed-off-by: Mengxin Liu <liumengxinfly@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
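And the deadline-based loop described for `checkU2OFilterOpenFlowExist`, sketched with an illustrative condition function; `framework.Failf` is assumed from the e2e framework:

```go
// Poll every 2s with a hard 30s budget; time.Now().After gives a clean
// timeout boundary instead of a fixed "for range 3" with 5s sleeps.
deadline := time.Now().Add(30 * time.Second)
for {
	if u2oFilterFlowExists() { // illustrative: dump and grep the OpenFlow table
		break
	}
	if time.Now().After(deadline) {
		framework.Failf("u2o filter openflow rule not found within 30s")
	}
	time.Sleep(2 * time.Second)
}
```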
Root Cause
The SLR e2e test intermittently fails with the subnet stuck in Terminating state (`v4usingIPs: 1`). The root cause is a race condition between the `stsSvc` deletion and the SLR deletion:

1. `stsSvc` was deleted before the SLRs. This triggers the `endpoint_slice` controller to call `LoadBalancerDeleteVip`, which removes the LBHC from `lb.HealthCheck`.
2. When `handleDelSwitchLBRule` later runs and queries `ListLoadBalancerHealthChecks` (via the in-memory `WhereCache`), the LBHC is already gone from the cache.
3. The `vips` map is empty → the health-check VIP CRD is never deleted → the VIP holds an IP in the subnet → `v4usingIPs = 1` → the subnet finalizer is never removed → the subnet is stuck in Terminating.

This race was confirmed by controller logs: the `clean health check` call (removing the LBHC from the LB) happens 12ms before the SLR deletion is even enqueued.

Fixes
1. e2e test: fix AfterEach cleanup order (`test/e2e/kube-ovn/switch_lb_rule/switch_lb_rule.go`)

Delete the SLRs before `stsSvc` to ensure `handleDelSwitchLBRule` runs while the LBHC is still in the OVN NB DB.

2. Controller: add fallback VIP cleanup (`pkg/controller/switch_lb_rule.go`)

Before deleting the Service, read its `ovn.kubernetes.io/logical_switch` annotation. If `handleDelSwitchLBRule` finds no LBHC in the cache (i.e. the LBHC was already GC'd), use the saved subnet name as a fallback to identify and delete the health-check VIP CRD. This makes the cleanup robust even if the race occurs.

3. Controller: handle `IsNotFound` on VIP deletion (`pkg/controller/switch_lb_rule.go`)

When two SLRs share the same subnet and both fall through to the fallback path, both will attempt to delete the same VIP CRD. The second attempt correctly receives `IsNotFound`, which should be ignored rather than treated as an error.

Test plan
🤖 Generated with Claude Code