fix(services): SLRs could not target VM endpoints #5397
oilbeater merged 1 commit into kubeovn:master
Conversation
Summary of Changes
Hello @SkalaNetworks, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request resolves a bug in Kube-OVN where Service Load Balancer Rules (SLRs) and EndpointSlices were unable to correctly target Virtual Machine (VM) endpoints for health checks. The fix involves enhancing the ipPortMapping generation process by introducing logic to correctly derive Logical Switch Port (LSP) names for virt-launcher pods, which represent VMs. This ensures that health checks can accurately probe VM backends, improving the reliability of services backed by VMs.
Highlights
- Bug Fix: VM Endpoint Targeting: This PR addresses a bug where Service Load Balancer Rules (SLRs) and EndpointSlices could not correctly target Virtual Machine (VM) endpoints. The issue stemmed from an incorrect computation of Logical Switch Port (LSP) names for `virt-launcher` pods, which represent VMs.
- Improved LSP Name Resolution for VMs: Logic has been introduced to accurately identify `virt-launcher` pods and extract the actual VM name from their pod name. This ensures that the correct LSP name is generated for VM endpoints, allowing health checks and service monitors to function properly.
- Code Refactoring and Test Coverage: The LSP name generation logic has been encapsulated into dedicated helper functions (`getEndpointTargetLSP` and `getVmNameFromLauncherPod`) for better modularity. Unit tests have been added to validate the new LSP naming scheme for both standard pods and VM launcher pods (see the sketch after this list).
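For illustration, here is a minimal Go sketch of what such helpers could look like. The function names come from this PR, but the bodies below are an assumption based on KubeVirt's `virt-launcher-<vm-name>-<suffix>` pod naming and the `vmName.namespace` LSP scheme described here, not the exact code that was merged (and they ignore custom providers):

```go
package main

import (
	"fmt"
	"strings"
)

const launcherPrefix = "virt-launcher-"

// getVmNameFromLauncherPod extracts the VM name from a virt-launcher pod name.
// KubeVirt names launcher pods "virt-launcher-<vm-name>-<random-suffix>", so the
// prefix and the trailing suffix segment are stripped off.
func getVmNameFromLauncherPod(podName string) string {
	trimmed := strings.TrimPrefix(podName, launcherPrefix)
	if i := strings.LastIndex(trimmed, "-"); i > 0 {
		return trimmed[:i]
	}
	return trimmed
}

// getEndpointTargetLSP returns the LSP name a health check should probe:
// vmName.namespace for VM launcher pods, podName.namespace for ordinary pods.
// (Default provider only; custom providers use a different naming scheme.)
func getEndpointTargetLSP(podName, namespace string) string {
	if strings.HasPrefix(podName, launcherPrefix) {
		return fmt.Sprintf("%s.%s", getVmNameFromLauncherPod(podName), namespace)
	}
	return fmt.Sprintf("%s.%s", podName, namespace)
}

func main() {
	fmt.Println(getEndpointTargetLSP("nginx-7f4b9", "default"))            // nginx-7f4b9.default
	fmt.Println(getEndpointTargetLSP("virt-launcher-testvm-abcde", "vms")) // testvm.vms
}
```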
Note: the detection logic for virt-launcher pods doesn't fetch the Pod definition to check whether it carries an annotation marking it as a VM. It's pretty simple, but it opens a flaw where a user could create a pod named "virt-launcher". I made that choice to avoid unnecessary fetches; the Service/LB logic needs a major overhaul anyway, so this should stay efficient and straightforward for the moment. I still need to deal with multihomed VMs/Pods that use a custom provider: the current code naively assumes everything runs on the default provider, but with a custom provider your LSPs follow a different naming scheme.
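For reference, a hedged sketch of the more robust check mentioned above, which inspects the fetched Pod instead of trusting its name. The owner-reference approach, the `VirtualMachineInstance` kind, and the package/function names are assumptions about KubeVirt launcher-pod metadata; this is not code from the PR:

```go
package serviceutil // hypothetical package name

import (
	corev1 "k8s.io/api/core/v1"
)

// vmNameFromPodMetadata returns the owning VirtualMachineInstance name when the
// pod is a KubeVirt launcher pod. Checking metadata closes the "user creates a
// pod literally named virt-launcher-..." loophole, at the cost of needing the
// Pod object (an API fetch or an informer/lister lookup).
func vmNameFromPodMetadata(pod *corev1.Pod) (string, bool) {
	for _, ref := range pod.OwnerReferences {
		// Assumption: KubeVirt launcher pods carry an owner reference to their
		// VirtualMachineInstance; verify against the KubeVirt version in use.
		if ref.Kind == "VirtualMachineInstance" && ref.Name != "" {
			return ref.Name, true
		}
	}
	return "", false
}
```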
Code Review
The code changes introduce a fix to service load balancer functionality for virtual machine endpoints. The approach of creating a dedicated function to derive the logical switch port (LSP) name is clean and improves the code's modularity. There is a critical issue in the new logic for parsing VM names from pod names, which could lead to incorrect behavior in edge cases, and the tests should be expanded to cover these edge cases.
Pull Request Test Coverage Report for Build 15887426971
💛 - Coveralls
Signed-off-by: SkalaNetworks <contact@skala.network>
9d02b07 to 6432fc0
I agree, a refactor is needed. We can leave it as is and optimize it later.
Signed-off-by: SkalaNetworks <contact@skala.network>
Pull Request
What type of this PR
Examples of user facing changes:
SLRs/EndpointSlices were flawed when generating the ipPortMapping for LoadBalancers. The mapping is used by health checks to verify that each backend IP is responsive by probing the corresponding LSP.
The LSP names were computed without "ovs.PodNameToPortName", and even if they had been, that alone wouldn't have been enough.
When an SLR/Service/bunch of EndpointSlices targets "normal" pods, the LSP name was generated correctly (podName.namespace). The problem is that the same naming scheme was applied to virt-launcher pods, which represent VMs.
But Kube-OVN doesn't use the same naming scheme for "normal" pods and for VMs, so using the podName.namespace template for virt-launchers generates a faulty LSP name. VMs use a simpler and more stable scheme, vmName.namespace.
The fix identifies virt-launcher pods and computes the correct LSP name for them, in turn generating the right mapping.
Service monitors should be correctly generated from there.
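To make the mapping concrete, here is a small self-contained sketch of how an ipPortMapping could be assembled per endpoint. The `endpoint` struct, `targetLSP`, and `buildIpPortMapping` names are illustrative rather than kube-ovn's actual types, and only the default-provider naming is shown:

```go
package main

import (
	"fmt"
	"strings"
)

// endpoint is an illustrative stand-in for a resolved EndpointSlice address.
type endpoint struct {
	IP        string
	PodName   string
	Namespace string
}

// targetLSP mirrors the VM-aware naming described above: vmName.namespace for
// virt-launcher pods, podName.namespace for ordinary pods.
func targetLSP(podName, namespace string) string {
	if strings.HasPrefix(podName, "virt-launcher-") {
		vm := strings.TrimPrefix(podName, "virt-launcher-")
		if i := strings.LastIndex(vm, "-"); i > 0 {
			vm = vm[:i] // drop the random launcher suffix
		}
		return vm + "." + namespace
	}
	return podName + "." + namespace
}

// buildIpPortMapping associates each backend IP with the LSP that the
// LoadBalancer health check should probe.
func buildIpPortMapping(eps []endpoint) map[string]string {
	mapping := make(map[string]string, len(eps))
	for _, ep := range eps {
		mapping[ep.IP] = targetLSP(ep.PodName, ep.Namespace)
	}
	return mapping
}

func main() {
	eps := []endpoint{
		{IP: "10.16.0.5", PodName: "nginx-7f4b9", Namespace: "default"},
		{IP: "10.16.0.9", PodName: "virt-launcher-testvm-abcde", Namespace: "vms"},
	}
	fmt.Println(buildIpPortMapping(eps))
	// map[10.16.0.5:nginx-7f4b9.default 10.16.0.9:testvm.vms]
}
```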
Which issue(s) this PR fixes
Fixes #5337