k8sclient: use ServerVersion instead of pod list for kubeconfig validation #1478
pliurh wants to merge 2 commits into k8snetworkplumbingwg:master
Conversation
Summary of Changes: This pull request optimizes the kubeconfig validation process during the bootstrap phase. Instead of performing a resource-intensive operation like listing all pods, which can be slow and unnecessary in large Kubernetes clusters, the system now queries the API server's lightweight /version endpoint.
Code Review
This pull request improves performance by replacing an expensive pod listing operation with a lightweight server version check for kubeconfig validation. This is a beneficial change, especially for large clusters. I have identified one critical issue where an unhandled error could lead to a panic.
```go
if current != nil {
	cfg = config
}
```
This moves the config assignment outside of where we were previously validating. Previously, if we hit some error other than an unknown-authority error, we would keep the bootstrap kubeconfig instead of the cert-based kubeconfig; now we override the bootstrap either way.
I think this is okay, maybe even an improvement (if we can use the cert-based kubeconfig, we should), but I wanted to make sure this was your intention @pliurh
The old implementation is buggy. I did some refactoring, PTAL.
…ation

Listing all pods across all namespaces during bootstrap is expensive in large clusters and unnecessary since the result is discarded. Use the lightweight /version endpoint to validate connectivity instead.

Signed-off-by: Peng Liu <pliu@redhat.com>
Validate the per-node kubeconfig when a current certificate is available and fall back to the bootstrap kubeconfig only when the per-node config is no longer trusted. Also rebuild the derived per-node rest.Config from the reloaded bootstrap config so TLS settings are preserved and refreshed consistently.

Signed-off-by: Peng Liu <pliu@redhat.com>