backend: Fix missing k8s token when in "in-cluster" mode #2477
Force-pushed from 088617f to d337ebe
Thanks for this fix. It looks like there's a formatting issue. Maybe this will fix it:
Ah ok, thanks for the hint. I fixed it 🙂
@jamowei, left a comment. Also, can you squash these and give a title + description to the commit? The title should be in imperative mood (e.g. 'Fix', not 'Fixed' or 'Fixes') and have a `backend:` prefix.
@joaquimrocha Ok now?
Force-pushed from 1dc4c1c to 9ca13c4
@jamowei The 1st commit is now in the right format, thanks! It still seems, though, like there are several commits, including a merge commit, and they are not all related to the changes. Please squash these together, and maybe remove the changes to the initial logging, unless you want to propose those in a different commit. I asked some colleagues for help.
// load kubeConfig clusters
err := kubeconfig.LoadAndStoreKubeConfigs(config.kubeConfigStore, kubeConfigPath, kubeconfig.KubeConfig)
-if err != nil {
+if err != nil && !config.useInCluster {
@illume I think you made some changes related to this in the past, in case you still remember whether this is related.
inClusterAuthInfo := &api.AuthInfo{
	Token:     clusterConfig.BearerToken,
	TokenFile: clusterConfig.BearerTokenFile,
}
@yolossn, can you help me understand if this is fine?
This change will make in-cluster deployments open without any kind of authentication. This is a major change to the existing functionality.
BTW, each commit (although I think this PR should just be one) needs to be signed off. That's done by running …
…unning inside k8s cluster with `in-cluster` mode enabled Signed-off-by: Jan <[email protected]>
I reran this job that failed... https://github.com/headlamp-k8s/headlamp/actions/runs/11580255626/job/32326141784?pr=2477 (Maybe it was an intermittent error.)
hrmmm. The Build Container and test job still seems to be failing. I'll try it again later, and if it's still failing I'll look into it further.
@illume @joaquimrocha Is there any update? We also urgently need this feature!
@mdoerries We are now prioritizing this.
I think you have to fix the E2E tests with Playwright. There it looks like a token is expected when running in "in-cluster" mode... which in my opinion is wrong 😉
@jamowei, I got @yolossn (one of our team's engineers) to take a look and he's raised the concern that getting the token into the config like that, when in-cluster, may lead to a security problem: you would be configuring the access for all users that way, i.e. if Headlamp successfully accesses the cluster, it will not prompt the user for a token, and in a shared environment this can be a security problem, at least for those who do not expect this behavior. AFAIU, you can accomplish the same by using a kubeconfig with the cluster's data (including the token) and using the …
@jamowei Given my previous comment, I don't think we can merge this PR. At least not as is; we'd require a flag like …
Ok, I understand this point and I will try the …
I would name it … What do you think, @joaquimrocha @mdoerries?
I think we need an arg name that clearly states what it is and in what context, because maybe some admins would think use-service-account is for prompting users on the client side to use an SA token. I am not 100% happy with my previous suggestion for a name, but something along those lines, where it's clear what we are turning on/off, would be better IMO.
This PR fixes issue #1826: you have to provide an extra k8s token in the UI although `headlamp` is running inside a k8s cluster with `in-cluster` mode enabled. These are the logs you get, which are also confusing:

The problem was that the `kubeconfig.GetInClusterContext()` function did not use the service token which gets mounted into the pod when creating the cluster credentials. I tested the fix with minikube and now I don't have to pass an extra token when opening the UI 🙂