Description
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
The kubectl auth plugin we use, https://github.com/int128/kubelogin, keeps a cache under `~/.kube/cache/oidc-login`. The file names there are the cache keys, and those keys are hashes of the exec args of the user entries in the KUBECONFIG file.
For example, in kubeconfig, let's say we have two users for cluster mycluster
, [email protected]
and [email protected]
:
- name: [email protected]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://omni.example.org/oidc
      - --oidc-client-id=native
      - --oidc-extra-scope=cluster:mycluster
      command: kubectl
      env: null
      provideClusterInfo: false
- name: [email protected]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://omni.example.org/oidc
      - --oidc-client-id=native
      - --oidc-extra-scope=cluster:mycluster
      command: kubectl
      env: null
      provideClusterInfo: false
These two users end up with the same cache key, so switching the kubectl context and expecting to authenticate as a different user does not work as expected: the token cached for the first user is reused.
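To make the collision concrete, here is a minimal sketch in Go. It is not kubelogin's actual key derivation; it just assumes, as described above, that the cache key is a hash over the user's exec args, in which case two user entries with identical args necessarily map to the same cache file:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// cacheKey is a simplified stand-in for the cache key derivation:
// it hashes only the exec args of a kubeconfig user entry.
func cacheKey(args []string) string {
	sum := sha256.Sum256([]byte(strings.Join(args, "\x00")))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Both users in the kubeconfig above carry exactly the same args.
	args := []string{
		"oidc-login",
		"get-token",
		"--oidc-issuer-url=https://omni.example.org/oidc",
		"--oidc-client-id=native",
		"--oidc-extra-scope=cluster:mycluster",
	}

	// The two users differ only by the kubeconfig user name, which never
	// enters the key, so the two keys printed here are identical.
	fmt.Println("user A:", cacheKey(args))
	fmt.Println("user B:", cacheKey(args))
}
```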
This causes confusion, and the way to get around it at the moment is to run `kubectl oidc-login clean` or `rm -rf ~/.kube/cache/oidc-login`.
This is not ideal: authenticating as an unexpected user is still confusing, and clearing the cache means we need to re-authenticate.
In other words, the users we define in the KUBECONFIGs we generate are unique by instance+cluster (`--oidc-issuer-url` + `--oidc-extra-scope=cluster:`), not by instance+cluster+user.
We should look into whether we want to include user information in the `oidc-login` input in one way or another; if we do, we also need to verify it on the server side during authentication.
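As a rough sketch of that direction (the extra `user:` scope and the identities `alice`/`bob` below are purely hypothetical, not existing kubelogin or Omni behavior), any per-user value that ends up in the exec args would already split the cache keys under the same hash-of-args assumption. Whatever value is used would of course still need to be verified server-side, since clients control their own kubeconfig:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// cacheKey mirrors the simplified hash-of-args sketch above.
func cacheKey(args []string) string {
	sum := sha256.Sum256([]byte(strings.Join(args, "\x00")))
	return hex.EncodeToString(sum[:])
}

func main() {
	base := []string{
		"oidc-login",
		"get-token",
		"--oidc-issuer-url=https://omni.example.org/oidc",
		"--oidc-client-id=native",
		"--oidc-extra-scope=cluster:mycluster",
	}

	// Hypothetical: append a per-user value (e.g. an extra scope carrying
	// the Omni identity) so that each user gets its own cache key.
	userA := append(append([]string{}, base...), "--oidc-extra-scope=user:alice")
	userB := append(append([]string{}, base...), "--oidc-extra-scope=user:bob")

	// The two keys now differ, so the cached tokens no longer clash.
	fmt.Println("user A:", cacheKey(userA))
	fmt.Println("user B:", cacheKey(userB))
}
```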
Expected Behavior
Multiple user definitions against the same Omni instance and the same cluster live happily and separately in my kubeconfig, each authenticating as its own user.
Steps To Reproduce
- Create a cluster.
- Do `omnictl kubeconfig --cluster=***`.
- Either switch the Omni context to use another Omni user, OR simply edit the KUBECONFIG file to create a duplicate user & context with a different name.
- Try to use the new user: do `kubectl get pod`.
- It will authenticate as the previous user.
What browsers are you seeing the problem on?
No response
Anything else?
No response