Description
Installing the kube-state-metrics Helm chart at version 6.3.0 or newer with kubeRBACProxy.enabled=true fails: the kube-state-metrics container never becomes ready.
What's your helm version?
version.BuildInfo{Version:"v3.18.6", GitCommit:"b76a950f6835474e0906b96c9ec68a2eff3a6430", GitTreeState:"clean", GoVersion:"go1.24.6"}
What's your kubectl version?
Client Version: v1.33.2
Kustomize Version: v5.6.0
Server Version: v1.33.4+k3s1
Which chart?
kube-state-metrics
What's the chart version?
6.3.0 and 6.4.0
What happened?
I installed the chart with kubeRBACProxy.enabled=true, and the kube-state-metrics container never becomes ready. I believe this problem was pointed out in the very PR that caused it.
What you expected to happen?
Both containers become ready, with no probe failures.
How to reproduce it?
The command used to install the chart was:
helm install ksm prometheus-community/kube-state-metrics --version 6.3.0 --set kubeRBACProxy.enabled=true

The issue can be checked like this:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ksm-kube-state-metrics-7d98c5d4c9-fx4n7 1/2 Running 0 2m1s

$ kubectl describe pod ksm-kube-state-metrics-7d98c5d4c9-fx4n7
Name: ksm-kube-state-metrics-7d98c5d4c9-fx4n7
Namespace: default
Priority: 0
Service Account: ksm-kube-state-metrics
Node: lima-rancher-desktop/192.168.5.15
Start Time: Thu, 25 Sep 2025 19:22:31 -0300
Labels: app.kubernetes.io/component=metrics
app.kubernetes.io/instance=ksm
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kube-state-metrics
app.kubernetes.io/part-of=kube-state-metrics
app.kubernetes.io/version=2.17.0
helm.sh/chart=kube-state-metrics-6.3.0
pod-template-hash=7d98c5d4c9
Annotations: <none>
Status: Running
SeccompProfile: RuntimeDefault
IP: 10.42.0.15
IPs:
IP: 10.42.0.15
Controlled By: ReplicaSet/ksm-kube-state-metrics-7d98c5d4c9
Containers:
kube-state-metrics:
Container ID: docker://8f832b04f1f0f939a7face9384f383094af3017eff71305e98dd501a7a082b2d
Image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.17.0
Image ID: docker-pullable://registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:2bbc915567334b13632bf62c0a97084aff72a36e13c4dabd5f2f11c898c5bacd
Port: <none>
Host Port: <none>
Args:
--host=127.0.0.1
--port=9090
--resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
--telemetry-host=127.0.0.1
--telemetry-port=9091
State: Running
Started: Thu, 25 Sep 2025 19:22:32 -0300
Ready: False
Restart Count: 0
Liveness: http-get https://:http/livez delay=5s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get https://:metrics/readyz delay=5s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4kck7 (ro)
kube-rbac-proxy-http:
Container ID: docker://83a26ca87d7ee761385b0905bca37a816ec0283e4492359d414d6a5a9e6f3524
Image: quay.io/brancz/kube-rbac-proxy:v0.19.1
Image ID: docker-pullable://quay.io/brancz/kube-rbac-proxy@sha256:9f21034731c7c3228611b9d40807f3230ce8ed2b286b913bf2d1e760d8d866fc
Ports: 8080/TCP, 8888/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--ignore-paths=/livez,/readyz
--secure-listen-address=:8080
--upstream=http://127.0.0.1:9090/
--proxy-endpoints-port=8888
--config-file=/etc/kube-rbac-proxy-config/config-file.yaml
State: Running
Started: Thu, 25 Sep 2025 19:22:32 -0300
Ready: True
Restart Count: 0
Readiness: http-get https://:http-healthz/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/kube-rbac-proxy-config from kube-rbac-proxy-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4kck7 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-rbac-proxy-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: ksm-kube-state-metrics-rbac-config
Optional: false
kube-api-access-4kck7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m17s default-scheduler Successfully assigned default/ksm-kube-state-metrics-7d98c5d4c9-fx4n7 to lima-rancher-desktop
Normal Pulled 3m17s kubelet Container image "registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.17.0" already present on machine
Normal Created 3m17s kubelet Created container: kube-state-metrics
Normal Started 3m17s kubelet Started container kube-state-metrics
Normal Pulled 3m17s kubelet Container image "quay.io/brancz/kube-rbac-proxy:v0.19.1" already present on machine
Normal Created 3m17s kubelet Created container: kube-rbac-proxy-http
Normal Started 3m17s kubelet Started container kube-rbac-proxy-http
Warning Unhealthy 79s (x13 over 3m4s) kubelet Readiness probe errored and resulted in unknown state: strconv.Atoi: parsing "metrics": invalid syntax
Warning Unhealthy 77s (x12 over 3m7s) kubelet Liveness probe errored and resulted in unknown state: strconv.Atoi: parsing "http": invalid syntax

Enter the changed values of values.yaml?
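For context on the probe errors above: the describe output shows the probes targeting named ports ("http" and "metrics"), but the kube-state-metrics container declares no ports (Port: <none>), so the kubelet cannot resolve the names and falls back to parsing them as integers, which produces the strconv.Atoi failures. A minimal sketch of the apparent mismatch, reconstructed from the describe output (this is an assumption about the rendered template, not the chart's actual source):

```yaml
# Sketch only -- inferred from the pod describe output above.
containers:
  - name: kube-state-metrics
    # Port: <none> in the describe output: there is no containerPort
    # named "http" or "metrics" for the probes to resolve against.
    livenessProbe:
      httpGet:
        path: /livez
        port: http       # unresolvable named port -> strconv.Atoi: parsing "http"
        scheme: HTTPS
    readinessProbe:
      httpGet:
        path: /readyz
        port: metrics    # unresolvable named port -> strconv.Atoi: parsing "metrics"
        scheme: HTTPS
```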
kubeRBACProxy:
  enabled: true

Enter the command that you execute and failing/misfunctioning.
helm install ksm prometheus-community/kube-state-metrics --version 6.3.0 --set kubeRBACProxy.enabled=true

Anything else we need to know?
No response
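A possible workaround, sketched under assumptions (untested; assumes the chart exposes top-level livenessProbe/readinessProbe overrides in values.yaml): point the probes at the kube-rbac-proxy's numeric secure port (8080, which the describe output shows is started with --ignore-paths=/livez,/readyz) instead of an unresolvable named port.

```yaml
# Hypothetical values.yaml override -- key names are assumptions.
kubeRBACProxy:
  enabled: true
livenessProbe:
  httpGet:
    path: /livez
    port: 8080      # numeric port of the kube-rbac-proxy secure listener
    scheme: HTTPS
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
    scheme: HTTPS
```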