Description
After upgrading to kube-prometheus-stack Helm chart version 68.5.0, we noticed "Error on ingesting out-of-order samples" errors in several k8s clusters, only for scrape pool "serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1" while scraping cAdvisor metrics. The out-of-order sample messages also occur for different series.

Here's an example from our Prometheus pod logs:
{"time":"2025-03-26T11:51:06.016158945Z","level":"DEBUG","source":"scrape.go:1909","msg":"Out of order sample","component":"scrape manager","scrape_pool":"serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1","target":{},"series":"container_threads{container=\"\",id=\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podxxxx\",image=\"\",name=\"\",namespace=\"XXX\",pod=\"XXX\"}"}
{"time":"2025-03-26T11:51:06.017086726Z","level":"WARN","source":"scrape.go:1882","msg":"Error on ingesting out-of-order samples","component":"scrape manager","scrape_pool":"serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1","target":{},"num_dropped":460}
What's your helm version?
We deploy the chart via Flux, version 2.3.0.
What's your kubectl version?
v1.31.4
Which chart?
kube-prometheus-stack
What's the chart version?
68.5.0
What happened?
Same as the description above: out-of-order sample ingestion errors from the kubelet cAdvisor scrape pool in multiple clusters after upgrading to 68.5.0; see the log excerpts in the description.
What you expected to happen?
- No out-of-order sample errors from the kubelet cAdvisor scrape pool; cAdvisor samples should be ingested without drops.
How to reproduce it?
- Upgrade from an older kube-prometheus-stack Helm chart version (e.g. 62.x.x) to version 68.5.0.
- Check the prometheus-prometheus-stack-kube-prom-prometheus-0 pod logs (a command sketch follows below).
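Note that the per-series DEBUG lines in the description only show up with debug logging enabled; the WARN lines appear at the default log level. A hedged sketch of how to enable debug logging when installing with the Helm CLI (release name prometheus-stack, namespace monitoring, and the prometheus-community repo alias are assumptions here; we manage the chart through Flux, so the equivalent value goes into the HelmRelease spec):

# Raise the Prometheus log level to debug so individual "Out of order sample" series are logged
helm upgrade prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --reuse-values \
  --set prometheus.prometheusSpec.logLevel=debug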
Enter the changed values of values.yaml?
No response
Enter the command that you execute and failing/misfunctioning.
No command fails; the errors only appear in the Prometheus pod logs.
Anything else we need to know?
No response