
[kube-prometheus-stack] Error on ingesting out-of-order samples only when scraping cAdvisor metrics #5483


Description

@mbasha86

Describe the bug (a clear and concise description of what the bug is).

  • After upgrading to kube-prometheus-stack Helm chart version 68.5.0, we noticed "Error on ingesting out-of-order samples" errors in several Kubernetes clusters, only for the scrape pool "serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1" while scraping cAdvisor metrics. These out-of-order sample messages also occur across different series.

  • Here's an example from our Prometheus pod logs:

{"time":"2025-03-26T11:51:06.016158945Z","level":"DEBUG","source":"scrape.go:1909","msg":"Out of order sample","component":"scrape manager","scrape_pool":"serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1","target":{},"series":"container_threads{container=\"\",id=\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podxxxx\",image=\"\",name=\"\",namespace=\"XXX\",pod=\"XXX\"}"}
{"time":"2025-03-26T11:51:06.017086726Z","level":"WARN","source":"scrape.go:1882","msg":"Error on ingesting out-of-order samples","component":"scrape manager","scrape_pool":"serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1","target":{},"num_dropped":460}

What's your helm version?

Flux version 2.3.0 (the chart is deployed via Flux rather than the Helm CLI)

What's your kubectl version?

v1.31.4

Which chart?

kube-prometheus-stack

What's the chart version?

68.5.0

What happened?

  • After upgrading to kube-prometheus-stack Helm chart version 68.5.0, we noticed "Error on ingesting out-of-order samples" errors in several Kubernetes clusters, only for the scrape pool "serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1" while scraping cAdvisor metrics. These out-of-order sample messages also occur across different series.

  • Here's an example from our Prometheus pod logs:

{"time":"2025-03-26T11:51:06.016158945Z","level":"DEBUG","source":"scrape.go:1909","msg":"Out of order sample","component":"scrape manager","scrape_pool":"serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1","target":{},"series":"container_threads{container=\"\",id=\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podxxxx\",image=\"\",name=\"\",namespace=\"XXX\",pod=\"XXX\"}"}
{"time":"2025-03-26T11:51:06.017086726Z","level":"WARN","source":"scrape.go:1882","msg":"Error on ingesting out-of-order samples","component":"scrape manager","scrape_pool":"serviceMonitor/monitoring/prometheus-stack-kube-prom-kubelet/1","target":{},"num_dropped":460}

What did you expect to happen?

  • The out-of-order sample errors from the kubelet cAdvisor scrape pool should not occur after the upgrade.

How to reproduce it?

  • Upgrade from an older version of the kube-prometheus-stack Helm chart (e.g. 62.x.x) to version 68.5.0 (see the example commands after this list).
  • Check the prometheus-prometheus-stack-kube-prom-prometheus-0 pod logs.
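
A minimal reproduction sketch, assuming a direct Helm install rather than the Flux setup used here, and reusing the prometheus-stack release name and monitoring namespace from the scrape pool above:

# Upgrade an existing release from an older chart version (e.g. 62.x.x) to 68.5.0.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --version 68.5.0

# Watch the Prometheus pod logs for the out-of-order warnings.
kubectl logs -f prometheus-prometheus-stack-kube-prom-prometheus-0 -n monitoring -c prometheus \
  | grep 'out-of-order'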

Enter the changed values of values.yaml?

No response

Enter the command that you executed that is failing/misfunctioning.

Nothing fails; only the log errors shown above appear.

Anything else we need to know?

No response
