Component(s)
prometheus.operator.servicemonitors
Request
What we need
A way for Alloy's prometheus.operator.servicemonitors component to expose ServiceMonitor metadata labels (e.g. o11y.example.com/cohort) as relabelable labels on scrape targets — analogous to how targetLabels works for Service labels today.
Proposed solution
A new field in prometheus.operator.servicemonitors (or equivalent support in the ServiceMonitor spec) such as serviceMonitorTargetLabels that copies specified labels from the ServiceMonitor's own metadata onto every scrape target it generates — similar to how targetLabels promotes Service labels into metrics.
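A sketch of what this could look like in Alloy configuration. The service_monitor_target_labels attribute is the proposed (hypothetical) field and does not exist today; the component name and forward_to wiring follow the existing prometheus.operator.servicemonitors component, and the cohort value is illustrative:

```alloy
prometheus.operator.servicemonitors "cohorts" {
  forward_to = [prometheus.remote_write.default.receiver]

  // Existing behavior: select which ServiceMonitors this component watches.
  selector {
    match_labels = {"o11y.example.com/cohort" = "telemetryInfra"}
  }

  // Proposed (hypothetical): copy these labels from the ServiceMonitor's own
  // metadata onto every scrape target it generates.
  service_monitor_target_labels = ["o11y.example.com/cohort"]
}
```

With such a field, a single component could watch all cohorts and the cohort label would still be attached to each series, instead of running one component (and one set of watches) per cohort.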
Use case
We use Grafana Alloy with prometheus.operator.servicemonitors to scrape metrics and forward them to our backend storage. We have multiple ServiceMonitors across different cohorts, each identified by the o11y.example.com/cohort label on the ServiceMonitor metadata (e.g. telemetryInfra).
Currently, we have 15 cohorts (and this number may continue to grow).
For each Alloy instance:
- 15 watches for ServiceMonitors (one per cohort)
- 15 watches for PodMonitors (one per cohort)
(Not counting additional indirect watches)
Since Alloy runs as a DaemonSet and our cluster has ~500 nodes, that results in:
(15 + 15) × 500 = 15,000 watches.
This creates significant pressure on the Kubernetes API server.
The problem
We want the o11y.example.com/cohort label to be propagated onto scraped metrics so we can filter/aggregate by cohort in queries. Currently there is no mechanism to do this because:
- __meta_kubernetes_servicemonitor_* labels don't exist — ServiceMonitor is a CRD, not a native Kubernetes SD object, so Prometheus-style service discovery never runs against it
- targetLabels in the ServiceMonitor spec only copies labels from the Service object, not from the ServiceMonitor itself
- The o11y.example.com/cohort label is only on the ServiceMonitor — it is not present on the target Services or Pods
- We do not have access to the upstream Helm charts that manage the target Services, so we cannot add the label there
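To illustrate the gap, here is a minimal ServiceMonitor of the kind we run (names and selectors are made up for this example). The cohort label lives only on the monitor's metadata, and the existing targetLabels mechanism cannot reach it:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-monitor          # illustrative name
  labels:
    # This is the label we want on scraped metrics. It exists only here,
    # on the ServiceMonitor itself.
    o11y.example.com/cohort: telemetryInfra
spec:
  selector:
    matchLabels:
      app: example-app           # illustrative Service selector
  endpoints:
    - port: metrics
  targetLabels:
    # targetLabels only promotes labels from the *Service* object.
    # The Service does not carry o11y.example.com/cohort, so listing it
    # here has no effect.
    - app
```

Since we cannot modify the upstream charts that create the Services, there is no place to attach the cohort label that current mechanisms can pick up.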
Tip
React with 👍 if this issue is important to you.