# Changelog

## v2.18.0 / 2026-01-11

NOTE: This release addresses a regression that caused a panic when new versions were added to existing `CustomResourceDefinition`s. See the post-mortem analysis in <https://github.com/kubernetes/kube-state-metrics/pull/2838> for more details.

NOTE: `endpointslices` are now part of the default resources exposed as metrics. `endpoints` is deprecated and needs to be manually activated through the `--resources` flag. See <https://github.com/kubernetes/kube-state-metrics/pull/2659> for more details.

* This release builds with Go `v1.25.6`
* This release builds with `k8s.io/client-go`: `v0.34.3`
* [CHANGE] Replace endpoints with endpointslices as default resource by @mrueg in <https://github.com/kubernetes/kube-state-metrics/pull/2659>
* [BUGFIX] Fix regression: React on WATCH updates for CRD informer by @rexagod in <https://github.com/kubernetes/kube-state-metrics/pull/2838>
* [BUGFIX] Deduplicate tolerations when generating `kube_pod_tolerations` by @bhope in <https://github.com/kubernetes/kube-state-metrics/pull/2835>
* [FEATURE] Allow filtering resources via URL parameters by @mrueg in <https://github.com/kubernetes/kube-state-metrics/pull/2844>
* [FEATURE] Add `kube_job_status_ready` metric by @nmn3m in <https://github.com/kubernetes/kube-state-metrics/pull/2771>
* [FEATURE] Add `kube_deployment_owner` metric by @nmn3m in <https://github.com/kubernetes/kube-state-metrics/pull/2782>
* [FEATURE] Add `kube_deployment_status_replicas_terminating` and `kube_replicaset_status_terminating_replicas` metrics by @atiratree in <https://github.com/kubernetes/kube-state-metrics/pull/2708>
* [FEATURE] Promote CronJob, HPA, and Pod metrics from BETA to STABLE by @nmn3m in <https://github.com/kubernetes/kube-state-metrics/pull/2798>
* [FEATURE] Promote StatefulSet metrics to STABLE by @yasicar in <https://github.com/kubernetes/kube-state-metrics/pull/2783>
* [FEATURE] Add info metric for client-go version by @fpetkovski in <https://github.com/kubernetes/kube-state-metrics/pull/2739>
* [FEATURE] Warn on probe failing by @mickael-carl in <https://github.com/kubernetes/kube-state-metrics/pull/2808>
* [ENHANCEMENT] Add `failed` condition to `kube_certificatesigningrequest_condition` by @ksmiley in <https://github.com/kubernetes/kube-state-metrics/pull/2841>
* [ENHANCEMENT] Reduce allocations when creating metric families by @fpetkovski in <https://github.com/kubernetes/kube-state-metrics/pull/2807>
* [ENHANCEMENT] Bump to Kubernetes 1.34 by @mrueg in <https://github.com/kubernetes/kube-state-metrics/pull/2785>
* [ENHANCEMENT] Bump exporter-toolkit by @mrueg in <https://github.com/kubernetes/kube-state-metrics/pull/2770>
* [ENHANCEMENT] Replace gojsontoyaml with gojq by @mrueg in <https://github.com/kubernetes/kube-state-metrics/pull/2660>
* [ENHANCEMENT] Split benchmarks by @mrueg in <https://github.com/kubernetes/kube-state-metrics/pull/2759>

### kube-state-metrics vs. metrics-server

The [metrics-server](https://github.com/kubernetes-incubator/metrics-server) is a project that has been inspired by [Heapster](https://github.com/kubernetes-retired/heapster) and is implemented to serve the goals of core metrics pipelines in the Kubernetes monitoring architecture. It is a cluster-level component which periodically scrapes metrics from all Kubernetes nodes served by the Kubelet through the Metrics API. The metrics are aggregated, stored in memory, and served in [Metrics API format](https://git.k8s.io/metrics/pkg/apis/metrics/v1alpha1/types.go). The metrics-server stores only the latest values and is not responsible for forwarding metrics to third-party destinations.

kube-state-metrics is focused on generating completely new metrics from Kubernetes' object state (e.g. metrics based on deployments, replica sets, etc.). It holds an entire snapshot of Kubernetes state in memory and continuously generates new metrics based off of it. Just like the metrics-server, it is not responsible for exporting its metrics anywhere.

Having kube-state-metrics as a separate project also enables access to these metrics from monitoring systems such as Prometheus.

### Scaling kube-state-metrics
#### Resource recommendation
Note that if CPU limits are set too low, kube-state-metrics' internal queues will not be able to be worked off quickly enough, resulting in increased memory consumption as the queue length grows. If you experience problems resulting from high memory allocation or CPU throttling, try increasing the CPU limits.

#### Latency

In a 100 node cluster scaling test the latency numbers were as follows:

```
"Perc99": 906666666 ns.
```

#### A note on costing

By default, kube-state-metrics exposes several metrics for events across your cluster. If you have a large number of frequently-updating resources on your cluster, you may find that a lot of data is ingested into these metrics. This can incur high costs on some cloud providers. Please take a moment to [configure what metrics you'd like to expose](docs/developer/cli-arguments.md), as well as consult the documentation for your Kubernetes environment in order to avoid unexpectedly high costs.
#### Horizontal sharding
In order to shard kube-state-metrics horizontally, some automated sharding capabilities have been implemented. It is configured with the following flags:
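As a sketch of how two shards could be launched with the documented `--shard` and `--total-shards` flags (the ports and the backgrounded local invocation are illustrative, not a recommended deployment):

```shell
# Illustrative: two instances, each serving a disjoint subset of Kubernetes objects.
# --shard is zero-indexed; --total-shards is the total number of instances.
kube-state-metrics --port=8080 --shard=0 --total-shards=2 &
kube-state-metrics --port=8082 --shard=1 --total-shards=2 &
```

In a real cluster each shard would typically run as its own Deployment (or as an ordinal-indexed StatefulSet replica), with Prometheus scraping every shard.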
Other metrics can be sharded via [Horizontal sharding](#horizontal-sharding).

#### Resource Filtering

The `/metrics` endpoint supports filtering by resource type using the `resources` query parameter. This allows you to scrape only the metrics for specific Kubernetes resources, which can be useful for reducing the amount of data scraped or for creating separate scraping jobs for different resource types.
Multiple resources can be specified as a comma-separated list, or by providing the `resources` parameter multiple times.
You can also exclude specific resources using the `exclude_resources` query parameter. This is useful if you want to scrape all metrics except for a few specific ones.
If both `resources` and `exclude_resources` are provided, `resources` acts as an allowlist and `exclude_resources` acts as a denylist: any resource named in `exclude_resources` is filtered out of the allowed set. `exclude_resources` takes precedence, and you can only filter on resources that are enabled in kube-state-metrics.
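These parameters compose into ordinary query strings on the `/metrics` endpoint. A minimal sketch of the resulting scrape URLs (the host and port are illustrative, not mandated by kube-state-metrics):

```python
from urllib.parse import urlencode

# Illustrative address of a kube-state-metrics instance.
base = "http://localhost:8080/metrics"

# Scrape only pod and deployment metrics (comma-separated form).
only = f"{base}?{urlencode({'resources': 'pods,deployments'}, safe=',')}"

# Equivalent: pass the `resources` parameter multiple times.
repeated = f"{base}?{urlencode([('resources', 'pods'), ('resources', 'deployments')])}"

# Scrape everything that is enabled, except secrets.
without_secrets = f"{base}?{urlencode({'exclude_resources': 'secrets'})}"

print(only)
print(repeated)
print(without_secrets)
```

The same query strings can be set in a Prometheus scrape job via its `params` field, which lets you split one kube-state-metrics instance across several scrape jobs.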
### Setup
Install this project to your `$GOPATH` using `go get`: