[processor/k8sattributesprocessor] Add support for missing k8s.cronjob.uid (open-telemetry#42641)
#### Description
This PR adds support for exposing `k8s.cronjob.uid` as resource metadata
when a `Job` is owned by a `CronJob`.
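With the change in place, the new attribute can be requested like any other `metadata` field. A minimal sketch of the processor configuration (pipeline wiring omitted; only `k8s.cronjob.uid` is new here, and the other attribute names are taken from the log output below):

```yaml
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.cronjob.name
        - k8s.cronjob.uid   # added by this PR
        - k8s.job.name
        - k8s.job.uid
```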
#### Link to tracking issue
Fixes open-telemetry#42557
#### Testing
Tested locally with `telemetrygen` and it works as expected (debug output below; a sketch of the test workload follows it).
```
[pod/k8sevents-receiver-opentelemetry-collector-6fd9966559-brlb6/opentelemetry-collector] {"level":"debug","ts":"2025-09-11T16:29:11.588Z","caller":"[email protected]/processor.go:159","msg":"getting the pod","resource":{"service.instance.id":"9631e38b-aec3-439f-8178-d96fc8368e1e","service.name":"otelcontribcol","service.version":"0.135.0-dev"},"otelcol.component.id":"k8sattributes","otelcol.component.kind":"processor","otelcol.pipeline.id":"traces","otelcol.signal":"traces","pod":{"Name":"otel-log-cronjob-29293469-lw97x","Address":"10.244.0.70","PodUID":"7960681c-5a24-4287-8bea-e2cf506500ee","Attributes":{"k8s.cronjob.name":"otel-log-cronjob","k8s.cronjob.uid":"082b1c42-e393-46bc-9d51-b20a3700d1ab","k8s.job.name":"otel-log-cronjob-29293469","k8s.job.uid":"fbd853b8-7f63-44d8-ace1-8b48c89e3041"},"StartTime":"2025-09-11T16:29:00Z","Ignore":false,"Namespace":"default","NodeName":"","DeploymentUID":"","StatefulSetUID":"","DaemonSetUID":"","JobUID":"fbd853b8-7f63-44d8-ace1-8b48c89e3041","HostNetwork":false,"Containers":{"ByID":null,"ByName":null},"DeletedAt":"0001-01-01T00:00:00Z"}}
[pod/k8sevents-receiver-opentelemetry-collector-6fd9966559-brlb6/opentelemetry-collector] {"level":"info","ts":"2025-09-11T16:29:11.588Z","msg":"Traces","resource":{"service.instance.id":"9631e38b-aec3-439f-8178-d96fc8368e1e","service.name":"otelcontribcol","service.version":"0.135.0-dev"},"otelcol.component.id":"debug","otelcol.component.kind":"exporter","otelcol.signal":"traces","resource spans":1,"spans":2}
[pod/k8sevents-receiver-opentelemetry-collector-6fd9966559-brlb6/opentelemetry-collector] {"level":"info","ts":"2025-09-11T16:29:11.588Z","msg":"ResourceSpans #0\nResource SchemaURL: https://opentelemetry.io/schemas/1.4.0\nResource attributes:\n -> k8s.container.name: Str(telemetrygen)\n -> service.name: Str(telemetrygen)\n -> k8s.pod.ip: Str(10.244.0.70)\n -> k8s.cronjob.name: Str(otel-log-cronjob)\n -> k8s.cronjob.uid: Str(082b1c42-e393-46bc-9d51-b20a3700d1ab)\n -> k8s.job.uid: Str(fbd853b8-7f63-44d8-ace1-8b48c89e3041)\n -> k8s.job.name: Str(otel-log-cronjob-29293469)\nScopeSpans #0\nScopeSpans SchemaURL: \nInstrumentationScope telemetrygen \nSpan #0\n Trace ID : 3c7381c14a37814676b00a7d961cb219\n Parent ID : 4f8780d5148a9c1c\n ID : 17e9da9533dc93ca\n Name : okey-dokey-0\n Kind : Server\n Start time : 2025-09-11 16:29:09.583785469 +0000 UTC\n End time : 2025-09-11 16:29:09.583908469 +0000 UTC\n Status code : Unset\n Status message : \nAttributes:\n -> net.peer.ip: Str(1.2.3.4)\n -> peer.service: Str(telemetrygen-client)\nSpan #1\n Trace ID : 3c7381c14a37814676b00a7d961cb219\n Parent ID : \n ID : 4f8780d5148a9c1c\n Name : lets-go\n Kind : Client\n Start time : 2025-09-11 16:29:09.583785469 +0000 UTC\n End time : 2025-09-11 16:29:09.583908469 +0000 UTC\n Status code : Unset\n Status message : \nAttributes:\n -> net.peer.ip: Str(1.2.3.4)\n -> peer.service: Str(telemetrygen-server)\n","resource":{"service.instance.id":"9631e38b-aec3-439f-8178-d96fc8368e1e","service.name":"otelcontribcol","service.version":"0.135.0-dev"},"otelcol.component.id":"debug","otelcol.component.kind":"exporter","otelcol.signal":"traces"}
```
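For reference, the workload in the logs above is a CronJob named `otel-log-cronjob` running `telemetrygen`. A manifest along these lines reproduces the setup (the schedule, image tag, and `<COLLECTOR_ENDPOINT>` placeholder are assumptions, not taken from the PR):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: otel-log-cronjob
spec:
  schedule: "* * * * *"          # assumed: run every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: telemetrygen
              image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest
              args:
                - traces
                - --otlp-insecure
                - --otlp-endpoint=<COLLECTOR_ENDPOINT>:4317
                - --traces=1
```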
Tests were also added to verify the new behavior.
---------
Signed-off-by: Paulo Dias <[email protected]>
#### Changes
`processor/k8sattributesprocessor/README.md` (11 additions, 11 deletions). Most `-`/`+` pairs in the hunks below render identically and appear to be whitespace-only changes; the substantive edit is the new `jobs` RBAC sentence in the final hunk.
```diff
@@ -32,7 +32,7 @@ The processor stores the list of running pods and the associated metadata. When
 to the pod from where the datapoint originated, so we can add the relevant pod metadata to the datapoint. By default, it associates the incoming connection IP
 to the Pod IP. But for cases where this approach doesn't work (sending through a proxy, etc.), a custom association rule can be specified.
 
-Each association is specified as a list of sources of associations. The maximum number of sources within an association is 4.
+Each association is specified as a list of sources of associations. The maximum number of sources within an association is 4.
 A source is a rule that matches metadata from the datapoint to pod metadata.
 In order to get an association applied, all the sources specified need to match.
 
```
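For context, an association composed of such sources is configured as below; an illustrative sketch following the schema this hunk describes (the attribute choices are examples, not part of this PR):

```yaml
pod_association:
  - sources:
      # All sources within one `sources` list must match for the rule to apply.
      - from: resource_attribute
        name: k8s.pod.ip
  - sources:
      # Fallback rule: associate by the incoming connection IP.
      - from: connection
```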
```diff
@@ -63,16 +63,16 @@ If Pod association rules are not configured, resources are associated with metad
 
 Which metadata to collect is determined by `metadata` configuration that defines list of resource attributes
 to be added. Items in the list called exactly the same as the resource attributes that will be added.
-The following attributes are added by default:
+The following attributes are added by default:
 - k8s.namespace.name
 - k8s.pod.name
 - k8s.pod.uid
 - k8s.pod.start_time
 - k8s.deployment.name
 - k8s.node.name
 
-These attributes are also available for the use within association rules by default.
-The `metadata` section can also be extended with additional attributes which, if present in the `metadata` section,
+These attributes are also available for the use within association rules by default.
+The `metadata` section can also be extended with additional attributes which, if present in the `metadata` section,
 are then also available for the use within association rules. Available attributes are:
 - k8s.namespace.name
 - k8s.pod.name
```
```diff
@@ -100,7 +100,7 @@ are then also available for the use within association rules. Available attribut
 - [service.instance.id](https://opentelemetry.io/docs/specs/semconv/non-normative/k8s-attributes/#how-serviceinstanceid-should-be-calculated)(cannot be used for source rules in the pod_association)
 - Any tags extracted from the pod labels and annotations, as described in [extracting attributes from pod labels and annotations](#extracting-attributes-from-pod-labels-and-annotations)
 
-Not all the attributes are guaranteed to be added. Only attribute names from `metadata` should be used for
+Not all the attributes are guaranteed to be added. Only attribute names from `metadata` should be used for
 pod_association's `resource_attribute`, because empty or non-existing values will be ignored.
 
 Additional container level attributes can be extracted. If a pod contains more than one container,
```
````diff
@@ -204,7 +204,7 @@ the processor associates the received trace to the pod, based on the connection
 - `otel_annotations` will translate `resource.opentelemetry.io/foo` to the `foo` resource attribute, etc.
 ```yaml
 extract:
-  otel_annotations: true
+  otel_annotations: true
   metadata:
   - service.namespace
  - service.name
````
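For context, `otel_annotations: true` makes the processor read `resource.opentelemetry.io/*` pod annotations, as the hunk above notes. A pod annotated like this (the pod name, image, and attribute values are hypothetical) would receive `service.name` and `service.namespace` resource attributes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout-pod            # hypothetical name
  annotations:
    resource.opentelemetry.io/service.name: checkout
    resource.opentelemetry.io/service.namespace: shop
spec:
  containers:
    - name: app
      image: example/app:1.0    # hypothetical image
```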
```diff
@@ -306,7 +306,7 @@ k8sattributes:
       - tag_name: app.label.component
         key: app.kubernetes.io/component
         from: pod
-    otel_annotations: true
+    otel_annotations: true
   pod_association:
     - sources:
         # This rule associates all resources containing the 'k8s.pod.ip' attribute with the matching pods. If this attribute is not present in the resource, this rule will not be able to find the matching pod.
```
```diff
@@ -325,7 +325,7 @@ k8sattributes:
 
 ## Cluster-scoped RBAC
 
-If you'd like to set up the k8sattributesprocessor to receive telemetry from across namespaces, it will need `get`, `watch` and `list` permissions on both `pods` and `namespaces` resources, for all namespaces and pods included in the configured filters. Additionally, when using `k8s.deployment.name` (which is enabled by default) or `k8s.deployment.uid` the processor also needs `get`, `watch` and `list` permissions for `replicasets` resources. When using `k8s.node.uid` or extracting metadata from `node`, the processor needs `get`, `watch` and `list` permissions for `nodes` resources.
+If you'd like to set up the k8sattributesprocessor to receive telemetry from across namespaces, it will need `get`, `watch` and `list` permissions on both `pods` and `namespaces` resources, for all namespaces and pods included in the configured filters. Additionally, when using `k8s.deployment.name` (which is enabled by default) or `k8s.deployment.uid` the processor also needs `get`, `watch` and `list` permissions for `replicasets` resources. When using `k8s.node.uid` or extracting metadata from `node`, the processor needs `get`, `watch` and `list` permissions for `nodes` resources. When using `k8s.cronjob.uid` the processor also needs `get`, `watch` and `list` permissions for `jobs` resources.
 
 Here is an example of a `ClusterRole` to give a `ServiceAccount` the necessary permissions for all pods, nodes, and namespaces in the cluster (replace `<OTEL_COL_NAMESPACE>` with a namespace where collector is deployed):
```
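The `ClusterRole` example itself sits below the diff context shown; a sketch of rules that satisfy the updated paragraph, including the new `jobs` permission (the role name and rule grouping are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector          # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]  # needed for k8s.deployment.name / k8s.deployment.uid
    verbs: ["get", "watch", "list"]
  - apiGroups: ["batch"]
    resources: ["jobs"]         # needed for k8s.cronjob.uid (this PR)
    verbs: ["get", "watch", "list"]
```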