Description
Today agents support sending k8s data only if the user explicitly configures pods to be injected with `KUBERNETES_NAMESPACE`, `KUBERNETES_POD_NAME`, `KUBERNETES_POD_UID` and `KUBERNETES_NODE_NAME`: https://www.elastic.co/guide/en/observability/current/apm-api-metadata.html#apm-api-kubernetes-data and #21 (comment)
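To make that concrete, here is a minimal Go sketch (not any specific agent's implementation; the type and field names are made up for illustration) of an agent reading those explicitly injected variables to build its kubernetes metadata:

```go
package main

import (
	"fmt"
	"os"
)

// k8sMetadata loosely mirrors the kubernetes object from the intake API
// metadata docs linked above; the Go type and field names are illustrative.
type k8sMetadata struct {
	Namespace string
	PodName   string
	PodUID    string
	NodeName  string
}

// readInjectedK8sMetadata picks up the variables the user must currently
// inject into the pod spec by hand (e.g. via the Downward API).
func readInjectedK8sMetadata() k8sMetadata {
	return k8sMetadata{
		Namespace: os.Getenv("KUBERNETES_NAMESPACE"),
		PodName:   os.Getenv("KUBERNETES_POD_NAME"),
		PodUID:    os.Getenv("KUBERNETES_POD_UID"),
		NodeName:  os.Getenv("KUBERNETES_NODE_NAME"),
	}
}

func main() {
	fmt.Printf("%+v\n", readInjectedK8sMetadata())
}
```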
Whilst discussing whether we can improve our `host.name` detection here, I noticed the following environment variables should be exposed to all pods natively: `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT`. @trentm also pointed out that kubectl itself relies on these to detect that it is running inside a pod: https://kubernetes.io/docs/reference/kubectl/#in-cluster-authentication-and-namespace-overrides
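A minimal sketch of that same check, assuming both variables are present whenever the process runs inside a pod (as the kubectl doc above describes):

```go
package main

import (
	"fmt"
	"os"
)

// runningInKubernetes uses the same signal kubectl relies on for in-cluster
// detection: these variables are exposed to all pods natively, with no user
// configuration required.
func runningInKubernetes() bool {
	return os.Getenv("KUBERNETES_SERVICE_HOST") != "" &&
		os.Getenv("KUBERNETES_SERVICE_PORT") != ""
}

func main() {
	fmt.Println("in-cluster:", runningInKubernetes())
}
```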
The suggestion here is for agents to start reporting these, and to extend the APM intake protocol so that this information is actually ingested into Elasticsearch.
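Purely as a placeholder to make the proposal concrete, one possible shape for the extended metadata, expressed as Go structs; the new `service.host` / `service.port` field names are not decided anywhere and the existing fields are abbreviated:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// k8sService is the proposed addition; field names are placeholders only.
type k8sService struct {
	Host string `json:"host,omitempty"`
	Port string `json:"port,omitempty"`
}

// k8sMetadata abbreviates the existing kubernetes metadata object and adds
// the hypothetical service block.
type k8sMetadata struct {
	Namespace string      `json:"namespace,omitempty"`
	Service   *k8sService `json:"service,omitempty"` // proposed addition
}

func main() {
	m := k8sMetadata{
		Namespace: os.Getenv("KUBERNETES_NAMESPACE"),
		Service: &k8sService{
			Host: os.Getenv("KUBERNETES_SERVICE_HOST"),
			Port: os.Getenv("KUBERNETES_SERVICE_PORT"),
		},
	}
	b, _ := json.Marshal(m)
	fmt.Println(string(b))
}
```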
Lastly, we currently set `host.name` to the `kubernetes.host_name` we detect from `KUBERNETES_NODE_NAME`: https://github.com/elastic/apm-data/blob/main/model/modelprocessor/hostname.go#L39
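Roughly what that processor does, heavily simplified and with hypothetical types, so the fallback behaviour is visible:

```go
package main

import "fmt"

// event is a stand-in for the real model; only the fields relevant to the
// host.name decision are shown.
type event struct {
	KubernetesNodeName string // from KUBERNETES_NODE_NAME, if injected
	DetectedHostname   string // whatever the agent detected itself
	HostName           string // what ends up as host.name in Elasticsearch
}

// setHostName mimics the linked processor: a configured kubernetes node name
// wins; otherwise we fall back to the agent-detected hostname.
func setHostName(e *event) {
	if e.KubernetesNodeName != "" {
		e.HostName = e.KubernetesNodeName
		return
	}
	e.HostName = e.DetectedHostname
}

func main() {
	e := event{KubernetesNodeName: "node-1", DetectedHostname: "pod-abc"}
	setHostName(&e)
	fmt.Println(e.HostName) // prints "node-1"
}
```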
However, we don't record the inverse case: we are running under k8s, but `KUBERNETES_NODE_NAME` was not explicitly configured. I believe we need to record this flag in Elasticsearch so that the Hosts View can filter this data out correctly, to fix https://github.com/elastic/observability-dev/issues/3321.
Sending and recording `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` would allow us to detect and record that flag.
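Putting the pieces together, a sketch of how an agent could derive that flag today from the environment alone; the flag's name and where it is recorded are exactly the open questions:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	inK8s := os.Getenv("KUBERNETES_SERVICE_HOST") != "" &&
		os.Getenv("KUBERNETES_SERVICE_PORT") != ""
	nodeNameConfigured := os.Getenv("KUBERNETES_NODE_NAME") != ""

	// The flag itself: we know we are inside kubernetes, but host.name will
	// have fallen back to the detected hostname (typically the pod name)
	// because KUBERNETES_NODE_NAME was never injected. The field name below
	// is hypothetical.
	hostNameLikelyPodName := inK8s && !nodeNameConfigured
	fmt.Println("host.name likely a pod name:", hostNameLikelyPodName)
}
```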