Collector Instances Not Discovered Due to Case Sensitivity in matchLabels #3350

Open
@nicolastakashi

Description

Component(s)

target allocator

What happened?

Description

While running the TA without the OTel Operator, I spent some time trying to understand why it was not discovering my collector instances, and eventually figured out that the cause was this config file:

allocation_strategy: consistent-hashing
filter_strategy: relabel-config
collector_selector:
  matchLabels:
    app.kubernetes.io/instance: otel-integration
    app.kubernetes.io/name: opentelemetry-collector
prometheus_cr:
  enabled: true
  scrape_interval: 30s
  pod_monitor_selector: {}
  service_monitor_selector: {}

Note the matchLabels key. It uses camel case, which is the convention when working with Kubernetes resources, but the TA's unmarshal method only accepts lower case.

Even though the TA is using the LabelSelector type from the k8s.io/apimachinery/pkg/apis/meta/v1 package, this does not work, because that type carries no YAML tags for (un)marshalling.

Initially, I was thinking about converting the whole config to JSON with the YAMLToJSON method from the github.com/ghodss/yaml package. But after doing that, I noticed the Prometheus-related configs from the github.com/prometheus/prometheus/config package stopped working, since those types carry no JSON tags for marshalling.

I had another idea in mind: keep using YAML for marshalling and redefine the LabelSelector struct inside the TA project.

I'd like to get your opinion on that and check whether you folks see any other solution. Maybe ask the Prometheus maintainers to add JSON tags to the Config struct? Not sure. Looking forward to your feedback.

Steps to Reproduce

Start the TA with the following config:

allocation_strategy: consistent-hashing
filter_strategy: relabel-config
collector_selector:
  matchLabels:
    app.kubernetes.io/instance: otel-integration
    app.kubernetes.io/name: opentelemetry-collector
prometheus_cr:
  enabled: true
  scrape_interval: 30s
  pod_monitor_selector: {}
  service_monitor_selector: {}

Expected Result

The config accepts camel-case keys such as matchLabels, consistent with K8S semantics.

Actual Result

The camel-case matchLabels key is silently ignored, so no collector instances are discovered; this is not compatible with K8S semantics.

Kubernetes Version

NA

Operator version

NA

Collector version

NA

Environment information

Environment

NA

Log output

No response

Additional context

No response
