Component(s)
receiver/prometheus
What happened?
Description
Basic Auth and OAuth2 exhibit the same issue when scraping through the target allocator: the password or client secret is not decoded correctly, so the scrape fails with an unauthorized error.
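For reference, a correct Basic Auth request carries the base64 encoding of user:password in the Authorization header. A quick sketch of the value expected for the credentials used below (illustration only):

# Expected Basic Auth header value for admin:Test@123
echo -n 'admin:Test@123' | base64
# -> YWRtaW46VGVzdEAxMjM=  (sent as "Authorization: Basic YWRtaW46VGVzdEAxMjM=")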
Steps to Reproduce
Cluster Role/Role/Role Bindings
# 1. ServiceAccount
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-target-allocator-sa
  namespace: infra
---
# 2. Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: otel-target-allocator-role
  namespace: infra
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - configmaps
      - secrets
      - namespaces
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apps
    resources:
      - statefulsets
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
      - podmonitors
      - scrapeconfigs
      - probes
    verbs:
      - get
      - watch
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-target-allocator
rules:
  # Core resources
  - apiGroups: [""]
    resources:
      - pods
      - services
      - endpoints
      - namespaces
      - nodes
      - configmaps
    verbs: [get, list, watch]
  # Node metrics
  - apiGroups: [""]
    resources:
      - nodes/metrics
    verbs: [get, list, watch]
  # EndpointSlices
  - apiGroups: ["discovery.k8s.io"]
    resources:
      - endpointslices
    verbs: [get, list, watch]
  # Ingresses
  - apiGroups: ["networking.k8s.io"]
    resources:
      - ingresses
    verbs: [get, list, watch]
  # Prometheus Operator CRDs
  - apiGroups: ["monitoring.coreos.com"]
    resources:
      - servicemonitors
      - podmonitors
      - scrapeconfigs
      - probes
    verbs: ["*"]
  # Non-resource URLs
  - nonResourceURLs:
      - /metrics
      - /api
      - /api/*
      - /apis
      - /apis/*
    verbs: [get]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-target-allocator
subjects:
  - kind: ServiceAccount
    name: otel-target-allocator-sa
    namespace: infra
roleRef:
  kind: ClusterRole
  name: otel-target-allocator
  apiGroup: rbac.authorization.k8s.io
---
# 3. RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: otel-target-allocator-rb
  namespace: infra
subjects:
  - kind: ServiceAccount
    name: otel-target-allocator-sa
    namespace: infra
roleRef:
  kind: Role
  name: otel-target-allocator-role
  apiGroup: rbac.authorization.k8s.io
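As an optional sanity check (assuming kubectl access to the cluster), the RBAC above can be verified by impersonating the ServiceAccount and confirming it may read Secrets in the infra namespace:

kubectl auth can-i get secrets -n infra \
  --as=system:serviceaccount:infra:otel-target-allocator-sa
# expected output: yes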
Secret
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: infra
type: Opaque
stringData:
  username: 'admin'
  password: 'Test@123'
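To rule out a mis-stored value, the Secret can be decoded directly; this should print Test@123 exactly:

kubectl get secret test-secret -n infra \
  -o jsonpath='{.data.password}' | base64 -d
# expected output: Test@123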
ScrapeConfig
apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  namespace: infra
  name: target-scrape-config
  labels:
    prometheus.target.allocator: "enabled"
spec:
  jobName: test
  metricsPath: '/v1/metrics'
  tlsConfig:
    insecureSkipVerify: true
    serverName: '10.87.218.92'
  scheme: HTTPS
  basicAuth:
    username:
      name: test-secret
      key: username
    password:
      key: password
      name: test-secret
  staticConfigs:
    - targets: ['10.87.218.92']
  params:
    id: ['e83134ff-89fb-45eb-97ae-920b35f8fde5']
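To see what the collector actually receives, the target allocator's HTTP API can be queried. A sketch, assuming the operator's default service name of the form <collector-name>-targetallocator on port 80 (adjust to your setup):

kubectl -n infra port-forward svc/prometheus-receiver-targetallocator 8080:80 &
# Dump the rendered scrape configs handed to the collector and check whether
# the basic_auth password arrives intact, masked (e.g. "<secret>"), or mangled.
curl -s http://localhost:8080/scrape_configs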
Prometheus Receiver
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: prometheus-receiver
  namespace: infra
spec:
  mode: statefulset
  targetAllocator:
    enabled: true
    serviceAccount: otel-target-allocator-sa
    replicas: 1
    prometheusCR:
      enabled: true
      allowNamespaces:
        - infra
      scrapeConfigSelector:
        matchLabels:
          prometheus.target.allocator: "enabled"
      podMonitorSelector:
        matchLabels:
          non-existent-label: "true"
      serviceMonitorSelector:
        matchLabels:
          non-existent-label: "true"
  config:
    receivers:
      prometheus:
        config:
          scrape_configs: []
    processors:
      batch:
        timeout: 10s
        send_batch_size: 1000
        send_batch_max_size: 1000
      memory_limiter:
        check_interval: 1s
        limit_mib: 60
        spike_limit_mib: 40
    exporters:
      debug:
        verbosity: normal
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: [memory_limiter, batch]
          exporters: [debug]
      telemetry:
        logs:
          level: debug
  managementState: 'managed'
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 250m
      memory: 256Mi
Expected Result
The collector should scrape the target successfully, authenticating with the credentials from the referenced Secret.
Actual Result
{"target": "https://10.87.218.92/v1/metrics?id=e83134ff-89fb-45eb-97ae-920b35f8fde5", "err": "server returned HTTP status 401 Unauthorized"}
Collector version
0.143.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
It worked fine without the target allocator, when the credentials were given inline in the collector config:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: prometheus-receiver
  namespace: infra
spec:
  mode: statefulset
  config:
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'my-service'
              metrics_path: '/v1/metrics'
              scheme: https
              static_configs:
                - targets: ['10.87.218.92']
              tls_config:
                insecure_skip_verify: true
              basic_auth:
                username: 'admin'
                password: 'Test@123'
              params:
                id: ['e83134ff-89fb-45eb-97ae-920b35f8fde5']
    processors:
      batch:
        timeout: 10s
        send_batch_size: 1000
        send_batch_max_size: 1000
      memory_limiter:
        check_interval: 1s
        limit_mib: 60
        spike_limit_mib: 40
    exporters:
      debug:
        verbosity: normal
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: [memory_limiter, batch]
          exporters: [debug]
  managementState: 'managed'
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 250m
      memory: 256Mi
Log output
Additional context
No response