Description
Component(s)
service
What happened?
Describe the bug
When I have this pipeline configuration for the Collector's internal traces:
service:
  telemetry:
    traces:
      propagators:
        - "tracecontext"
      processors:
        - batch:
            exporter:
              otlp:
                endpoint: https://otlp.datadoghq.com/v1/traces
                protocol: http/protobuf
At the same time, I also have OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://otlp.example.com/v1/traces set as an environment variable in my pod. With both in place, I found this error log:
traces export: processor export timeout: retry-able request failure: Post "http://otlp.datadoghq.com/v1/traces": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
We can see that the scheme of the endpoint changed to http instead of https.
After I removed the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable, the timeout error disappeared and span data was sent to the desired endpoint without problems.
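For illustration only, here is a minimal Go sketch (not the Collector's actual code) showing how the URL in the error log could arise, assuming the scheme is taken from OTEL_EXPORTER_OTLP_TRACES_ENDPOINT while host and path come from the config-file endpoint:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Endpoint from OTEL_EXPORTER_OTLP_TRACES_ENDPOINT on the pod.
	fromEnv, _ := url.Parse("http://otlp.example.com/v1/traces")
	// Endpoint from service::telemetry::traces in the config file.
	fromConfig, _ := url.Parse("https://otlp.datadoghq.com/v1/traces")

	// Hypothetical merge that would explain the log line:
	// scheme from the env var, host and path from the config file.
	mixed := &url.URL{
		Scheme: fromEnv.Scheme,  // "http"
		Host:   fromConfig.Host, // "otlp.datadoghq.com"
		Path:   fromConfig.Path, // "/v1/traces"
	}

	fmt.Println(mixed.String()) // http://otlp.datadoghq.com/v1/traces
}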
Steps to reproduce
- Create a collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  memory_limiter:
    # 80% of maximum memory up to 2G
    limit_mib: 1500
    # 25% of limit up to 2G
    spike_limit_mib: 512
    check_interval: 5s
extensions:
  zpages: {}
exporters:
  debug:
service:
  extensions: [zpages]
  telemetry:
    traces:
      propagators:
        - "tracecontext"
      processors:
        - batch:
            exporter:
              otlp:
                endpoint: "https://otlp.datadoghq.com/v1/traces"
                protocol: http/protobuf
  pipelines:
    traces/1:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [debug]
- Run the following docker command:
docker run \
-e OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://example.com/v1/traces \
-e CLUSTER_NAME=dev-cluster \
-p 127.0.0.1:4317:4317 \
-p 127.0.0.1:4318:4318 \
-v $(pwd)/collector-config.yaml:/etc/otelcol/config.yaml \
otel/opentelemetry-collector:0.141.0 \
--config /etc/otelcol/config.yaml
- Send a trace to this docker container:
go install github.com/open-telemetry/opentelemetry-collector-contrib/cmd/telemetrygen@latest
telemetrygen traces --otlp-insecure --traces 1
- Check the docker container logs:
2025-12-14T01:05:00.359Z info Traces {"resource": {"service.instance.id": "5b61d3bc-0d3a-48d2-be09-cbb165288cc3", "service.name": "otelcol", "service.version": "0.141.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "resource spans": 1, "spans": 2}
2025/12/14 01:05:32 traces export: Post "http://otlp.datadoghq.com/v1/traces": processor export timeout
You can see that the POST URL changed to http://otlp.datadoghq.com/v1/traces, even though the endpoint configured in collector-config.yaml uses https; see the sketch below.
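As a quick, purely illustrative sanity check using the URLs from the steps above, parsing the three endpoints shows the POST URL's scheme matching the environment variable while its host and path match the config file:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	observed, _ := url.Parse("http://otlp.datadoghq.com/v1/traces")    // POST URL from the docker log
	fromEnv, _ := url.Parse("http://example.com/v1/traces")            // OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
	fromConfig, _ := url.Parse("https://otlp.datadoghq.com/v1/traces") // endpoint in collector-config.yaml

	fmt.Println("scheme matches env var:", observed.Scheme == fromEnv.Scheme)       // true
	fmt.Println("scheme matches config: ", observed.Scheme == fromConfig.Scheme)    // false
	fmt.Println("host and path match config:",
		observed.Host == fromConfig.Host && observed.Path == fromConfig.Path) // true
}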
What did you expect to see?
The endpoint configured in the file should take precedence over the environment variable.
What did you see instead?
u.Scheme is taken from OTEL_EXPORTER_OTLP_TRACES_ENDPOINT, while u.Path uses the endpoint from the config file, so the two sources are mixed into a single URL.
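For comparison, the precedence I expected could look like the following sketch; resolveEndpoint is a hypothetical helper used only for illustration, not an actual Collector function. The file-configured endpoint wins, and the environment variable is only a fallback:

package main

import (
	"fmt"
	"os"
)

// resolveEndpoint is a hypothetical helper, not an actual Collector function:
// it returns the file-configured endpoint when present and only falls back to
// the environment variable otherwise.
func resolveEndpoint(configured string) string {
	if configured != "" {
		return configured // the config file should take precedence
	}
	return os.Getenv("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT")
}

func main() {
	os.Setenv("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT", "http://otlp.example.com/v1/traces")
	fmt.Println(resolveEndpoint("https://otlp.datadoghq.com/v1/traces"))
	// Expected output: https://otlp.datadoghq.com/v1/traces
}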
Collector version
We build our own collector image with the latest libraries, v0.141.0.
Environment information
Environment
OS: busybox:latest
Compiler (if manually compiled): golang:1.25-alpine
OpenTelemetry Collector configuration
service:
  telemetry:
    traces:
      propagators:
        - "tracecontext"
      processors:
        - batch:
            exporter:
              otlp:
                endpoint: https://otlp.datadoghq.com/v1/traces
                protocol: http/protobuf
Log output
traces export: processor export timeout: retry-able request failure: Post "http://otlp.datadoghq.com/v1/traces": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Additional context
No response