Description
Version of dd-trace-go
1.67.1
Describe what happened:
Hello! While reviewing our Datadog usage in September, we noticed a large spike in trace ingestion volume starting on September 10th, which is when we deployed an update of dd-trace-go from 1.67.0 to 1.67.1. After reverting to 1.67.0 today, we're seeing trace volume return to expected levels.
I inspected the `datadog.estimated_usage.apm.ingested_bytes` metric and found that the spike was isolated to a single `sampling_resource_name` value (an endpoint of another service that calls this service heavily). Before the upgrade, this endpoint generated around 4 GB of traces per day; after the upgrade it was generating around 80–100 GB a day, a 20–25x increase.
Also potentially worth noting: when I applied a "sum by" of `sampling_resource_name` on the `datadog.estimated_usage.apm.ingested_bytes` metric, before the upgrade there was a `sampling_resource_name:unknown` value generating around 30 GB of traces a day. That value disappeared when we upgraded and has now returned after reverting.
Please let me know if there are any other details that might be helpful to share.
Describe what you expected:
I would expect trace ingestion volume to remain about the same. Based on the two PRs included in the changelog:
- ddtrace/tracer: add IsTraceRoot to clients-side-stats #2821
- ddtrace/tracer: fixed resampling to occur on root span only #2824
...the latter PR appears to reduce "resampling", so with my limited understanding of how this library works I would, if anything, have expected lower trace ingestion volume.
Steps to reproduce the issue:
I didn't reproduce this in an isolated way.
Additional environment details (Version of Go, Operating System, etc.):
Go version: 1.23
OS: Alpine Linux 3.20.1
Environment variables:
- `DD_TRACE_SAMPLE_RATE=0.1`
- `DD_TRACE_SAMPLING_RULES=[{"service": "primary.db", "sample_rate": 0.03}, {"service": "replica.db", "sample_rate": 0.03}]`