`exporter/exporterhelper/README.md` (18 additions, 37 deletions)
```diff
@@ -21,15 +21,16 @@ The following configuration options can be modified:
 - `sending_queue`
   - `enabled` (default = true)
   - `num_consumers` (default = 10): Number of consumers that dequeue batches; ignored if `enabled` is `false`
-  - `wait_for_result` (default = false): determines if incoming requests are blocked until the request is processed or not.
+  - `wait_for_result` (default = false): Determines if incoming requests are blocked until the request is processed or not.
   - `block_on_overflow` (default = false): If true, blocks the request until the queue has space otherwise rejects the data immediately; ignored if `enabled` is `false`
   - `sizer` (default = requests): How the queue and batching is measured. Available options:
     - `requests`: number of incoming batches of metrics, logs, traces (the most performant option);
     - `items`: number of the smallest parts of each signal (spans, metric data points, log records);
     - `bytes`: the size of serialized data in bytes (the least performant option).
   - `queue_size` (default = 1000): Maximum size the queue can accept. Measured in units defined by `sizer`
   - `batch`: see below.
-  - `concurrency_controller` (default = none): The ID of an extension implementing the `ConcurrencyController` interface (e.g., `adaptive_concurrency`). When configured, this extension dynamically manages the number of concurrent requests sent to the backend based on real-time signals like latency and error rates, providing adaptive backpressure to prevent downstream overload.
+  - `concurrency_controller` (default = none): The ID of an extension implementing the `RequestMiddlewareFactory` interface (e.g., `adaptive_concurrency`). When configured, exporterhelper executes export requests through the middleware, enabling logic such as adaptive concurrency, rate limiting, or circuit breaking.
```
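Taken together, the options above map onto a `sending_queue` block like the following sketch. The keys are the ones documented in the list; the values are illustrative rather than prescriptive, and `adaptive_concurrency` is simply the example extension ID used later in this README.

```yaml
exporters:
  otlp:
    endpoint: https://my-backend:4317
    sending_queue:
      enabled: true            # default = true
      num_consumers: 10        # default = 10; ignored if enabled is false
      wait_for_result: false   # default = false
      block_on_overflow: false # default = false
      sizer: items             # requests (default) | items | bytes
      queue_size: 1000         # default = 1000, measured in units defined by sizer
      concurrency_controller: adaptive_concurrency  # optional; default = none
```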
```diff
 To use dynamic concurrency control, the following setting needs to be set:
-
-- `sending_queue`
-  - `concurrency_controller` (default = none): When set, enables adaptive backpressure by using the specified extension to dynamically manage the number of concurrent requests.
-
-#### How it works
-
-Traditionally, exporters use a static `num_consumers` to determine how many concurrent requests can be sent to a backend. However, static limits are difficult to tune:
-- **Too high:** You risk overwhelming the downstream backend, leading to increased latency, 429 (Too Many Requests) errors, and "death spirals."
-- **Too low:** You underutilize the available network and backend capacity, causing the collector's internal queue to fill up unnecessarily.
-
-The Concurrency Controller implementation (e.g., `adaptive_concurrency`) replaces the fixed worker pool with a dynamic permit system based on the **AIMD (Additive Increase / Multiplicative Decrease)** algorithm.
-1. **Acquire:** Before an export attempt begins, the exporter asks the controller for a permit. If the current dynamic limit is reached, the request blocks until a slot becomes available.
-2. **Measure:** The controller tracks the **Round Trip Time (RTT)** and the outcome (success or retryable error) of every request.
-3. **Adapt:** At regular intervals, the controller compares the recent RTT baseline against current performance:
-   - **Increase:** If latency is stable and requests are succeeding, the controller increases the concurrency limit to maximize throughput.
-   - **Decrease:** If latency spikes or the backend returns "backpressure" signals (like HTTP 429 or gRPC `ResourceExhausted`), the controller immediately shrinks the limit to allow the backend to recover.
+Traditionally, exporters use a static `num_consumers` to determine how many concurrent requests can be sent to a backend. A Request Middleware implementation allows extensions to replace or augment this behavior with dynamic logic.

-This feedback loop ensures the Collector automatically finds the "sweet spot" of maximum throughput without requiring manual tuning as network conditions or backend capacity change.
+The middleware wraps the request execution, allowing it to:
+1. **Intercept:** Acquire permits or check conditions before the request starts (e.g., rate limiting).
+2. **Measure:** Track the duration and outcome of the request (e.g., adaptive concurrency).
+3. **Control:** Block or fail requests based on internal logic.

```
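The removed text above spells out the AIMD behavior: start with a small limit, add capacity while latency is stable, and cut the limit sharply on HTTP 429 or gRPC `ResourceExhausted`. As a purely hypothetical sketch of the knobs such behavior implies, an extension like `adaptive_concurrency` might expose settings along these lines; none of these field names are taken from the PR or from the extension's actual schema.

```yaml
extensions:
  adaptive_concurrency:
    # Hypothetical field names, shown only to illustrate the AIMD parameters
    # described above; consult the extension's own README for the real schema.
    initial_concurrency: 4   # start with a small number of parallel requests
    max_concurrency: 100     # ceiling the additive increase can reach
    backoff_factor: 0.9      # multiplicative decrease applied on 429 / ResourceExhausted
```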
```diff
 #### Interaction with num_consumers

-When a concurrency_controller is configured, it acts as a gatekeeper on top of the existing queue consumers. The effective concurrency is the minimum of the controller's dynamic limit and the static num_consumers.
-
-To ensure the controller has enough headroom to operate, this component enforces a minimum of 200 consumers when a controller is active.
-
-- Automatic Adjustment: If you explicitly set num_consumers to a low value (e.g., 10), it will be automatically increased to 200 to prevent artificial bottlenecks.
+When a middleware is configured (via `concurrency_controller`), it acts as a gatekeeper on top of the existing queue consumers. The effective concurrency is the minimum of the middleware's logic and the static `num_consumers`.

-- High Concurrency: If you need more than 200 concurrent requests (e.g., num_consumers: 500), your configured value will be respected.
-**Recommendation:** generally, you do not need to configure num_consumers when using the controller; the default headroom (200) is sufficient for most use cases. Only increase it if you expect to exceed 200 concurrent requests.
+- **Warning:** If you leave `num_consumers` at the default value (10) while using middleware that requires high concurrency (like Adaptive Request Concurrency), the queue sender will log a warning.
+- **Recommendation:** Set `num_consumers` high enough to avoid capping the middleware's maximum intended concurrency (for example, match the middleware's configured max).

```
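A short sketch of the failure mode the new **Warning** bullet describes, assuming the `adaptive_concurrency` extension used elsewhere in this README:

```yaml
# Anti-pattern: the middleware is configured, but num_consumers is left at its
# default of 10, so at most 10 exports run concurrently regardless of the limit
# the middleware would allow, and the queue sender logs a warning.
exporters:
  otlp:
    sending_queue:
      enabled: true
      concurrency_controller: adaptive_concurrency
      # num_consumers: 10 (implicit default) caps the effective concurrency
```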
```diff
 #### Example Configuration

-In this example, the OTLP exporter is configured to use the `adaptive_concurrency` extension. The extension will start with a small number of parallel requests and automatically scale up to 100 based on the health of the OTLP endpoint.
+In this example, an OTLP exporter is configured to use the `adaptive_concurrency` extension (which implements the Request Middleware interface).

 ```yaml
 exporters:
   otlp:
     endpoint: https://my-backend:4317
     sending_queue:
       enabled: true
-      # Link to the concurrency controller extension defined below
+      num_consumers: 100 # Provide headroom for the middleware
```
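Under the usual Collector conventions (extensions are declared under `extensions:` and enabled via `service.extensions`), a complete version of this example would presumably continue roughly as follows; the `concurrency_controller` line and the empty extension settings are assumptions, not text quoted from the diff.

```yaml
exporters:
  otlp:
    endpoint: https://my-backend:4317
    sending_queue:
      enabled: true
      num_consumers: 100 # Provide headroom for the middleware
      concurrency_controller: adaptive_concurrency # assumed continuation of the example

extensions:
  adaptive_concurrency: {} # extension-specific settings omitted; schema not shown here

service:
  extensions: [adaptive_concurrency]
  # pipelines omitted for brevity
```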