| Status | |
|---|---|
| Stability | development: logs |
| Distributions | contrib |
| Issues | |
The drain processor applies the Drain log clustering algorithm to log records as they pass through the pipeline. For each record it derives a template string (e.g. "user <*> logged in from <*>") and attaches it as an attribute on the record.
This processor annotates; it does not filter. Use the filter processor downstream to act on the log.record.template attribute — for example, to drop entire classes of noisy logs by pattern.
Drain builds a parse tree from the token structure of log lines. Lines with similar structure are grouped into a cluster, and a template is derived by replacing variable tokens with <*> wildcards. As more logs arrive the templates become more accurate and stable.
Use the template string for filtering rules; given the same configuration and log patterns, it converges to the same value across instances (see Deployment considerations).
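To make the clustering concrete, here is a heavily simplified Python sketch of Drain-style template derivation: token-count grouping plus a similarity merge. It is illustrative only, not the processor's actual implementation, and the flat per-length lists stand in for the real parse tree.

```python
# Minimal sketch of Drain-style clustering (illustrative only).
# Lines with the same token count are grouped; a cluster absorbs a new
# line when enough token positions match, replacing the positions that
# differ with the "<*>" wildcard.

SIM_THRESHOLD = 0.4  # mirrors the sim_threshold config option

def similarity(template, tokens):
    # Fraction of positions that match (wildcards always match).
    same = sum(1 for a, b in zip(template, tokens) if a == b or a == "<*>")
    return same / len(tokens)

def merge(template, tokens):
    # Keep matching tokens; replace differing positions with a wildcard.
    return [a if a == b or a == "<*>" else "<*>" for a, b in zip(template, tokens)]

clusters = {}  # token count -> list of templates (stand-in for the parse tree)

def train(line):
    tokens = line.split()
    group = clusters.setdefault(len(tokens), [])
    for i, tmpl in enumerate(group):
        if similarity(tmpl, tokens) >= SIM_THRESHOLD:
            group[i] = merge(tmpl, tokens)
            return " ".join(group[i])
    group.append(tokens)  # no similar cluster: start a new one
    return " ".join(tokens)

train("user alice logged in from 10.0.0.1")
print(train("user bob logged in from 192.168.1.1"))
# → user <*> logged in from <*>
```

Once the wildcarded template exists, every further line of that shape maps to the same template string, which is what makes it usable as a filtering key.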
```yaml
processors:
  drain:
    # Drain parse tree parameters
    log_cluster_depth: 4              # default: 4 (minimum: 3)
    sim_threshold: 0.4                # default: 0.4, range [0.0, 1.0]
    max_children: 100                 # default: 100
    max_clusters: 0                   # default: 0 (unlimited; LRU eviction when > 0)
    extra_delimiters: []              # default: [] (extra token delimiters beyond whitespace)

    # Body extraction
    body_field: ""                    # default: "" (use the full body string)

    # Output attribute name
    template_attribute: "log.record.template"  # default

    # Seeding (optional)
    seed_templates: []
    seed_logs: []

    # Warmup mode
    warmup_mode: passthrough          # default: "passthrough" | "buffer"
    warmup_min_clusters: 10           # default: 10 (only used when warmup_mode: buffer)
    warmup_buffer_max_logs: 10000     # default: 10000 (only used when warmup_mode: buffer)
```

| Field | Type | Default | Description |
|---|---|---|---|
| `log_cluster_depth` | int | 4 | Max depth of the Drain parse tree. Higher values produce more specific templates. Minimum: 3. |
| `sim_threshold` | float | 0.4 | Similarity threshold in [0.0, 1.0]. Lines below this threshold create a new cluster rather than merging with an existing one. |
| `max_children` | int | 100 | Maximum children per parse tree node. |
| `max_clusters` | int | 0 | Maximum clusters tracked. When exceeded, the least-recently-used cluster is evicted. 0 means unlimited. |
| `extra_delimiters` | []string | [] | Additional token delimiters beyond whitespace (e.g. `[",", ":"]`). |
| `body_field` | string | "" | If set and the log body is a structured map, the value of this top-level key is used as the text to template instead of the full body. |
| `template_attribute` | string | "log.record.template" | Attribute key written with the derived template string. |
| `seed_templates` | []string | [] | Template strings to pre-load at startup (see Seeding). |
| `seed_logs` | []string | [] | Raw example log lines to train on at startup (see Seeding). |
| `warmup_mode` | string | "passthrough" | Controls behavior during the warmup period: "passthrough" (default) or "buffer" (see Warmup mode). |
| `warmup_min_clusters` | int | 10 | Minimum distinct clusters before warmup ends. Only used when `warmup_mode: buffer`. |
| `warmup_buffer_max_logs` | int | 10000 | Maximum records to buffer before flushing regardless of cluster count. Must be > 0. Only used when `warmup_mode: buffer`. |
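The `max_clusters` eviction behavior can be pictured as a small LRU cache over clusters. The following is an illustrative Python sketch, not the processor's Go implementation; the class and method names are invented for the example.

```python
from collections import OrderedDict

class ClusterCache:
    # Hypothetical illustration of max_clusters LRU eviction: once the
    # cap is exceeded, the least-recently-used cluster is dropped.
    def __init__(self, max_clusters):
        self.max_clusters = max_clusters
        self.clusters = OrderedDict()  # cluster id -> template

    def touch(self, key, template):
        self.clusters[key] = template
        self.clusters.move_to_end(key)            # mark as most recently used
        if self.max_clusters and len(self.clusters) > self.max_clusters:
            self.clusters.popitem(last=False)     # evict least recently used

cache = ClusterCache(max_clusters=2)
cache.touch("c1", "user <*> logged in")
cache.touch("c2", "connected to <*>")
cache.touch("c1", "user <*> logged in")  # c1 becomes most recent again
cache.touch("c3", "heartbeat ping <*>")  # evicts c2, the LRU entry
print(list(cache.clusters))  # → ['c1', 'c3']
```

With `max_clusters: 0` the guard never fires, matching the "unlimited" default.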
Seeding pre-populates the Drain tree before any live logs arrive. This is the primary mechanism for stable templates across restarts.
Provide known template strings directly. The processor trains on each entry at startup, establishing clusters for those patterns immediately.
```yaml
processors:
  drain:
    seed_templates:
      - "user <*> logged in from <*>"
      - "connected to <*>"
      - "heartbeat ping <*>"
```

Provide raw example log lines. The processor trains on them at startup, letting Drain derive the templates itself. Useful when exact template strings are not known in advance.
```yaml
processors:
  drain:
    seed_logs:
      - "user alice logged in from 10.0.0.1"
      - "user bob logged in from 192.168.1.1"
      - "connected to 10.0.0.1"
```

Empty and whitespace-only entries in both lists are silently skipped.
Each collector instance builds its Drain parse tree independently in memory. Two instances processing the same log patterns will converge on identical templates because the Drain algorithm is deterministic: given the same configuration and a representative sample of log forms, the same token structure produces the same template string.
The main caveat is the early training phase. Before an instance has seen enough lines to abstract a wildcard (e.g. before "user alice logged in" and "user bob logged in" have both been observed), different instances may temporarily produce different templates for the same logical pattern. This is most noticeable at startup with low-volume or highly variable log streams.
Mitigations:
- Use `seed_templates` or `seed_logs` to pre-load known patterns at startup. With a comprehensive seed set, instances start in an already-converged state and live training only fills in the gaps.
- Use `buffer` warmup mode if downstream consumers require stable templates from the first record they receive.
The warmup_mode setting controls what happens before the parse tree has stabilized — i.e. before it has observed enough distinct log forms to produce reliable, abstracted templates.
| Mode | Behavior | Trade-off |
|---|---|---|
| `passthrough` (default) | Annotates every record immediately. Early records may receive less-abstracted templates (e.g. a raw line rather than a wildcarded form) that change as more data arrives. | No latency or memory overhead. Downstream consumers must tolerate template churn at startup. |
| `buffer` | Holds records in memory until `warmup_min_clusters` distinct templates have been observed, or `warmup_buffer_max_logs` is reached. Flushes all buffered records at once, fully annotated. | Templates are stable from the first record downstream sees. Adds startup latency and memory pressure proportional to buffer size. |
Choose `passthrough` when:

- Downstream consumers are tolerant of occasional template changes (e.g. they use templates for volume aggregation where a brief inconsistency is acceptable).
- You are using `seed_templates` or `seed_logs` to pre-stabilize the tree.
Choose `buffer` when:

- A downstream `filter` processor must reliably match templates from the very first record — emitting an unstabilized template could cause records to pass through a filter that should have dropped them.
- You have strict ordering or completeness requirements and cannot tolerate records being annotated with different templates for the same log pattern.
```yaml
processors:
  drain:
    warmup_mode: buffer
    warmup_min_clusters: 20
    warmup_buffer_max_logs: 5000
```

Memory note: in buffer mode, all records are held in memory until flush. Size the buffer with `warmup_buffer_max_logs` according to your available memory and expected log volume during startup.
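The buffer warmup flow can be sketched in a few lines of illustrative Python. The `derive_template` callback is a hypothetical stand-in for the Drain tree, and the real processor keeps annotating in streaming fashion after warmup ends; this sketch only shows the hold-then-flush behavior.

```python
# Illustrative sketch (not the processor's code) of buffer warmup:
# hold records until enough distinct clusters exist or the buffer cap
# is hit, then annotate and flush everything downstream in one batch.

def warmup_buffer(records, derive_template, min_clusters=10, max_logs=10000):
    buffered, seen_templates = [], set()
    for record in records:
        buffered.append(record)
        seen_templates.add(derive_template(record))
        if len(seen_templates) >= min_clusters or len(buffered) >= max_logs:
            break  # warmup complete: tree is considered stable enough
    # Annotate all buffered records with their (now stable) templates.
    return [(r, derive_template(r)) for r in buffered]

out = warmup_buffer(
    ["a 1", "a 2", "b 1"],
    derive_template=lambda line: line.split()[0] + " <*>",
    min_clusters=2,
    max_logs=100,
)
print(out)
# → [('a 1', 'a <*>'), ('a 2', 'a <*>'), ('b 1', 'b <*>')]
```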
The processor emits the following internal telemetry metrics:
| Metric | Type | Description |
|---|---|---|
| `otelcol_processor_drain_clusters_active` | gauge | Current number of active clusters in the Drain parse tree. Useful for tracking tree growth and stability over time. |
| `otelcol_processor_drain_log_records_annotated` | counter | Number of log records successfully annotated with a template. |
| `otelcol_processor_drain_log_records_unannotated` | counter | Number of log records not annotated — empty body, Train error, or no cluster returned by Drain. |
The processor sets the following attribute on each log record:
| Attribute | Type | Example | Description |
|---|---|---|---|
| `log.record.template` | string | `"user <*> logged in from <*>"` | The Drain-derived template string. Stable within an instance once the tree has warmed up. Use this for filtering rules. |
The attribute name is configurable via `template_attribute`.

Semantic conventions: `log.record.template` aligns with the proposed OTel attribute in open-telemetry/semantic-conventions#1283 and #2064. These names may be updated if a convention is formally adopted.
The following pipeline annotates logs with Drain templates and then drops known noisy patterns using the filter processor:
```yaml
processors:
  drain:
    log_cluster_depth: 4
    sim_threshold: 0.4
    max_clusters: 500
    seed_templates:
      - "user <*> logged in from <*>"
      - "connected to <*>"
      - "heartbeat ping <*>"
    warmup_mode: buffer
    warmup_min_clusters: 20
    warmup_buffer_max_logs: 5000
  filter/drop_noisy:
    error_mode: ignore
    logs:
      log_record:
        - attributes["log.record.template"] == "heartbeat ping <*>"
        - attributes["log.record.template"] == "connected to <*>"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [drain, filter/drop_noisy]
      exporters: [otlp]
```

`body_field` is a convenience for pipelines where the log body is a structured map and you do not have full control over how upstream processors shape it.
If you do control the pipeline, the preferred approach is a move operator in the filelog receiver (or equivalent) to promote the message field back to a plain string body before the drain processor sees the record:
```yaml
operators:
  - type: json_parser
  - type: move
    from: body.message
    to: body
```

If you cannot do that — for example, logs arrive via OTLP already structured — set `body_field` to the map key whose value should be fed to Drain:

```yaml
processors:
  drain:
    body_field: "message"
```

Given a log body `{"level": "info", "message": "user alice logged in from 10.0.0.1"}`, only the message value is fed to Drain. The full body is used unchanged if the field is absent or the body is not a map.
Note: `body_field` only supports a single top-level key. Full OTTL path expressions (e.g. `body["event"]["message"]`) are not supported and are noted as a future extension.
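The single-key extraction rule can be summarized in a few lines of illustrative Python (the function name is invented for the example; the actual processor is written in Go):

```python
# Hedged sketch of the body_field selection rule: a single top-level
# key is extracted from a map body; anything else falls back to the
# full body, stringified.

def text_for_drain(body, body_field=""):
    if body_field and isinstance(body, dict) and body_field in body:
        return str(body[body_field])
    return body if isinstance(body, str) else str(body)

body = {"level": "info", "message": "user alice logged in from 10.0.0.1"}
print(text_for_drain(body, "message"))        # → user alice logged in from 10.0.0.1
print(text_for_drain("plain line", "message"))  # → plain line
```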
- Snapshot persistence: save and restore the Drain tree state across restarts, eliminating the need for seeding. This requires serialization support and is tracked as a future improvement.
- OTTL body extraction: support full OTTL path expressions for `body_field` instead of a single top-level key name.
- Multi-instance synchronization: optional shared snapshot file or gossip-based tree merging for consistent templates across horizontally scaled deployments.