# Telemetry Configuration

Dispenser includes a built-in, high-performance telemetry system powered by [Delta Lake](https://delta.io/). It allows you to automatically collect deployment events, container health status, application logs/traces, and raw container output, writing them directly to data lakes (S3, GCS, Azure) or local filesystems in Parquet format.

## Overview

The telemetry system runs in a dedicated, isolated thread to ensure that heavy I/O operations never block the main orchestration loop. It provides:

1. **Deployment Tracking**: Every time a container is created, updated, or restarted, a detailed event is logged.
2. **Health Monitoring**: Periodically samples the status of all managed containers (CPU, memory, uptime, health checks).
3. **Application Telemetry (OTLP)**: Ingests structured logs and traces from services using standard OpenTelemetry SDKs (see the export sketch after this list).
4. **Container Output**: Captures raw `stdout` and `stderr` streams from all managed containers with sequence-guaranteed ordering.
5. **Delta Lake Integration**: Writes data using the Delta Lake protocol, enabling ACID transactions, scalable metadata handling, and direct compatibility with tools like Spark, Trino, Athena, and Databricks (see the read-back sketch below).
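
For item 3, here is a minimal sketch of a service exporting traces over OTLP with the standard OpenTelemetry Python SDK. The endpoint address and service name are assumptions for illustration; point the exporter at whatever OTLP address your Dispenser instance exposes. Logs can be shipped the same way via the OTLP log exporter.

```python
# Minimal sketch: a service emitting traces over OTLP so the telemetry system
# can ingest them. The endpoint and service name below are assumptions;
# substitute the OTLP address your Dispenser instance actually listens on.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Tag every span with the emitting service's name.
provider = TracerProvider(resource=Resource.create({"service.name": "my-service"}))

# Batch spans and ship them to the OTLP gRPC endpoint (assumed address).
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("handle-request"):
    pass  # application work happens inside the span
```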
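
Because the output is a set of standard Delta tables (item 5), anything that speaks Delta Lake can read it back, not just the engines listed above. The sketch below uses the Python `deltalake` package; the local path and the `deployments` table name are assumptions, so substitute the location your telemetry configuration actually writes to.

```python
# Minimal sketch: reading telemetry back with the `deltalake` package
# (pip install deltalake). The local path and table name are assumptions;
# an S3/GCS/Azure URI works the same way once credentials are configured.
from deltalake import DeltaTable

# Open the Delta table that deployment events are written to.
deployments = DeltaTable("./telemetry/deployments")

# Load it into pandas for ad-hoc inspection; Spark, Trino, Athena, and
# Databricks can query the same table directly.
df = deployments.to_pandas()
print(df.head())
```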