This directory contains three binaries for a two-terminal OTLP demo over TCP.

- `otlp-bofh-emitter`
  - emits a real OTLP/HTTP protobuf log batch every second
  - prints the exact log content it is sending in plain text
- `otlp-bofh-grpc-emitter`
  - emits a real OTLP/gRPC log batch every second
  - prints the exact log content it is sending in plain text
- `otlp-demo-collector`
  - listens on a TCP socket for OTLP/HTTP `POST /v1/logs`
  - decodes the protobuf payload
  - prints the same log content to stdout with terminal styling
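
For reference, the shape of the request the collector accepts can be sketched without the demo binaries. The sketch below is illustrative only: it uses the OTLP/HTTP *JSON* encoding of the same `POST /v1/logs` endpoint from the OTLP spec, whereas the demo emitter and collector speak the protobuf encoding, and the field values (`service.name`, scope name) mirror the example output further down rather than this project's code.

```python
import json
import urllib.request

def build_logs_payload(message: str, severity: str = "WARN") -> dict:
    """Build a minimal OTLP logs payload (JSON encoding of ExportLogsServiceRequest)."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "bofh-emitter"}}
            ]},
            "scopeLogs": [{
                "scope": {"name": "logjet-demo-emitter"},
                "logRecords": [{
                    "timeUnixNano": "1700000000000000000",
                    "severityText": severity,
                    "body": {"stringValue": message},
                }],
            }],
        }]
    }

def post_logs(endpoint: str, payload: dict) -> int:
    """POST the payload to an OTLP/HTTP logs endpoint; returns the HTTP status.

    Note: the demo collector decodes protobuf, so it may reject JSON bodies;
    this targets any OTLP/HTTP endpoint that accepts the JSON encoding.
    """
    req = urllib.request.Request(
        f"{endpoint}/v1/logs",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (requires a running OTLP/HTTP endpoint that accepts JSON):
# post_logs("http://127.0.0.1:4318", build_logs_payload("BOFH excuse #1: ..."))
```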
It also contains scenario demos under subdirectories:

- `logjet-file`: OTLP/HTTP emitter into file-backed `ljd`
- `logjet-grpc-file`: OTLP/gRPC emitter into file-backed `ljd`
- `kill-bill`: cut a `.logjet` file down to its middle third and recover later good blocks
- `memory-buffer`: kept-front-jar plus rotating-tail memory retention
- `drain-once`: preserved startup messages are consumed on the first drain and do not appear on the second
- `multi-emitter`: five emitters into one `ljd`, then late replay into one collector
- `multi-emitter-continuous`: five emitters running continuously into one `ljd` and one live collector
- `multi-client-behaviour`: one replay client stalls while another keeps flowing
- `replay-handoff`: a late replay client drains retained backlog and then continues live on the same connection
- `cpp-shared-lib`: a C++ process loads `liblogjet.so`, sends OTLP logs into `ljd`, and opens the result in `ljx view`
- `file-replay`: replay stored `.logjet` files into a collector
- `file-tooling`: inspect rotated file segments and prune archived files deliberately
- `parquet-export`: generate about 5K BOFH log entries, then export that `.logjet` file to Parquet through the external exporter plugin
- `tui-view`: generate 1000 randomized log entries and open `ljx view` on the result
- `bridge-resume`: consumer restart resumes from persisted sequence state without replaying from zero
- `upstream-reset-resume`: consumer bridge detects upstream reset and resumes a fresh stream instead of getting stuck
- `backpressure`: slow collector demo showing `block`, `disconnect`, and `drop-newest`
- `ingest-guardrails`: oversized-batch rejection and concurrent ingest-client cap
- `ingest-overload`: rate-limited ingest with operator-visible counters and severity-aware shedding
- `remote-drain`: appliance-side `ljd` drained by a remote-side `ljd bridge`
- `remote-drain-tls`: same remote-drain topology, but with TLS and mutual TLS on the replay link
- `secure-pipeline`: HTTPS OTLP ingest into `ljd`, then HTTPS collector export on replay
- `proxy-to-vector`: appliance-side `ljd` replayed through `ljd bridge` into Vector stdout over OTLP/HTTP or OTLP/gRPC
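
The `backpressure` demo above contrasts three overflow policies for a slow collector. A minimal, hypothetical sketch of how `drop-newest` differs from `block` on a bounded buffer (names and sizes invented for illustration; the third policy, `disconnect`, would simply close the ingest connection and is omitted; none of this is the project's implementation):

```python
from collections import deque
from queue import Queue, Full

def offer_drop_newest(buf: deque, item, capacity: int) -> bool:
    """drop-newest: when the buffer is full, shed the incoming item."""
    if len(buf) >= capacity:
        return False  # newest item is dropped; buffer keeps older entries
    buf.append(item)
    return True

def offer_block(q: Queue, item, timeout: float) -> bool:
    """block: wait until the slow consumer frees space (bounded by a timeout here)."""
    try:
        q.put(item, block=True, timeout=timeout)
        return True
    except Full:
        return False

# With capacity 3, the first three offers succeed and the last two are shed.
buf = deque()
results = [offer_drop_newest(buf, i, capacity=3) for i in range(5)]
```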
Open two terminals in the project root.

Terminal 1: start the collector

```sh
cargo run -p otlp-demo --bin otlp-demo-collector -- 127.0.0.1:4318
```

Terminal 2: start the emitter

```sh
cargo run -p otlp-demo --bin otlp-bofh-emitter -- 127.0.0.1:4318
```

Or use the gRPC emitter against an OTLP/gRPC logs endpoint:

```sh
cargo run -p otlp-demo --bin otlp-bofh-grpc-emitter -- 127.0.0.1:4317
```

The emitter prints plain output like:
```
service=bofh-emitter scope=logjet-demo-emitter severity=WARN ts=1700000000000000000
message: BOFH excuse #1: magnetic interference from a mislabeled coffee mug
```
The collector prints the same fields with terminal styling.
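
The emitter's plain `key=value` header line is easy to post-process. A small sketch (assuming the space-separated `key=value` format shown above; the parser itself is not part of this project):

```python
def parse_emitter_line(line: str) -> dict:
    """Parse a 'service=... scope=... severity=... ts=...' header line into a dict."""
    fields = dict(part.split("=", 1) for part in line.split())
    fields["ts"] = int(fields["ts"])  # nanoseconds since the Unix epoch
    return fields

line = "service=bofh-emitter scope=logjet-demo-emitter severity=WARN ts=1700000000000000000"
parsed = parse_emitter_line(line)
# e.g. filter replayed output for WARN-and-above records by checking parsed["severity"]
```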
If you do not pass an address:

- collector binds to `0.0.0.0:4318`
- emitter sends to `127.0.0.1:4318`
- the transport is OTLP/HTTP protobuf
- the gRPC emitter uses OTLP/gRPC logs export
- the collector is intentionally tiny and is only for demos and quick local setups
- this is useful when setting up a real OTel Collector would be overkill