
# influxdb3-ref-bess

Reference architecture: InfluxDB 3 Enterprise for Battery Energy Storage Systems.

Runnable in two minutes with docker compose. Generates simulated telemetry for 4 battery packs × 192 cells (~2,000 points/sec), runs anomaly detection and an HTTP API as Python plugins inside the database, and ships a control-room dashboard (site-state banner, pack status cards, animated energy-flow schematic, pack-tabbed cell heatmap, event log) — all built on the InfluxDB 3 Enterprise feature set.

## Headline Enterprise features on display

What you'll actually see when you run `make demo`.

### ⬢ Processing Engine — Python plugins running INSIDE the database

Three trigger types, each in its own short Python file under `plugins/`:

| Trigger | File | Fires on | What it does |
|---|---|---|---|
| WAL | `wal_thermal_runaway.py` | every write to `cell_readings` | Inspects each batch; if any cell > 70 °C, writes a row to the `alerts` table. |
| Schedule | `schedule_soh_daily.py` | cron `0 5 0 * * *` (daily 00:05 UTC) | Aggregates the prior 24 h into `pack_rollup_1h`. |
| Request | `request_pack_health.py` | `GET /api/v3/engine/pack_health` | Serves JSON: latest pack state + most recent alert. No app server, no middleware. |

The thermal-runaway scenario in the demo proves the WAL trigger is real: alert rows you query at the end were written by the plugin, not the simulator.

### ⬢ Last Value Cache + Distinct Value Cache — table-valued SQL functions

The LVC keeps the latest value for each unique tag set in memory. A single-row lookup is sub-millisecond; the full 768-row scan in this demo runs in 5–20 ms — roughly an order of magnitude faster than running the equivalent ORDER BY time DESC LIMIT 768 against the underlying cell_readings table, and the cost stays flat as history grows.

```sql
SELECT pack_id, module_id, cell_id, voltage, temperature_c
FROM last_cache('cell_readings', 'cell_last')   -- exactly 768 rows
ORDER BY pack_id, module_id, cell_id;

SELECT cell_id FROM distinct_cache('cell_readings', 'cell_id_distinct');
```

The UI's heatmap polls the LVC every 5 seconds; that's how it stays cheap as the underlying table grows.
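That poll could look roughly like this from any Python client. The `last_cache()` table function and the `POST /api/v3/query_sql` endpoint come from the InfluxDB 3 HTTP API; the database name `bess`, cache name `cell_last`, and token handling follow this repo's conventions and may differ in yours:

```python
import json
import urllib.request

def lvc_sql(table: str, cache: str) -> str:
    """SQL that reads the in-memory Last Value Cache, never the full table."""
    return (
        "SELECT pack_id, module_id, cell_id, voltage, temperature_c "
        f"FROM last_cache('{table}', '{cache}') "
        "ORDER BY pack_id, module_id, cell_id"
    )

def poll_heatmap(base_url: str, token: str):
    """One heatmap refresh: ~768 rows, flat cost as history grows."""
    req = urllib.request.Request(
        f"{base_url}/api/v3/query_sql",
        data=json.dumps({
            "db": "bess",
            "q": lvc_sql("cell_readings", "cell_last"),
            "format": "json",
        }).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(lvc_sql("cell_readings", "cell_last"))
```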

The LVC and DVC each need their target table to exist when the cache is created. `influxdb/init.sh` declares every BESS table up front via `POST /api/v3/configure/table` (`influxdb3 create table <name> --database bess --tags ... --fields name:type,...`) before creating the caches, so a fresh data volume can boot from zero into a fully functional dashboard without waiting for the simulator's first write. See `influxdb/schema.md`.
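As an illustration of that pre-creation step, here is a sketch of assembling one such `influxdb3 create table` invocation in Python. The flags are the ones quoted above; the tag and field lists are illustrative, not the repo's full schema:

```python
import shlex

def create_table_cmd(name, tags, fields):
    """Return the argv for pre-creating one BESS table before its caches."""
    return [
        "influxdb3", "create", "table", name,
        "--database", "bess",
        "--tags", ",".join(tags),
        "--fields", ",".join(f"{f}:{t}" for f, t in fields),
    ]

if __name__ == "__main__":
    cmd = create_table_cmd(
        "cell_readings",
        tags=["pack_id", "module_id", "cell_id"],
        fields=[("voltage", "float64"), ("temperature_c", "float64")],
    )
    print(shlex.join(cmd))  # copy-pasteable shell command
```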

### ⬢ Offline admin token bootstrap

A production-ready pattern for getting an admin token into a fresh Enterprise server with no manual setup:

1. A one-shot `token-bootstrap` container generates the offline admin token: `influxdb3 create token --admin --offline --output-file /var/lib/influxdb3/.bess-operator-token`.
2. The main `influxdb3` server starts with `--admin-token-file <that-file>`, adopting it as the admin token.
3. The init container and downstream services (simulator, UI) read the same file from a shared volume.

See `docker-compose.yml` and `ARCHITECTURE.md` § 6 for the full flow. (The recovery endpoint at `:8182` only regenerates existing admin tokens — it can't bootstrap one. This pattern can.)
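Step 3 is the part downstream services implement; a minimal sketch of how the simulator or UI could pick up the shared token (the file path matches step 1 above; reading it once at startup and building a standard Bearer header is an assumption about the repo's approach, not its exact code):

```python
from pathlib import Path

# Same file the token-bootstrap container writes and the server adopts.
TOKEN_FILE = Path("/var/lib/influxdb3/.bess-operator-token")

def auth_headers(token_file: Path = TOKEN_FILE) -> dict:
    """Read the bootstrap token from the shared volume and build the
    Authorization header every InfluxDB 3 API call will carry."""
    token = token_file.read_text().strip()
    return {"Authorization": f"Bearer {token}"}
```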

## Architecture — who does what

Four moving pieces. Knowing which is which makes the rest of this repo (and the demo narration) much easier to follow:

| Actor | Container / process | Role |
|---|---|---|
| `[script]` | your terminal | Orchestrates `docker compose`; runs CLI/curl commands during `make demo`. |
| `[ui]` | `bess-ui` (FastAPI + HTMX) | Browser at `localhost:8080` polls FastAPI partials every 1–5 s; FastAPI runs SQL against InfluxDB 3 and renders Jinja2 fragments. |
| `[sim]` | `bess-simulator` (Python) | Continuously writes line protocol — 768 cells, 4 packs, 1 inverter — at the configured rate. |
| `[db]` | `bess-influxdb3` (InfluxDB 3 Enterprise) | The database AND the runtime for the Python plugins. The Processing Engine triggers fire here, in-process. |
```mermaid
flowchart LR
  subgraph host[Host machine]
    direction LR
    sim["simulator<br/>(Python)"]
    boot["token-bootstrap<br/>(one-shot)"]
    inf["InfluxDB 3<br/>Enterprise<br/>+ plugins"]
    init["influxdb3-init<br/>(DB, caches,<br/>triggers)"]
    ui["UI<br/>(FastAPI+HTMX)"]
    scn["scenarios<br/>(on-demand)"]
  end
  browser([user browser])

  browser -- :8080 --> ui
  browser -- :8181 API --> inf
  boot -. creates admin token .-> inf
  init -- init sequence --> inf
  sim -- line protocol --> inf
  ui -- SQL (read) --> inf
  ui -- /api/pack_health --> inf
  scn -- line protocol --> inf
  inf -. WAL trigger .-> inf
  inf -. Schedule trigger .-> inf
  inf -. Request trigger .-> inf
```

(Source: `diagrams/architecture.mmd` — GitHub renders the block above natively.)

## What's in this repo

| File / directory | Purpose |
|---|---|
| `docker-compose.yml` | Token bootstrap + InfluxDB 3 Enterprise + init + simulator + UI + scenarios. |
| `simulator/` | Python simulator producing pack/cell/inverter telemetry. |
| `simulator/scenarios/` | Curated event injectors: `thermal_runaway`, `cell_drift`. |
| `plugins/` | Three Processing Engine plugins: WAL (`wal_thermal_runaway`), Schedule (`schedule_soh_daily`), Request (`request_pack_health`). |
| `ui/` | FastAPI + HTMX + uPlot control-room dashboard — site-state banner, KPI strip with sparklines, energy-flow schematic, pack cards, charts, pack-tabbed cell heatmap, event log, Request-trigger diagnostics. |
| `influxdb/init.sh` | Idempotent: creates DB, caches, triggers on first boot. |
| `tests/` | Plugin unit tests + smoke test. |
| `ARCHITECTURE.md` | Schema decisions, scaling notes, all the gotchas. |
| `SCENARIOS.md` | Curated scenarios explained. |
| `CLI_EXAMPLES.md` | Curated `influxdb3` CLI commands. |
| `FOR_MAINTAINERS.md` | CI license-volume refresh notes. |
| `scripts/demo.sh` | Scripted end-to-end walk-through (see `make demo`). |

## The fastest path — `make demo`

Runs the whole story as one colored, sectioned terminal walk-through. Opens with a printed overview — the same headline-features and actor breakdown as above — then waits on a keypress so you can talk through the architecture before the stack starts and the browser steals focus. After the keypress: prereq check, email prompt (first run only), stack up, browser opens to the UI, fires the thermal_runaway scenario, queries the alerts the WAL plugin wrote, calls the HTTP Request trigger, reads the Last Value Cache.

```sh
make demo          # reuse an already-validated license volume (~30s after keypress)
make demo-fresh    # wipe state first — prompts for email and waits for click
```

What you see at each step. The [script]/[ui]/[sim]/[db] columns map to the actor table above:

| # | Actors | Step | What happens |
|---|---|---|---|
| 1 | `[script]` | Prereqs | Verifies Docker is running. |
| 2 | `[script]` | Trial license — email | Reuses `.env` if present, otherwise prompts for an email address. |
| 3 | `[script]` → all | Bring up the stack | `docker compose up -d`; 6 services start in dependency order. |
| 4 | `[db]` | License validation | (Fresh only.) Blocks until you click the validation link; waits up to 10 min. |
| 5 | `[script]` → all | Stack initialization | Four spinners: HTTP responding, init complete, simulator writing, UI serving. |
| 6 | `[script]` `[ui]` | Open UI | Auto-opens http://localhost:8080. From here on the browser polls FastAPI in `bess-ui` every 1–5 s, independent of the script. |
| 7 | `[script]` `[sim]` `[db]` | Fire `thermal_runaway` | Runs a one-shot scenario container that writes 80 °C cell readings; the WAL trigger inside `[db]` writes alert rows. |
| 8 | `[script]` `[db]` | Show alerts | `influxdb3 query` — prints the 5 rows the plugin wrote, not the simulator. |
| 9 | `[script]` `[db]` | Call the Request trigger | `curl /api/v3/engine/pack_health` — JSON served from `request_pack_health.py` running inside `[db]`. |
| 10 | `[script]` `[db]` | LVC count | `SELECT COUNT(*) FROM last_cache(...)` — single value, in-memory, ~5 ms. Same cache the UI heatmap reads every 5 s (full 768-row scan, ~5–20 ms). |
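Step 9's curl call works from any HTTP client, since the Request trigger is just an HTTP endpoint served by the database. A hedged Python equivalent (port 8181 is the API port from the architecture diagram; whether this endpoint requires a token isn't stated here, so auth is optional in the sketch):

```python
import urllib.request

def build_pack_health_request(base_url="http://localhost:8181", token=None):
    """Build the GET for the in-database pack_health Request trigger."""
    req = urllib.request.Request(f"{base_url}/api/v3/engine/pack_health")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    return req

if __name__ == "__main__":
    import json
    req = build_pack_health_request()
    with urllib.request.urlopen(req) as resp:  # requires the stack to be up
        print(json.dumps(json.load(resp), indent=2))
```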

Flags:

- `--fresh` — runs `make clean && rm .env` first; forces the full validation flow.
- `--no-browser` — skip auto-opening the browser (useful over SSH / in CI).
- `--no-pause` — skip the intro keypress (auto-detected when stdin is not a TTY).
- `--help` — prints usage.

The script exits non-zero on any failure and dumps the failing container's tail so you can debug.

## Manual quickstart

If you want to drive each step yourself instead of `make demo`:

```sh
make up
# On first run, enter your email when prompted. Click the validation link
# in the email. The stack finishes starting automatically once validated.

# Open http://localhost:8080 — dashboard populates within ~5 seconds.

# Fire a scenario to light up the Processing Engine:
make scenario name=thermal_runaway

# Query the recent alerts the plugin produced:
make query sql="SELECT time, pack_id, value FROM alerts ORDER BY time DESC LIMIT 5"

# Or drop into the CLI shell with TOKEN pre-exported:
make cli
# then: iql 'SELECT COUNT(*) FROM cell_readings'
```

Stop and preserve data: `make down`. Stop and wipe data (re-validate next time): `make clean`.

## Makefile targets

| Target | Purpose |
|---|---|
| `make demo` | Scripted end-to-end walk-through (see "The fastest path" above). |
| `make demo-fresh` | Same as `demo` but wipes state first, forcing re-validation. |
| `make up` | Prompt for license email, write `.env`, start the full stack. |
| `make down` | Stop services, keep data volume. |
| `make clean` | Stop + drop data volume (license must be re-validated). |
| `make logs` | Tail all service logs. |
| `make ps` | Show service status. |
| `make cli` | Shell into the influxdb3 container with `TOKEN` pre-exported and an `iql` alias. |
| `make cli-example name=<foo>` | Run a named curated CLI example from `CLI_EXAMPLES.md`. |
| `make query sql='<SQL>'` | One-shot query with the operator token. |
| `make scenario name=<foo>` | Run a scenario (e.g. `thermal_runaway`). |
| `make scenario-list` | List available scenarios. |
| `make test` / `make test-unit` / `make test-scenarios` | Unit / scenario tests. |
| `make lint` / `make format` | ruff. |

## Requirements

- Docker Desktop or Docker Engine with Compose v2.
- An email address you can click a validation link for (InfluxDB 3 Enterprise trial licenses require one-time email verification).
- ~6 GB free disk for the Docker images and a local data volume.

## How to read the code

- `plugins/` — three short Python files showing the three Processing Engine trigger types.
- `ui/queries.py` — every query that matters for BESS, with docstrings explaining why each is the right query for this domain.
- `simulator/signals.py` — the domain model in code (pack, cell, inverter shapes and realistic signal generators).
- `influxdb/init.sh` — the exact CLI commands for creating a BESS database, caches, and triggers. Copy and adapt.

## Part of

The InfluxDB 3 Reference Architectures portfolio. Other verticals: IIoT, Network Telemetry, Renewables, EV Charging, Fleet Telematics, Data Center, Oil & Gas.

## License

Apache 2.0 — see LICENSE.
