This repository contains the TypeScript benchmark harnesses we use to exercise EventDBX alongside the databases it most often competes with or interoperates with (PostgreSQL, MongoDB, Microsoft SQL Server, and the EventDBX control API). Every suite is designed to be an apples-to-apples comparison:
- All containers run locally on the same Docker bridge network.
- Each service is started with identical CPU and memory limits (see `src/docker-compose.yml`).
- Dataset sizes are identical across backends at each benchmark tier.
- The Tinybench harness, operation labels, and success validation are shared verbatim between implementations.
The specs themselves seed synthetic data, execute a consistent set of CRUD-style operations with Tinybench, and print summarised throughput/latency statistics.
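Conceptually, each suite follows the same measure-and-summarise loop. Below is a minimal, self-contained sketch of that pattern — it is *not* the repo's actual Tinybench wiring; the `measure` helper and its shape are illustrative only:

```typescript
import { performance } from 'node:perf_hooks';

// Illustrative stand-in for the shared harness: run an async operation
// repeatedly and summarise throughput and mean latency.
async function measure(
  label: string,
  op: () => Promise<void>,
  iterations = 50,
): Promise<{ label: string; opsPerSec: number; meanMs: number }> {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    await op(); // success validation would happen inside `op`
  }
  const meanMs = (performance.now() - start) / iterations;
  return { label, opsPerSec: 1000 / meanMs, meanMs };
}

// Identical labels and operations are reused across every backend.
const summary = await measure('events:list', async () => {
  /* e.g. await pool.query('SELECT ... LIMIT 100') */
});
console.log(`${summary.label}: ${summary.opsPerSec.toFixed(1)} ops/s`);
```

In the real suites, Tinybench owns the timing loop and statistics; the point here is only that labels and validation are shared so results stay comparable.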
All benchmark sources live in `src/__tests__`. They must be transpiled to `dist/__tests__` before AVA consumes them; the provided pnpm scripts wrap this flow so you rarely have to invoke `tsc` directly.
- Node.js 18.18+ (the project uses ES modules and top-level `await`).
- Local or remote database instances that match the connection details in your environment (see below). A `docker-compose` file is available under `src/docker-compose.yml` for quick local provisioning.
```sh
pnpm install   # install dependencies
pnpm test      # build the TypeScript sources and run all benchmarks
```

Running `pnpm test` performs a TypeScript build (`pnpm run build` → `tsc --project tsconfig.json`) and then invokes AVA against `dist/__tests__/**/*.spec.js`. Each suite attempts to connect to its backend and will log a skip message (rather than fail) if the service is unreachable or the optional driver is not installed.
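The skip-instead-of-fail behaviour boils down to a try/connect guard. A hedged sketch of the idea — the helper name is hypothetical, not the repo's exact code:

```typescript
// Hypothetical helper mirroring the suites' behaviour: try to connect,
// and return null (so the suite can log a skip) instead of throwing.
async function connectOrSkip<T>(
  label: string,
  connect: () => Promise<T>,
): Promise<T | null> {
  try {
    return await connect();
  } catch (err) {
    console.warn(`[bench] skipping ${label}: ${(err as Error).message}`);
    return null;
  }
}

// Usage: a suite that gets `null` back registers a skip and returns early.
const client = await connectOrSkip('postgres', async () => {
  throw new Error('connection refused'); // stands in for a real driver connect
});
if (client === null) {
  // log the skip and bail out of the suite
}
```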
| Spec | Backend | Highlights |
|---|---|---|
| `bench-eventdbx.spec.ts` | EventDBX control API | Exercises the `eventdbxjs` client with a fixed auth token. |
| `bench-postgres.spec.ts` | PostgreSQL | Seeds zero-padded string aggregate IDs so ordering works without casts. |
| `bench-mongodb.spec.ts` | MongoDB | Bulk inserts synthetic documents and benchmarks common event-store operations. |
| `bench-mssql.spec.ts` | Microsoft SQL Server | Detects `ETIMEOUT` responses and treats them as skips rather than hard failures. |
| `bench-shared.ts` | Shared utilities | Dataset definitions, optional module loader, formatting helpers, etc. |
Default dataset sizes are defined in `bench-shared.ts` as `[1_000, 10_000, 100_000, 1_000_000]`.
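Since `BENCH_DATASET_SIZES` accepts comma- or space-separated values, parsing reduces to a split-and-filter with a fallback to the defaults. A sketch of one way to do it (the function name and fallback behaviour are assumptions, not the repo's exact code):

```typescript
const DEFAULT_SIZES = [1_000, 10_000, 100_000, 1_000_000];

// Illustrative parser for BENCH_DATASET_SIZES: accepts comma- or
// space-separated positive integers, else falls back to the defaults.
function parseDatasetSizes(raw: string | undefined): number[] {
  if (!raw || raw.trim() === '') return DEFAULT_SIZES;
  const sizes = raw
    .split(/[\s,]+/)
    .filter((s) => s !== '')
    .map(Number)
    .filter((n) => Number.isInteger(n) && n > 0);
  return sizes.length > 0 ? sizes : DEFAULT_SIZES;
}

console.log(parseDatasetSizes('1,10000,100000')); // [ 1, 10000, 100000 ]
```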
You can limit AVA to individual suites (or specific benchmark names) by calling it directly or by forwarding arguments through the pnpm script:
```sh
# Build once, then run only the PostgreSQL benchmarks
pnpm run build
pnpm exec ava "dist/__tests__/bench-postgres.spec.js"

# Same suite, but constrain to operations whose labels contain "apply"
pnpm exec ava "dist/__tests__/bench-postgres.spec.js" --match="*apply*"
```

While iterating, you can also use AVA’s watch mode (`pnpm exec ava --watch ...`) in combination with `pnpm exec tsc --watch` if you prefer continuous feedback.
Benchmarks execute a shared set of read and write operations by default. You can toggle the workload with the `BENCH_RUN_MODE` environment variable (alias: `BENCH_MODE`). Set it inline, or add it to your project’s `.env` file (the harness loads `.env` automatically):
```sh
# Only measure write-heavy tasks (apply/create/archive/restore/patch)
BENCH_RUN_MODE=write-only pnpm test

# Focus on read paths (list/get/select/events)
BENCH_RUN_MODE=read-only pnpm exec ava "dist/__tests__/bench-mongodb.spec.js"
```

Accepted values are `all` (default), `read` / `read-only`, and `write` / `write-only`. When a mode excludes an operation, the suite logs a skip message instead of running the task, so summaries only include the operations that match the selected workload.
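The mode resolution and filtering can be sketched as follows. The helper names are hypothetical, but the accepted values and the read-path labels mirror the ones documented above:

```typescript
type RunMode = 'all' | 'read' | 'write';

// Resolve BENCH_RUN_MODE (alias BENCH_MODE), defaulting to 'all'.
function resolveRunMode(env: Record<string, string | undefined>): RunMode {
  const raw = (env.BENCH_RUN_MODE ?? env.BENCH_MODE ?? 'all').toLowerCase();
  if (raw === 'read' || raw === 'read-only') return 'read';
  if (raw === 'write' || raw === 'write-only') return 'write';
  return 'all';
}

// Read-path operation labels, per the comments in the examples above.
const READ_OPS = new Set(['list', 'get', 'select', 'events']);

// Decide whether an operation belongs to the selected workload.
function shouldRun(operation: string, mode: RunMode): boolean {
  if (mode === 'all') return true;
  return mode === 'read' ? READ_OPS.has(operation) : !READ_OPS.has(operation);
}

console.log(shouldRun('apply', resolveRunMode({ BENCH_RUN_MODE: 'write-only' }))); // true
```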
EventDBX automatically generates a token on first start; you can retrieve it by reading the `cli.token` file:

```sh
docker compose -f src/docker-compose.yml exec eventdbx sh -c 'cat /var/lib/eventdbx/.eventdbx/cli.token'
```

`bench-shared.ts` loads environment variables once via `dotenv/config`. A sample `.env` lives at the project root and already mirrors the defaults shown below; feel free to copy or modify it for your environment.
| Variable | Default value | Description |
|---|---|---|
| `EVENTDBX_MSSQL_CONN` | `Server=localhost;User Id=SA;Password=<Password>;TrustServerCertificate=True` | Connection string for MSSQL. |
| `EVENTDBX_MONGO_URI` | `mongodb://localhost:27017` | MongoDB connection URI. |
| `EVENTDBX_MONGO_DB` | `bench` | Mongo database name. |
| `EVENTDBX_MONGO_COLLECTION` | `events` | Events collection name. |
| `EVENTDBX_MONGO_AGGREGATE_COLLECTION` | `events_aggregates` | Aggregates collection name. |
| `EVENTDBX_PG_DSN` | `postgresql://bench:bench@localhost:5432/bench` | PostgreSQL DSN consumed by `pg.Pool`. |
| `EVENTDBX_TEST_IP` | `127.0.0.1` | EventDBX control client host. |
| `EVENTDBX_TEST_PORT` | `6363` | EventDBX control client port. |
| `EVENTDBX_TEST_TOKEN` | static JWT in repo | Authentication token for the control client. |
| `BENCH_RUN_MODE` / `BENCH_MODE` | `all` | Operation mix (`all`, `read`, `write`). |
| `BENCH_OPERATION_LIMIT` | `100` | Shared page/event window size. |
| `BENCH_DATASET_SIZES` | `1,10000,100000` | Comma/space-separated dataset sizes. |
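Reading these variables with their defaults reduces to a small accessor. A hedged sketch — the helper is hypothetical (`bench-shared.ts` may structure this differently), and `dotenv/config` is assumed to have already populated `process.env`:

```typescript
// Hypothetical accessor: prefer a non-blank environment value, else the default.
function envOr(
  env: Record<string, string | undefined>,
  name: string,
  fallback: string,
): string {
  const value = env[name];
  return value !== undefined && value.trim() !== '' ? value : fallback;
}

// Usage mirroring the table above.
const pgDsn = envOr(process.env, 'EVENTDBX_PG_DSN', 'postgresql://bench:bench@localhost:5432/bench');
const mongoUri = envOr(process.env, 'EVENTDBX_MONGO_URI', 'mongodb://localhost:27017');
```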
To spin up local dependencies quickly, use the provided compose file (passwords match the defaults above):

```sh
docker compose -f src/docker-compose.yml up -d postgres mongodb mssql
```

Remember to tear the stack down afterwards (`docker compose -f src/docker-compose.yml down`).
- Timeouts when talking to MSSQL: the harness treats `ETIMEOUT` responses as skips and logs the failure. Double-check that port `1433` is reachable and the SA password matches `EVENTDBX_MSSQL_CONN` if you expect the benchmark to run.
- `Cannot find module 'pg'` (or similar): optional drivers are loaded at runtime. Install the relevant package (`npm install pg`, `npm install mssql`, etc.) or unset the corresponding environment variables to skip that backend.
- AVA/npm operation timeouts: if long-running suites exceed the default timeout, re-run with a larger budget (for example `AVA_TIMEOUT=10m pnpm test --timeout=10m`) so AVA’s watchdog doesn’t abort the process mid-run.
- Postgres warnings about casts: aggregate IDs are seeded as zero-padded strings and queries order lexicographically. If you still see warnings, ensure you are running the updated suite (no explicit casts remain).
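The "Cannot find module" behaviour comes from loading drivers optionally at runtime. A sketch of one way to implement that — the loader in `bench-shared.ts` may differ, and the function name is illustrative:

```typescript
// Hypothetical optional loader: resolve a driver if it is installed,
// otherwise return null so the corresponding suite can skip itself.
async function loadOptional<T>(specifier: string): Promise<T | null> {
  try {
    return (await import(specifier)) as T;
  } catch (err) {
    const code = (err as NodeJS.ErrnoException).code ?? '';
    // Covers both the CJS (MODULE_NOT_FOUND) and ESM (ERR_MODULE_NOT_FOUND) codes.
    if (code.includes('MODULE_NOT_FOUND')) return null;
    throw err; // a genuine driver error should still surface
  }
}

// Usage: a missing driver downgrades the suite to a logged skip.
const pg = await loadOptional('pg');
if (pg === null) {
  console.warn('[bench] pg driver not installed; PostgreSQL suite will skip');
}
```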
The benchmark specs delete existing synthetic data before seeding, but long-running sessions can still leave behind large datasets. Clean up per-backend data directories as needed to reclaim space.