feat(umami-postgres): keploy compat lane sample (smoke-test only) #96
AkashKumar7902 wants to merge 6 commits into main from
Conversation
Mirrors the doccano-django sample shape: the sample owns orchestration (compose / bootstrap / traffic / coverage); keploy CI lanes consume it as a thin wrapper.

This is a SCAFFOLD — the full traffic loop driven by the existing keploy/enterprise lane (`run_api_flow` in .ci/scripts/umami-linux.sh) needs to be ported into flow.sh::umami_record_traffic in a follow-up. The current loop is deliberately minimal (heartbeat / me / teams / websites CRUD), which is enough to prove the sample boots end-to-end without keploy.

Layout:
- Dockerfile — pin to umami:postgresql-v2.18.1
- docker-compose.yml — postgres-15 + umami v2, env-driven
- flow.sh — bootstrap | record-traffic | coverage | list-routes
- keploy.yml.template — globalNoise for createdAt/updatedAt/uuid id
- README.md — handoff + status notes

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
Pull request overview
Adds a new umami-postgres/ sample scaffold intended to be consumed by Keploy CI “compat lane” wrappers, with the sample owning local orchestration (compose/bootstrap/traffic/coverage) and lanes acting as thin wrappers around those entrypoints.
Changes:
- Introduces docker-compose.yml + Dockerfile to boot Umami (postgres image) against a local Postgres 15 container on a fixed, env-overridable subnet.
- Adds flow.sh to bootstrap auth, generate minimal API traffic, and compute route coverage by discovering src/app/api/**/route.ts inside the running container.
- Adds keploy.yml.template noise filters and README.md describing the scaffold contract and current limitations.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| umami-postgres/Dockerfile | Pins the Umami postgres image version for the sample. |
| umami-postgres/docker-compose.yml | Defines the app + Postgres services and network configuration for the sample. |
| umami-postgres/flow.sh | Provides bootstrap, traffic generation, and route/coverage reporting orchestration. |
| umami-postgres/keploy.yml.template | Adds a Keploy config template with global noise filters for non-deterministic fields. |
| umami-postgres/README.md | Documents the sample’s purpose, layout, contract, and local run instructions. |
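The noise filters that keploy.yml.template adds can be sketched as a config fragment. The field names and nesting below are assumptions (not copied from the PR's template), so verify them against keploy's configuration reference before reuse:

```yaml
# Hypothetical keploy.yml.template fragment — schema assumed, not verified.
test:
  globalNoise:
    global:
      body:
        createdAt: []   # server-side timestamps differ per run
        updatedAt: []
        id: []          # uuid primary keys are regenerated on replay
```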
# matches the doccano-django sibling: SKIP_INIT=0 first time so
# umami's `npx umami-app db:up` runs migrations and seeds; volume
# is retained; SKIP_INIT=1 second time launches the app against
The header comment describes a SKIP_INIT=0/1 two-phase boot, but the compose file actually uses UMAMI_SKIP_INIT. This mismatch makes it unclear which env var users should set. Consider updating the comment to match the real variable name (or vice versa) so the "two-phase boot" contract is unambiguous.
Suggested change:
- # matches the doccano-django sibling: SKIP_INIT=0 first time so
- # umami's `npx umami-app db:up` runs migrations and seeds; volume
- # is retained; SKIP_INIT=1 second time launches the app against
+ # matches the doccano-django sibling: UMAMI_SKIP_INIT=0 first time so
+ # umami's `npx umami-app db:up` runs migrations and seeds; volume
+ # is retained; UMAMI_SKIP_INIT=1 second time launches the app against
# keploy/enterprise.
#
# Upstream: https://github.com/umami-software/umami
# Image: docker.io/umamisoftware/umami:postgresql-v2.18.1
The Dockerfile comment says the pinned upstream image is docker.io/umamisoftware/umami:postgresql-v2.18.1, but the FROM line uses ghcr.io/umami-software/umami:postgresql-v2.18.1. Please align the comment with the actual registry to avoid confusion when updating the pin.
Suggested change:
- # Image: docker.io/umamisoftware/umami:postgresql-v2.18.1
+ # Image: ghcr.io/umami-software/umami:postgresql-v2.18.1
log_fired GET "$base/api/heartbeat"
curl -sS "$base/api/heartbeat" >/dev/null || true

log_fired GET "$base/api/me"
curl -sS -H "$h_auth" "$base/api/me" >/dev/null || true

log_fired GET "$base/api/teams"
curl -sS -H "$h_auth" "$base/api/teams" >/dev/null || true

log_fired GET "$base/api/websites"
curl -sS -H "$h_auth" "$base/api/websites" >/dev/null || true
record-traffic currently swallows request failures (curl ... || true), so the command can exit 0 even when the API is down / returning 401s. That makes the scaffold look healthy while not actually exercising the surface (and also logs routes as fired even if the request failed). Consider using curl -f (or checking status codes) and letting the script fail on the first unexpected response; only append to UMAMI_FIRED_ROUTES_FILE after a successful call.
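One way to act on this suggestion is a small helper that checks the status code and only logs the route after a successful call. A sketch under the assumption that `log_fired`, `$h_auth`, and UMAMI_FIRED_ROUTES_FILE come from flow.sh (the `fire` name is hypothetical):

```shell
# Sketch only: fail fast on non-2xx instead of swallowing errors with `|| true`.
is_2xx() {
  # true iff the HTTP status code is in [200, 300)
  [ "$1" -ge 200 ] 2>/dev/null && [ "$1" -lt 300 ]
}

fire() {
  # fire METHOD URL [extra curl args...]; log the route only on success
  local method=$1 url=$2 code
  shift 2
  code=$(curl -sS -o /dev/null -w '%{http_code}' -X "$method" -H "$h_auth" "$@" "$url")
  if ! is_2xx "$code"; then
    echo "fire: ${method} ${url} returned HTTP ${code}" >&2
    return 1
  fi
  log_fired "$method" "$url"
}
```

With `set -e` (or an explicit `|| return 1` at each call site), the first unexpected response then aborts record-traffic instead of reporting a phantom-healthy run.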
local website_resp website_id
log_fired POST "$base/api/websites"
website_resp=$(curl -fsS -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \
  -d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}" 2>/dev/null || echo "")
website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true)
if [ -n "$website_id" ]; then
  log_fired GET "$base/api/websites/${website_id}"
  curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true
  log_fired GET "$base/api/websites/${website_id}/stats"
  curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true
fi
The website create call is wrapped with || echo "", which hides HTTP failures from curl -f and then proceeds with an empty response. This can silently skip the rest of the traffic and still exit 0. Prefer failing hard on a non-2xx response (or explicitly handling expected conflicts like “already exists” by checking the status code and response body).
Suggested change:
- local website_resp website_id
- log_fired POST "$base/api/websites"
- website_resp=$(curl -fsS -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \
-   -d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}" 2>/dev/null || echo "")
- website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true)
- if [ -n "$website_id" ]; then
-   log_fired GET "$base/api/websites/${website_id}"
-   curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true
-   log_fired GET "$base/api/websites/${website_id}/stats"
-   curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true
- fi
+ local website_resp website_id website_status website_resp_file
+ log_fired POST "$base/api/websites"
+ website_resp_file=$(mktemp)
+ website_status=$(curl -sS -o "$website_resp_file" -w "%{http_code}" -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \
+   -d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}")
+ website_resp=$(cat "$website_resp_file")
+ rm -f "$website_resp_file"
+ if [ "$website_status" -lt 200 ] || [ "$website_status" -ge 300 ]; then
+   echo "umami_record_traffic: website creation returned HTTP ${website_status}; verify the Umami app is healthy and the admin token is valid, then retry \`flow.sh bootstrap\` or rerun this flow" >&2
+   return 1
+ fi
+ website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true)
+ if [ -z "$website_id" ]; then
+   echo "umami_record_traffic: website creation succeeded but no website id was returned; inspect the /api/websites response format and retry the flow" >&2
+   return 1
+ fi
+ log_fired GET "$base/api/websites/${website_id}"
+ curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true
+ log_fired GET "$base/api/websites/${website_id}/stats"
+ curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true
echo "WARNING: umami_list_routes produced no rows; skipping coverage report" >&2
rm -f "$routes_file" "$recorded_file"; return 0
This WARNING: message + return 0 causes coverage to succeed even when route discovery failed (e.g., wrong container name, docker exec failure, or path changes in the image). Since this is used in CI gating, consider treating “no routes discovered” as an error with an actionable next step (e.g., verify the container is running and UMAMI_APP_CONTAINER matches), and exit non-zero instead of emitting a warning.
Suggested change:
- echo "WARNING: umami_list_routes produced no rows; skipping coverage report" >&2
- rm -f "$routes_file" "$recorded_file"; return 0
+ echo "ERROR: umami_list_routes produced no rows. Verify the app container is running, confirm UMAMI_APP_CONTAINER='${UMAMI_APP_CONTAINER}' matches the actual container name, and check whether the route discovery path inside the image has changed." >&2
+ rm -f "$routes_file" "$recorded_file"; return 1
UMAMI_ADMIN_USER="${UMAMI_ADMIN_USER:-admin}"
UMAMI_ADMIN_PASSWORD="${UMAMI_ADMIN_PASSWORD:-umami}"
UMAMI_FIXED_TOKEN="${UMAMI_FIXED_TOKEN:-}" # populated by bootstrap; lane scripts may pre-seed
UMAMI_PHASE="${UMAMI_PHASE:-local}"
UMAMI_FIXED_TOKEN is declared but never used. Either wire it into bootstrap/record-traffic (so lane scripts can pre-seed a deterministic token) or remove it to avoid confusion; right now the header comment implies deterministic auth behavior that isn’t implemented in the script.
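If the variable is kept, one minimal way to wire it in might look like this. The function name is hypothetical and `umami_bootstrap` is assumed to be flow.sh's existing login path:

```shell
# Sketch: honor a pre-seeded deterministic token and skip the login round-trip.
umami_bootstrap_token() {
  if [ -n "${UMAMI_FIXED_TOKEN:-}" ]; then
    # lane scripts pre-seeded a token; persist it where record-traffic looks
    printf '%s' "$UMAMI_FIXED_TOKEN" > "/tmp/umami-token-${UMAMI_PHASE:-local}"
    return 0
  fi
  umami_bootstrap "$@"  # fall back to the real /api/auth/login flow
}
```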
code=$(curl -sS -o /dev/null -w '%{http_code}' "${base}/api/heartbeat" 2>/dev/null || echo "")
if [ "$code" = "200" ]; then return 0; fi
if [ $(( $(date +%s) - start_ts )) -ge "$timeout" ]; then
  echo "umami_wait_for_app: timed out (last code: ${code:-<empty>})" >&2
When umami_wait_for_app times out, the error message doesn’t provide a concrete next step to diagnose the failure. Consider including hints like checking docker compose ps, docker logs $UMAMI_APP_CONTAINER, or verifying that UMAMI_APP_PORT matches the compose port mapping to make CI failures easier to debug.
Suggested change:
- echo "umami_wait_for_app: timed out (last code: ${code:-<empty>})" >&2
+ echo "umami_wait_for_app: timed out waiting for ${base}/api/heartbeat (last code: ${code:-<empty>}). Next steps: run 'docker compose ps' to confirm services are up, inspect app logs with 'docker logs ${UMAMI_APP_CONTAINER}', and verify UMAMI_APP_PORT=${UMAMI_APP_PORT} matches the compose port mapping." >&2
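The polling loop can also be factored into a generic deadline helper, which keeps rich timeout messages in one place. A sketch, not flow.sh's current shape:

```shell
# Sketch: retry a command until it succeeds or the deadline passes.
wait_until() {
  # usage: wait_until TIMEOUT_SECS INTERVAL_SECS cmd [args...]
  local timeout=$1 interval=$2 start
  shift 2
  start=$(date +%s)
  while ! "$@"; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "wait_until: '$*' did not succeed within ${timeout}s" >&2
      return 1
    fi
    sleep "$interval"
  done
}
```

The heartbeat wait would then read roughly `wait_until "$timeout" 2 curl -fsS -o /dev/null "${base}/api/heartbeat"`.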
local resp code
resp=$(curl -sS -o /tmp/umami-login.json -w '%{http_code}' \
  -H "$h_json" -X POST "${base}/api/auth/login" \
  -d "{\"username\":\"${UMAMI_ADMIN_USER}\",\"password\":\"${UMAMI_ADMIN_PASSWORD}\"}" 2>/dev/null || echo "")
if [ "$resp" != "200" ]; then
  echo "umami_bootstrap: login failed (code ${resp:-empty})" >&2
  cat /tmp/umami-login.json >&2 || true
  return 1
fi
local token
token=$(jq -r '.token' /tmp/umami-login.json 2>/dev/null)
if [ -z "$token" ] || [ "$token" = "null" ]; then
  echo "umami_bootstrap: no token in login response" >&2
  return 1
fi
printf '%s' "$token" > "/tmp/umami-token-${UMAMI_PHASE}"
/tmp/umami-login.json is a fixed path. If curl fails before writing the file, cat may print stale output from a previous run, which can mislead debugging. Consider using a temp file (e.g., mktemp) and cleaning it up, or truncating the file before the request so failures don’t surface old content.
Suggested change:
- local resp code
- resp=$(curl -sS -o /tmp/umami-login.json -w '%{http_code}' \
-   -H "$h_json" -X POST "${base}/api/auth/login" \
-   -d "{\"username\":\"${UMAMI_ADMIN_USER}\",\"password\":\"${UMAMI_ADMIN_PASSWORD}\"}" 2>/dev/null || echo "")
- if [ "$resp" != "200" ]; then
-   echo "umami_bootstrap: login failed (code ${resp:-empty})" >&2
-   cat /tmp/umami-login.json >&2 || true
-   return 1
- fi
- local token
- token=$(jq -r '.token' /tmp/umami-login.json 2>/dev/null)
- if [ -z "$token" ] || [ "$token" = "null" ]; then
-   echo "umami_bootstrap: no token in login response" >&2
-   return 1
- fi
- printf '%s' "$token" > "/tmp/umami-token-${UMAMI_PHASE}"
+ local resp code login_resp_file
+ login_resp_file=$(mktemp /tmp/umami-login.XXXXXX.json)
+ resp=$(curl -sS -o "$login_resp_file" -w '%{http_code}' \
+   -H "$h_json" -X POST "${base}/api/auth/login" \
+   -d "{\"username\":\"${UMAMI_ADMIN_USER}\",\"password\":\"${UMAMI_ADMIN_PASSWORD}\"}" 2>/dev/null || echo "")
+ if [ "$resp" != "200" ]; then
+   echo "umami_bootstrap: login failed (code ${resp:-empty}); verify the app is reachable and the admin credentials are correct, then retry." >&2
+   cat "$login_resp_file" >&2 || true
+   rm -f "$login_resp_file"
+   return 1
+ fi
+ local token
+ token=$(jq -r '.token' "$login_resp_file" 2>/dev/null)
+ if [ -z "$token" ] || [ "$token" = "null" ]; then
+   echo "umami_bootstrap: no token in login response; inspect the login API response and confirm the expected token field is present, then retry." >&2
+   rm -f "$login_resp_file"
+   return 1
+ fi
+ printf '%s' "$token" > "/tmp/umami-token-${UMAMI_PHASE}"
+ rm -f "$login_resp_file"
Replace the bootstrap-only stub in flow.sh::umami_record_traffic with the
complete umami v2 API drive that the keploy compat lanes need to gate
against on a record/replay round-trip. The sample now owns the entire
traffic loop end-to-end; consuming lanes wrap `bootstrap | record-traffic
| coverage` inside `keploy record` / `keploy test` and add no curls of
their own.
Surfaces driven by record-traffic:
* auth: /api/auth/login (via bootstrap), /api/auth/verify, /api/auth/logout
* identity: /api/me, /api/me/teams, /api/me/websites
* admin: /api/admin/users, /api/admin/websites, /api/admin/teams (incl.
paged + search variants)
* users CRUD: POST /api/users, GET /api/users/{id}, POST /api/users/{id}
(update), GET /api/users/{id}/websites, GET /api/users/{id}/teams
* websites CRUD: POST /api/websites, GET /api/websites (paged), GET
/api/websites/{id}, POST /api/websites/{id} (update), GET
/api/websites/{id}/active, GET /api/websites/{id}/daterange,
POST /api/websites/{id}/reset
* events ingest: POST /api/send (event + identify variants), POST /api/batch
* sessions deep-dive: GET /api/websites/{id}/sessions[, /stats, /weekly,
/{sessionId}, /{sessionId}/activity, /{sessionId}/properties,
/{sessionId}/replays], GET /api/websites/{id}/replays, GET
/api/websites/{id}/session-data/properties
* analytics: stats, pageviews (multiple unit/timezone variants), events
(series/stats), event-data[/stats], values, realtime, metrics (path /
referrer / browser / os / device / country / event + search/limit
variants), metrics/expanded
* reports: every type umami v2 ships — breakdown, goal, funnel, journey,
retention, utm, attribution, performance — plus saved-report CRUD
(create, read, update, delete) and the listing endpoints
* teams CRUD lifecycle: POST/GET/POST(update)/DELETE on /api/teams/{id},
member attach/list/detach via /api/teams/{id}/users[/{userId}]
* share tokens: POST /api/websites/{id}/shares + GET /api/share/{shareId}
(unauthenticated public-share access)
* boards: full CRUD + /api/boards/{id}/shares
* pixel tracker: GET /api/pixels
* heartbeat 405 path: POST /api/heartbeat
Total: 78 distinct (method, path) tuples fired per record-traffic run.
Resource ids/names are fixed UUIDs / deterministic strings so request
bodies stay byte-stable across record/replay (keeps keploy's body
equality check passing without per-field globalNoise entries). Each
call goes through a small umami_http() helper that logs the (method,
url) tuple to UMAMI_FIRED_ROUTES_FILE and tolerates non-2xx (|| true)
so a single endpoint regression in umami itself does not abort the
whole record run — keploy is the assertion layer at replay.
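The helper described above could look roughly like this — a sketch consistent with the commit message, not the actual flow.sh source; `$h_auth` and UMAMI_FIRED_ROUTES_FILE are assumed to be set by the script:

```shell
# Sketch: log the (method, url) tuple first, then tolerate a failing endpoint
# so a single umami regression does not abort the whole record run.
umami_http() {
  local method=$1 url=$2
  shift 2
  printf '%s %s\n' "$method" "$url" >> "$UMAMI_FIRED_ROUTES_FILE"
  curl -sS -X "$method" -H "$h_auth" "$@" "$url" >/dev/null 2>&1 || true
}
```

Note the trade-off relative to the earlier review feedback: here the route is logged unconditionally, because keploy's replay assertions (not this script) are the correctness gate.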
Also strips the SCAFFOLD/handoff/follow-up language from flow.sh and
README.md: the sample is now the complete reproducer, no out-of-tree
porting remains.
Signed-off-by: Akash Kumar <meakash7902@gmail.com>
Adds a GitHub Actions workflow scoped via paths: filter to
umami-postgres/** so it triggers ONLY on PRs and main-branch
pushes that touch the umami-postgres sample (or the workflow
file itself). Other samples in this repo keep their orthogonal
CI; gating the whole repo on every umami change would slow them
all down for no benefit.
Three jobs:
* build-coverage — runs the sample end-to-end against the
PR's HEAD ref via flow.sh bootstrap +
record-traffic, captures the route-
coverage percentage from flow.sh
coverage.
* release-coverage — same end-to-end against the PR's base
ref. Has a first-PR bootstrap escape
hatch (sample-existed=false → coverage=0)
so the introducing PR doesn't fail for
lack of a baseline.
* coverage-gate — fails the PR if build-coverage drops
more than COVERAGE_THRESHOLD percentage
points below release-coverage. Default
1.0pp; overridable via the
UMAMI_COVERAGE_THRESHOLD repo variable.
Sticky PR comment summarises the diff.
The gate runs ONLY here, on the sample repo. The enterprise PR
pipeline (.woodpecker/umami-linux.yml) calls flow.sh coverage
informationally with || true and does NOT gate on coverage —
that separation keeps the enterprise lane decoupled from sample-
level coverage drift.
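The gate's core comparison is just a floating-point delta check. A hedged sketch of that logic (the function name is hypothetical; COVERAGE_THRESHOLD semantics as described above):

```shell
# Sketch: fail when build coverage drops more than THRESHOLD percentage
# points below release coverage (default threshold 1.0pp).
coverage_gate() {
  local build=$1 release=$2 threshold=${3:-1.0}
  awk -v b="$build" -v r="$release" -v t="$threshold" \
    'BEGIN { exit (r - b > t) ? 1 : 0 }'
}
```

For example, `coverage_gate 88.4 89.0 1.0` passes (a 0.6pp drop), while `coverage_gate 87.5 89.0 1.0` fails.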
Helper script .github/workflows/scripts/run-and-measure.sh is
the keploy-independent measurement shared by both build- and
release-coverage jobs: two-phase compose boot
(UMAMI_SKIP_INIT=0 then =1) matching the lane scripts, then
flow.sh bootstrap + record-traffic + coverage with
UMAMI_FIRED_ROUTES_FILE wired in as the standalone numerator.
Signed-off-by: Akash Kumar <meakash7902@gmail.com>
The upstream umami image (ghcr.io/umami-software/umami:postgresql-v2.18.1)
ships a compiled Next.js build, not the TypeScript source. The
prior implementation grepped src/app/api/**/route.ts inside the
container, which doesn't exist there, so umami_list_routes returned
zero rows and umami_report_coverage skipped with
"WARNING: ...skipping coverage report".
The route surface is fully derivable from the build artefacts:
/app/.next/app-path-routes-manifest.json → URL paths
/app/.next/server/app<url>/route.js → compiled handlers with method exports ({GET:...,POST:...})
Verified end-to-end against the running container: list-routes
now emits 93 (method, path) rows; coverage gate has a real
denominator.
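The manifest half of that derivation can be sketched with jq. The manifest shape (app-dir key mapped to URL value, route handlers keyed with a "/route" suffix) is an assumption inferred from this commit message, so verify it against a real image:

```shell
# Sketch: read a Next.js app-path-routes-manifest.json on stdin and emit the
# URL paths of route handlers (keys ending in "/route"), skipping pages.
list_api_routes() {
  jq -r 'to_entries[] | select(.key | endswith("/route")) | .value'
}
```

In context this might be fed via `docker exec "$UMAMI_APP_CONTAINER" cat /app/.next/app-path-routes-manifest.json | list_api_routes`, with the per-method grep over the compiled route.js files supplying the method half of each (method, path) row.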
Signed-off-by: Akash Kumar <meakash7902@gmail.com>
umami-postgres sample coverage
Threshold: PR may not drop coverage by more than 1.0pp; override per-repo via the repo variable. Coverage measures the umami v2 API surface.
…d+minified
The upstream `ghcr.io/umami-software/umami:postgresql-v2.18.1`
image ships a heavily minified Next.js standalone build under
/app/.next/server/app/api/**/route.js. The source tree
(/app/src) and sourcemaps (.map) are stripped from the image.
V8 / c8 line coverage on minified code is structurally
meaningless — each "line" of the compiled output is many source
statements concatenated by the bundler, so a coverage
percentage doesn't map back to anything a reviewer can act on.
Rather than ship a misleading metric (the prior route-surface
"coverage" we removed elsewhere was exactly this kind of
proxy), the umami sample is now smoke-test-only:
- `flow.sh bootstrap` signs in as admin, persists the JWT
- `flow.sh record-traffic` exercises the v2 API surface
- `flow.sh coverage` is a no-op that prints an info message
and exits 0 (so consumers' `flow.sh
coverage || true` calls keep working)
The keploy/enterprise compat lane already uses the resulting
record/replay assertions as its correctness gate — that IS the
meaningful test here, not source coverage of umami's frontend.
If real source-line coverage becomes a hard requirement for
this sample, the path is to rebuild umami from source inside a
Dockerfile.coverage overlay (~5-10 min npm install + next build
without minification + with sourcemaps). That's a separate
~hours-of-work change.
Removed:
- .github/workflows/umami-postgres.yml (coverage gate workflow)
- .github/workflows/scripts/run-and-measure.sh (its helper)
- umami_list_routes / umami_list_recorded_routes / the
legacy route-surface umami_report_coverage in flow.sh.
- list-routes subcommand.
Replaced umami_report_coverage with a no-op stub.
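The stub's contract (print an info line, exit 0 so consumers' `|| true` calls keep working) is small enough to sketch; the exact message wording here is an assumption:

```shell
# Sketch of the no-op coverage stub; message text is assumed, not verbatim.
umami_report_coverage() {
  echo "INFO: umami-postgres is smoke-test-only; no source coverage is produced (minified upstream image)" >&2
  return 0
}
```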
Signed-off-by: Akash Kumar <meakash7902@gmail.com>
…led image) Signed-off-by: Akash Kumar <meakash7902@gmail.com>
Summary
Adds a new umami-postgres/ sample that owns end-to-end orchestration (compose / admin bootstrap / traffic / noise filter) for the umami v2 + postgres compat lane. The keploy/enterprise CI lane consumes it as a thin wrapper.

The sample drives the full umami v2 API surface keploy needs to gate on a record/replay round-trip — auth + me + admin lists, users CRUD, websites CRUD, all eight report types, share tokens + public share access, batch + identify event ingest, sessions deep-dive, replays, boards lifecycle, pixel tracker, metric/pageview parser-branch variants, and logout. 78 distinct (method, path) tuples in umami_record_traffic.

Layout
Coverage status
This sample does not ship a coverage gate, intentionally.
The upstream ghcr.io/umami-software/umami:postgresql-v2.18.1 image ships a compiled + minified Next.js standalone build with no source tree (/app/src) or sourcemaps. V8 / c8 line coverage on minified output doesn't map back to anything a reviewer can act on (one minified line = many source statements concatenated by the bundler), so a coverage gate would be misleading. flow.sh coverage is a no-op stub that prints an INFO message and exits 0 — so consumers' flow.sh coverage || true calls keep working.

If real source-line coverage becomes a hard requirement for this sample, the path is to rebuild umami from its own source (npm install + next build without minification, with sourcemaps) inside a Dockerfile.coverage overlay — a separate, larger change (~5-10 min added to CI per cell).

The keploy/enterprise compat lane uses the resulting record/replay assertions as its correctness gate — that IS the meaningful test of keploy here, not source coverage of umami's frontend.
Run modes
- docker compose up -d && bash flow.sh bootstrap 240 && bash flow.sh record-traffic — exactly what the keploy enterprise lane wraps.
- docker compose up in keploy record / keploy test.

See README for full commands.
Consumers
keploy/enterprise .woodpecker/umami-linux.yml — three-cell record/replay matrix that delegates compose + bootstrap + traffic to this sample.

Test plan
- docker compose up -d boots postgres + umami cleanly
- flow.sh bootstrap 240 returns admin token within 240s
- flow.sh record-traffic fires all 78 (method, path) tuples
- flow.sh coverage exits 0 cleanly with the no-coverage INFO message