A Cloudflare-native countdown scheduler for the CL Motorsport team.
- UI: React 19 + TanStack Router + TypeScript + Tailwind CSS, built with Vite 7 and deployed through Cloudflare Pages. Package management via Bun 1.2+, targeting Node.js 22 for Cloudflare runtime parity.
- API: Cloudflare Worker (modules syntax) exposing RESTful endpoints for session CRUD.
- State coordinator: single Cloudflare Durable Object (`CountdownDurableObject`) responsible for managing all `CountdownSession`s with in-memory caching and D1-backed persistence.
- Persistence: Cloudflare D1 stores snapshots (`countdown_state`) and an append-only event log (`events`) for durability and cold-start recovery.
- `web/` – Vite + React + Tailwind frontend that visualizes countdown data and connects to the Worker API.
- `worker/` – Cloudflare Worker with a `CountdownDurableObject`, REST surface (`/api/sessions/...`), and D1 bindings for snapshots + audit events.
- User creates/edits countdown sessions via the React UI.
- UI calls the Worker HTTP API. Worker routes to the Durable Object.
- Durable Object validates mutations, updates its in-memory `sessions` list, syncs changes to D1 (a snapshot plus an append-only `events` row), and notifies any connected clients.
- Clients maintain a WebSocket/SSE subscription; the DO emits authoritative timestamps roughly every second so all tabs show the same remaining time.
- DO alarms wake up at exact session boundaries to flip statuses (`scheduled → running → complete`) and immediately start the next session (the "plan ahead" chain).
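Because the end time is always `start + duration`, a session's status can be derived deterministically from the clock alone. The sketch below (illustrative, not the actual DO code) shows that derivation plus how the next alarm boundary would be chosen:

```typescript
// Sketch: derive a session's status purely from start time + duration,
// so every client computes the same answer from an authoritative timestamp.
type Status = "scheduled" | "running" | "complete";

function deriveStatus(startTimeUtc: string, durationMs: number, nowMs: number): Status {
  const start = Date.parse(startTimeUtc);
  const end = start + durationMs; // deterministic end = start + duration
  if (nowMs < start) return "scheduled";
  if (nowMs < end) return "running";
  return "complete";
}

// The DO alarm would be set to the next boundary: the session's start if it
// hasn't begun, its end if it's running, or nothing once it's finished.
function nextBoundary(startTimeUtc: string, durationMs: number, nowMs: number): number | null {
  const start = Date.parse(startTimeUtc);
  const end = start + durationMs;
  if (nowMs < start) return start;
  if (nowMs < end) return end;
  return null; // session complete; no wake-up needed
}
```

In the real DO, the alarm handler would recompute statuses for all sessions and re-arm the alarm at the earliest remaining boundary, which is what chains one session into the next.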
```
Pages (React UI) ──HTTP/WebSocket──▶ Worker Router ──fetchStub──▶ Countdown Durable Object
                                                                            │
                                                                            └────SQL────▶ D1 (snapshots + events)
```
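The Worker's routing layer can stay thin: match the path, then hand everything session-related to the single DO instance. A minimal matching sketch (names are illustrative, not the actual Worker code):

```typescript
// Sketch of the Worker-side route matching for the REST surface above.
type Route =
  | { kind: "health" }
  | { kind: "sessions" }                       // /api/sessions (GET list, POST create)
  | { kind: "session"; sessionId: string }     // /api/sessions/:sessionId
  | { kind: "notFound" };

function matchRoute(pathname: string): Route {
  if (pathname === "/health") return { kind: "health" };
  if (pathname === "/api/sessions") return { kind: "sessions" };
  const m = pathname.match(/^\/api\/sessions\/([^/]+)$/);
  if (m) return { kind: "session", sessionId: m[1] };
  return { kind: "notFound" };
}
```

In the real Worker, matched `/api/...` requests would be forwarded to the DO via its fetch stub, e.g. `env.BINDING.get(env.BINDING.idFromName("default")).fetch(request)` (binding name assumed) — a single named instance so all clients share one coordinator.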
| Field | Type | Notes |
|---|---|---|
| `sessionId` | `string` (UUID) | Unique identifier. |
| `label` | `string` | e.g., "Qualifying". |
| `startTimeUtc` | `string` (ISO) | UTC start timestamp; primary ordering key. |
| `durationMs` | `number` | Duration in milliseconds (deterministic end = start + duration). |
| `status` | `enum` | `scheduled`, `running`, `complete`, `canceled`. |
| `metadata` | JSON | Optional notes (track, stream URL, etc.). |
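The table above maps directly to a TypeScript shape. A sketch of the session type plus the kind of input validation the DO would run on create (the validation helper is illustrative, not the actual code):

```typescript
// Type mirroring the data model table.
interface CountdownSession {
  sessionId: string;                  // UUID
  label: string;                      // e.g. "Qualifying"
  startTimeUtc: string;               // ISO 8601 UTC; primary ordering key
  durationMs: number;                 // end = start + duration
  status: "scheduled" | "running" | "complete" | "canceled";
  metadata?: Record<string, unknown>; // track, stream URL, etc.
}

// Validate a POST /api/sessions body before accepting the mutation.
function isValidSessionInput(
  body: unknown,
): body is Pick<CountdownSession, "label" | "startTimeUtc" | "durationMs"> {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.label === "string" && b.label.length > 0 &&
    typeof b.startTimeUtc === "string" && !Number.isNaN(Date.parse(b.startTimeUtc)) &&
    typeof b.durationMs === "number" && b.durationMs > 0
  );
}
```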
```sql
countdown_state (
  id TEXT PRIMARY KEY DEFAULT 'default',
  version INTEGER NOT NULL,
  snapshot TEXT NOT NULL, -- JSON blob of DO state
  created_at TEXT NOT NULL,
  updated_at TEXT NOT NULL
);
```
```sql
events (
  event_id TEXT PRIMARY KEY,
  session_id TEXT NOT NULL,
  action TEXT NOT NULL,
  payload TEXT, -- nullable JSON
  occurred_at TEXT NOT NULL
);
```

The Durable Object appends to `events` for every mutation and periodically refreshes `countdown_state.snapshot`. Cold starts replay the snapshot plus subsequent events to rebuild in-memory state.
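The snapshot-plus-events recovery can be sketched as a pure fold over the event log. Row shapes and action names here are assumptions for illustration:

```typescript
// Sketch of cold-start recovery: start from the latest snapshot, then apply
// every event appended after it, in order.
interface EventRow {
  action: "create" | "update" | "delete"; // assumed action vocabulary
  session_id: string;
  payload: string | null;                 // JSON body; null for deletes
}

type SessionMap = Record<string, { label: string }>;

function replay(snapshot: SessionMap, events: EventRow[]): SessionMap {
  const sessions: SessionMap = { ...snapshot };
  for (const ev of events) {
    if (ev.action === "delete") {
      delete sessions[ev.session_id];
    } else if (ev.payload !== null) {
      // create and update both overwrite with the logged payload
      sessions[ev.session_id] = JSON.parse(ev.payload);
    }
  }
  return sessions;
}
```

Because events are append-only and applied in `occurred_at` order, replaying them is deterministic: any DO restart converges on the same in-memory state.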
Requires Node.js 22 (run `nvm use 22` from the repo root to sync with `.nvmrc`) and Bun 1.2+ for package management (`curl -fsSL https://bun.sh/install | bash`).
```sh
cd web
bun install
bun run dev  # starts Vite on http://localhost:5173
```

The UI connects to the Worker API via Vite's proxy (configured in `vite.config.ts`).
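That proxy section might look like the following sketch; the `/api` prefix is taken from the Worker's route surface, and port 8787 is `wrangler dev`'s default (both assumptions about the actual config file):

```typescript
// Possible shape of the server.proxy section in web/vite.config.ts.
// In the real file this object would be passed to Vite's defineConfig().
const viteDevConfig = {
  server: {
    proxy: {
      // Forward /api/* from the Vite dev server to the local Worker.
      "/api": {
        target: "http://localhost:8787", // wrangler dev's default port
        changeOrigin: true,
      },
    },
  },
};
```

With this in place, the frontend can call relative paths like `/api/sessions` in both dev and production.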
```sh
cd worker
bun install
bunx wrangler d1 create countdown-db --binding COUNTDOWN_DB  # once per account
bunx wrangler dev --local --persist-to=./.wrangler           # runs the Worker + Durable Object locally
```

| Method | Path | Description |
|---|---|---|
| GET | `/health` | Simple readiness check. |
| GET | `/api/sessions` | List all sessions. |
| POST | `/api/sessions` | Create a new session. Body: `{ "label": "Name", "startTimeUtc": "...", "durationMs": 1800000 }`. |
| GET | `/api/sessions/:sessionId` | Get a specific session. |
| PATCH | `/api/sessions/:sessionId` | Update a session. |
| DELETE | `/api/sessions/:sessionId` | Delete a session. |
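From the frontend, these endpoints reduce to a small typed client. A sketch with the fetch implementation injected so it can be exercised without a running Worker (all names are illustrative):

```typescript
// Minimal client sketch for the REST surface above.
type FetchLike = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string },
) => Promise<{ json(): Promise<unknown> }>;

function makeClient(baseUrl: string, fetchImpl: FetchLike) {
  return {
    listSessions: () => fetchImpl(`${baseUrl}/api/sessions`),
    createSession: (body: { label: string; startTimeUtc: string; durationMs: number }) =>
      fetchImpl(`${baseUrl}/api/sessions`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(body),
      }),
    deleteSession: (sessionId: string) =>
      fetchImpl(`${baseUrl}/api/sessions/${sessionId}`, { method: "DELETE" }),
  };
}
```

In the browser you would pass `fetch` directly, e.g. `makeClient(import.meta.env.VITE_API_URL, fetch)`.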
Example:

```sh
# Create a session
curl -X POST http://localhost:8787/api/sessions \
  -H "content-type: application/json" \
  -d '{"label":"Warm-up","startTimeUtc":"2026-01-01T13:00:00.000Z","durationMs":1800000}'

# List all sessions
curl http://localhost:8787/api/sessions
```

Run the following once after `bunx wrangler d1 create countdown-db` to create the tables in your local database:
```sh
# Create countdown_state table
bunx wrangler d1 execute countdown-db --local --persist-to=./.wrangler --command "CREATE TABLE IF NOT EXISTS countdown_state (id TEXT PRIMARY KEY DEFAULT 'default', version INTEGER NOT NULL, snapshot TEXT NOT NULL, created_at TEXT NOT NULL, updated_at TEXT NOT NULL);"

# Create events table
bunx wrangler d1 execute countdown-db --local --persist-to=./.wrangler --command "CREATE TABLE IF NOT EXISTS events (event_id TEXT PRIMARY KEY, session_id TEXT NOT NULL, action TEXT NOT NULL, payload TEXT, occurred_at TEXT NOT NULL);"
```

Important: the `--local --persist-to=./.wrangler` flags ensure the tables are created in the same local database that `wrangler dev --local --persist-to=./.wrangler` uses. Without these flags, the tables may be created in a different location.

To apply the schema to the remote (deployed) database, use `--remote` instead:

```sh
bunx wrangler d1 execute countdown-db --remote --command "CREATE TABLE ..."
```
- Cloudflare account with access to Workers, D1, Durable Objects, and Pages.
- Bun 1.2+ installed locally (`curl -fsSL https://bun.sh/install | bash`).
- Logged in with Wrangler: `bunx wrangler login` (opens a browser, stores an OAuth token locally).
- Verify access: `bunx wrangler whoami` should print the target account ID.
1. D1 database (once per account):

   ```sh
   cd worker
   bunx wrangler d1 create countdown-db
   ```

   Wrangler outputs a `database_id`; update the `database_id` field in `worker/wrangler.jsonc` under the `d1_databases` binding.

2. Schema: run the following to create the required tables on the remote database:

   ```sh
   # Create countdown_state table
   bunx wrangler d1 execute countdown-db --remote --command "CREATE TABLE IF NOT EXISTS countdown_state (id TEXT PRIMARY KEY DEFAULT 'default', version INTEGER NOT NULL, snapshot TEXT NOT NULL, created_at TEXT NOT NULL, updated_at TEXT NOT NULL);"

   # Create events table
   bunx wrangler d1 execute countdown-db --remote --command "CREATE TABLE IF NOT EXISTS events (event_id TEXT PRIMARY KEY, session_id TEXT NOT NULL, action TEXT NOT NULL, payload TEXT, occurred_at TEXT NOT NULL);"
   ```

3. Durable Object migration: the first `wrangler deploy` automatically registers the `CountdownDurableObject` because `wrangler.jsonc` includes the `new_sqlite_classes` migration tag (`v2`). No manual step needed.
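Putting the pieces named above together, the relevant fragment of `worker/wrangler.jsonc` plausibly looks like this (the `COUNTDOWN_DB` binding, database name, and `v2` / `new_sqlite_classes` tag come from this README; the `database_id` placeholder must be replaced with your own):

```jsonc
{
  // ...other Worker settings...
  "d1_databases": [
    {
      "binding": "COUNTDOWN_DB",
      "database_name": "countdown-db",
      "database_id": "<paste the id printed by `wrangler d1 create`>"
    }
  ],
  "migrations": [
    {
      "tag": "v2",
      "new_sqlite_classes": ["CountdownDurableObject"]
    }
  ]
}
```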
```sh
cd worker
bun install
bun run deploy  # alias for `wrangler deploy`
```

Use `bunx wrangler tail` to stream Worker logs after deploy.
Manual deploy (recommended for initial setup):

```sh
cd web
bun install
VITE_API_URL=https://countdown-worker.<your-subdomain>.workers.dev bun run build
bunx wrangler pages deploy dist --project-name <your-pages-project>
```

- Replace `<your-pages-project>` with your Pages project name (created on first run if it doesn't exist).
- Set `VITE_API_URL` to your deployed Worker URL so the frontend knows where to send API requests.
Dashboard-driven deploy (CI alternative):

- Create a Pages project in the Cloudflare dashboard pointing at your repository.
- Set the root directory to `web/`, the build command to `bun run build`, and the output directory to `dist`.
- Add an environment variable `VITE_API_URL` set to your Worker URL.
- Pushes to the production branch trigger automatic builds.
```sh
# Health check
curl https://<worker-domain>/health
# Expected: { "status": "ok" }

# Create a test session
curl -X POST https://<worker-domain>/api/sessions \
  -H 'content-type: application/json' \
  -d '{"label":"Test","startTimeUtc":"2026-01-01T00:00:00Z","durationMs":3600000}'

# List sessions
curl https://<worker-domain>/api/sessions
```

Visit the Cloudflare Pages URL to confirm the UI loads and connects to the Worker.
- Worker updates: re-run `bun run deploy` from `worker/` after code changes.
- Frontend updates: rebuild and redeploy via `wrangler pages deploy`, or push to trigger dashboard CI.
- Coordinated releases: deploy the Worker first (to maintain API compatibility), then the Pages build.
- Database backups: run `bunx wrangler d1 export countdown-db --remote --output backup.sql` before major changes.
If you see errors like:

```
Cannot apply new_sqlite_classes migration to existing class CountdownDurableObject
```

or

```
Cannot apply deleted_classes migration to non-existent class ...
```

this happens when Wrangler's local or remote migration state is out of sync with your `wrangler.jsonc` migrations.
Solution:

- Clear local Wrangler state: `rm -rf worker/.wrangler`
- If the error persists, consolidate your migrations in `wrangler.jsonc`. For a fresh deployment, you can combine multiple migration steps into one (e.g., use `new_sqlite_classes` directly instead of `new_classes` followed by a separate SQLite migration).
- Remove any `deleted_classes` migrations that reference classes that were never deployed.
Important: Only consolidate migrations if you haven't deployed them to production yet. Once migrations are deployed, they become part of the permanent history.
- Architecture + data model defined
- UI scaffolded with Tailwind theme
- Worker + Durable Object scaffolded with CRUD routes and D1 sync hooks
- UI wired to Worker API (sessions CRUD)
- Countdown logic + live updates implemented
- GitHub Actions CI/CD workflow