An Ethereum validator performance dashboard for the 5 validators tracked in the assignment. It fetches live Beacon chain data, computes missed attestation rewards using the Altair reward model, caches historical results locally, and shows the results in a React dashboard.
Live frontend: https://attest.frontend.vercel.app/
Repository: https://github.com/tutankhaman/attest
- Tracks 5 Ethereum validators over selectable windows: `7d`, `30d`, and `3m`
- Computes per-epoch missed reward breakdown for `source`, `target`, and `head`
- Shows attestation effectiveness, missed ETH, failure counts, trend charts, and an epoch-by-epoch table
- Uses live Beacon RPC plus `beaconcha.in` history endpoints
- Persists a 90-day materialized cache in SQLite
- Returns explicit missing/degraded states instead of pretending incomplete data is complete
| Label | Index | Pubkey |
|---|---|---|
| V1 | 1344884 | `0x89ca023f...f4ba580fa` |
| V2 | 1344886 | `0xaf6609c7...127d7bfdb` |
| V3 | 1345223 | `0x8d357d15...c142aee76` |
| V4 | 1345271 | `0xb936fc73...6e1d41891` |
| V5 | 2176453 | `0xa72e6d79...f50ce74db` |
The dashboard is built as a small monorepo:
- `frontend/`: React 19 + Vite + TanStack Query + TanStack Router + Recharts
- `backend/`: Bun + Elysia REST API
- `shared/`: epoch helpers, tracked validators, and API contract types
The frontend and backend are deployable independently. The frontend talks to the backend through `VITE_API_URL`.
- Validator summary cards
  - effectiveness %
  - total ETH missed
  - missed ETH by `source`, `target`, `head`
  - failure counts
- Trend chart
  - per-epoch effectiveness for all 5 validators on one chart
  - missing epochs are shown as gaps
- Epoch table
  - sortable and filterable
  - shows per-epoch flags, classification, and missed ETH
- Health and readiness state
  - finalized epoch
  - cache readiness for `7d`, `30d`, `90d`
  - degraded coverage and missing-epoch reasons
One epoch is 32 slots × 12 seconds = 384 seconds, so:

- `7d` = 1,575 epochs
- `30d` = 6,750 epochs
- `3m` = 20,250 epochs
The frontend applies a 2-epoch finality buffer, and the backend enforces a maximum range of 90 days.
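The window-to-epoch math above can be sketched as follows. This is an illustrative helper, assuming the finality buffer is applied by ending the range two epochs behind the finalized head; names like `epochRange` are hypothetical, not necessarily those in `shared/`.

```typescript
// Epoch timing constants from the Beacon chain spec.
const SLOTS_PER_EPOCH = 32;
const SECONDS_PER_SLOT = 12;
const SECONDS_PER_EPOCH = SLOTS_PER_EPOCH * SECONDS_PER_SLOT; // 384

// Selectable dashboard windows, expressed in seconds (3m ≈ 90 days).
const WINDOW_SECONDS = {
  "7d": 7 * 24 * 3600,
  "30d": 30 * 24 * 3600,
  "3m": 90 * 24 * 3600,
} as const;

const FINALITY_BUFFER_EPOCHS = 2;

// Given the chain's current finalized epoch, return the inclusive epoch
// range covered by a window, ending FINALITY_BUFFER_EPOCHS behind it.
function epochRange(window: keyof typeof WINDOW_SECONDS, finalizedEpoch: number) {
  const epochs = Math.floor(WINDOW_SECONDS[window] / SECONDS_PER_EPOCH);
  const endEpoch = finalizedEpoch - FINALITY_BUFFER_EPOCHS;
  return { startEpoch: endEpoch - epochs + 1, endEpoch, epochs };
}
```

This reproduces the counts above: 604,800 / 384 = 1,575 epochs for `7d`, and so on.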
```mermaid
flowchart LR
  U[User Browser] --> F[Frontend<br/>React + Vite]
  F -->|GET /api/health| B[Backend API<br/>Bun + Elysia]
  F -->|GET /api/performance| B
  F -->|GET /api/validators| B
  B --> S[(SQLite materialized cache)]
  B --> R[Beacon RPC]
  B --> C[beaconcha.in]
  R --> B
  C --> B
  S --> B
```
```mermaid
flowchart TD
  A[Frontend selects range] --> B[GET /api/performance]
  B --> C[Read SQLite cache first]
  C --> D{Coverage complete?}
  D -- Yes --> E[Return full response]
  D -- No --> F[Live-warm newest missing epochs]
  F --> G{Enough coverage?}
  G -- Yes --> H[Return degraded response with missing rows]
  G -- No --> I[Return explicit error or 503]
```
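The cache-first flow in the diagram can be sketched as below. Everything here is illustrative: `readCachedRows`, `warmEpochs`, the `EpochRow` shape, and the 50% coverage threshold are assumptions, not the repo's actual code; only the warm limit mirrors a documented knob (`REQUEST_WARM_EPOCH_LIMIT`).

```typescript
type EpochRow = { epoch: number; validatorIndex: number };

async function getPerformance(
  startEpoch: number,
  endEpoch: number,
  readCachedRows: (s: number, e: number) => Promise<EpochRow[]>,
  warmEpochs: (epochs: number[]) => Promise<EpochRow[]>,
  warmLimit = 4, // mirrors REQUEST_WARM_EPOCH_LIMIT
) {
  const total = endEpoch - startEpoch + 1;
  // 1. Read the SQLite cache first.
  let rows = await readCachedRows(startEpoch, endEpoch);

  // 2. Find missing epochs, newest first.
  const cached = new Set(rows.map((r) => r.epoch));
  const missing: number[] = [];
  for (let e = endEpoch; e >= startEpoch; e--) if (!cached.has(e)) missing.push(e);

  if (missing.length === 0) return { status: "ok", rows, missingEpochs: [] };

  // 3. Live-warm only the newest few missing epochs; report the rest as gaps.
  rows = rows.concat(await warmEpochs(missing.slice(0, warmLimit)));
  const stillMissing = missing.slice(warmLimit);
  const coverage = (total - stillMissing.length) / total;

  // 4. Fail explicitly instead of pretending incomplete data is complete.
  if (coverage < 0.5) throw new Error("503: insufficient coverage");
  return { status: stillMissing.length ? "degraded" : "ok", rows, missingEpochs: stillMissing };
}
```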
```mermaid
flowchart TD
  A[Backend starts] --> B[Fetch finalized checkpoint]
  B --> C[Warm latest 7d first]
  C --> D[Bootstrap remaining 90d in background]
  D --> E[Persist per-epoch rows in SQLite]
  E --> F[Serve later requests from cache]
  F --> G[Append finalized sync + targeted gap repair]
```
This project implements the Altair missed-reward model directly in code.
For each validator and epoch:
- Load validator `effective_balance`
- Derive `totalActiveBalance`
- Compute `base_reward`
- Determine whether the validator earned:
  - `TIMELY_SOURCE` with weight `14`
  - `TIMELY_TARGET` with weight `26`
  - `TIMELY_HEAD` with weight `14`
- Compute missed reward per failed flag: `missed_reward = (base_reward * flag_weight) / 64`
- Sum missed `source + target + head`
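The steps above can be sketched in TypeScript. The constants and the `base_reward` formula come from the Altair spec; the function names are illustrative and may differ from the repo's implementation.

```typescript
// Altair participation flag weights and reward constants (spec values).
const TIMELY_SOURCE_WEIGHT = 14n;
const TIMELY_TARGET_WEIGHT = 26n;
const TIMELY_HEAD_WEIGHT = 14n;
const WEIGHT_DENOMINATOR = 64n;
const EFFECTIVE_BALANCE_INCREMENT = 1_000_000_000n; // 1 ETH in gwei
const BASE_REWARD_FACTOR = 64n;

// Integer square root, as required by the spec's integer math.
function integerSqrt(n: bigint): bigint {
  if (n < 2n) return n;
  let x = n, y = (x + 1n) / 2n;
  while (y < x) { x = y; y = (x + n / x) / 2n; }
  return x;
}

// base_reward = increments * (EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR
//               / integer_sqrt(total_active_balance)), all in gwei.
function baseReward(effectiveBalance: bigint, totalActiveBalance: bigint): bigint {
  const increments = effectiveBalance / EFFECTIVE_BALANCE_INCREMENT;
  const perIncrement =
    (EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR) / integerSqrt(totalActiveBalance);
  return increments * perIncrement;
}

// Sum base_reward * flag_weight / 64 over each flag the validator failed to earn.
function missedReward(
  effectiveBalance: bigint,
  totalActiveBalance: bigint,
  earned: { source: boolean; target: boolean; head: boolean },
): bigint {
  const br = baseReward(effectiveBalance, totalActiveBalance);
  let missed = 0n;
  if (!earned.source) missed += (br * TIMELY_SOURCE_WEIGHT) / WEIGHT_DENOMINATOR;
  if (!earned.target) missed += (br * TIMELY_TARGET_WEIGHT) / WEIGHT_DENOMINATOR;
  if (!earned.head) missed += (br * TIMELY_HEAD_WEIGHT) / WEIGHT_DENOMINATOR;
  return missed;
}
```

For example, with a 32 ETH effective balance and a hypothetical 10M ETH total active balance, a fully missed attestation forfeits `base_reward * 54 / 64` gwei.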
Classifications used by the UI:
`CORRECT`, `MISSED_ENTIRELY`, `WRONG_SOURCE`, `WRONG_TARGET`, `WRONG_HEAD`, `LATE_INCLUSION`
Returns backend readiness and ingestion state.
Important fields:
- `status`: `ok | warming | degraded`
- `finalizedEpoch`
- `dashboardReady`
- `ingestion.latest7dReady`
- `ingestion.latest30dReady`
- `ingestion.latest90dReady`
- `queueStats`
Returns validator metadata:
`index`, `pubkey`, `pubkeyShort`, `balance`, `effectiveBalance`, `status`
Returns:
- per-validator summary cards
- per-epoch rows
- missing epoch metadata
- coverage ratio
- degraded state information
The shared response contract lives in shared/contract.ts.
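Based on the fields listed above, the contract might look roughly like the sketch below. The exact names and shapes in `shared/contract.ts` may differ; this is a hedged approximation, not a copy of the file.

```typescript
// Per-epoch classification values used by the UI.
type Classification =
  | "CORRECT"
  | "MISSED_ENTIRELY"
  | "WRONG_SOURCE"
  | "WRONG_TARGET"
  | "WRONG_HEAD"
  | "LATE_INCLUSION";

// One row per validator per epoch.
interface EpochRow {
  epoch: number;
  validatorIndex: number;
  classification: Classification;
  missedGwei: { source: number; target: number; head: number };
}

// Shape of the /api/performance response described above.
interface PerformanceResponse {
  summaries: { validatorIndex: number; effectivenessPct: number; totalMissedEth: number }[];
  rows: EpochRow[];
  missingEpochs: { epoch: number; reason: string }[];
  coverageRatio: number; // 0..1
  degraded: boolean;
}
```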
- Bun `1.3.10` or newer
- Node.js `18+`
- One Beacon RPC endpoint
- One `beaconcha.in` API key
From the repository root:

```bash
bun install
```

Copy the backend example file and fill in the secrets:

```bash
cp backend/.env.example backend/.env
```

Minimum backend config:

```bash
BEACON_RPC_URL="https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
BEACONCHAIN_API_KEY="your_free_tier_key"
PORT="3001"
ENABLE_INGESTION_WORKER=true
VALIDATOR_INDICES=1344884,1344886,1345223,1345271,2176453
```

Create `frontend/.env.local`:

```bash
VITE_API_URL="http://localhost:3001"
```

Run both apps from the monorepo root:

```bash
bun run dev
```

Or run them separately:

```bash
cd backend
bun run dev
```

```bash
cd frontend
bun run dev
```

Expected local URLs:

- frontend: `http://localhost:5173`
- backend: `http://localhost:3001`
| Variable | Default | Purpose |
|---|---|---|
| `BEACON_RPC_URL` | none | Beacon REST API base URL |
| `BEACONCHAIN_API_KEY` | none | beaconcha.in API key |
| `PORT` | `3001` | Backend port |
| `CACHE_DB_PATH` | `epoch-cache.sqlite` | SQLite file location |
| `ENABLE_INGESTION_WORKER` | `false` | Enable background ingestion worker |
| `VALIDATOR_INDICES` | empty | Comma-separated validators tracked by ingestion |
| `INGESTION_WARM_WINDOW_EPOCHS` | `20250` | Target warm cache size |
| `INGESTION_STARTUP_DELAY_MS` | `0` | Delay before startup bootstrap begins |
| `INGESTION_BOOTSTRAP_CHUNK_EPOCHS` | derived | Epochs processed per bootstrap chunk |
| `INGESTION_BACKGROUND_CHUNK_EPOCHS` | derived | Epochs processed in background backfill |
| `INGESTION_STARTUP_STALL_LIMIT` | `3` | Startup retry threshold |
| `REQUEST_WARM_EPOCH_LIMIT` | `4` | Max missing epochs warmed during a request |
| `LIVE_WARMUP_TIMEOUT_MS` | `120000` | Upper bound for request-time warmup |
| `BEACON_RPC_CONCURRENCY` | `10` | Beacon RPC concurrency |
| `BEACONCHAIN_CONCURRENCY` | `1` | beaconcha.in concurrency |
| `BEACONCHAIN_REQUEST_SPACING_MS` | `5000` | Minimum spacing between beaconcha.in requests |
| `RPC_EPOCH_CONCURRENCY` | `10` | Epoch-level RPC fetch concurrency |
| `RATE_LIMIT_EXTRA_DELAY_MS` | `5000` | Extra wait for retryable rate limits |
| Variable | Default | Purpose |
|---|---|---|
| `VITE_API_URL` | `http://localhost:3001` | Backend base URL |
From the monorepo root:

```bash
bun run dev
bun run build
bun run check-types
bun run check
bun run fix
```

Backend:

```bash
cd backend
bun run dev
bun run test
bun run reconcile
bun run backfill --window=7d
bun run backfill --startEpoch=431000 --endEpoch=433200
bun run cache-inspect
bun run verify-implementation
```

Frontend:

```bash
cd frontend
bun run dev
bun run build
bun run check-types
```

The frontend is deployed on Vercel:
- Live URL: https://attest.frontend.vercel.app/
Root `vercel.json`:

- runs `bun run build --filter=frontend`
- installs with `bun install`
- publishes `frontend/dist`
The backend is containerized with backend/Dockerfile.
Build from the repository root:
```bash
docker build -f backend/Dockerfile -t attest-backend .
```

Run:

```bash
docker run --rm -p 3001:3001 \
  -e BEACON_RPC_URL="https://your-beacon-rpc" \
  -e BEACONCHAIN_API_KEY="your-api-key" \
  -e ENABLE_INGESTION_WORKER=true \
  -e VALIDATOR_INDICES="1344884,1344886,1345223,1345271,2176453" \
  -v "$(pwd)/data:/data" \
  attest-backend
```

The container stores SQLite data in `/data/epoch-cache.sqlite` by default. Mount `/data` if you want cache persistence across restarts.
```
.
├── backend/
│   ├── src/
│   │   ├── beacon/        # upstream Beacon RPC + beaconcha.in clients
│   │   ├── ingest/        # SQLite-backed materialization and workers
│   │   ├── performance/   # request path and warmup logic
│   │   ├── rewards/       # Altair reward math and classification
│   │   └── routes/        # Elysia API routes
│   ├── scripts/           # reconciliation and verification scripts
│   └── Dockerfile
├── frontend/
│   └── src/
│       ├── components/    # cards, chart, table, UI primitives
│       ├── hooks/         # health and performance queries
│       ├── routes/        # dashboard route
│       └── api/           # Eden client
├── shared/                # contract, epoch helpers, tracked validators
├── context/               # assignment, plan, architecture notes
├── dev-notes.md           # implementation findings
└── RESEARCH.md            # research write-up
```
This project was built around real mainnet constraints:
- Full Beacon state downloads are too large for request-time use
- `beaconcha.in` enforces tight rate and pagination limits
- Freshly finalized epochs can lag provider indexing by about 2 epochs
- Historical ranges should not be recomputed from scratch on every request
That is why the backend is built around:
- bounded provider concurrency
- retry with backoff
- explicit missing-row persistence
- SQLite materialization
- request-time live warmup for small gaps
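A retry-with-backoff helper in the spirit of the bullets above could look like the sketch below. This is illustrative, not the repo's actual implementation; the `extraDelayMs` default mirrors the documented `RATE_LIMIT_EXTRA_DELAY_MS` knob, and the 429 check is an assumption about how rate limits surface.

```typescript
// Retry an async operation with exponential backoff, adding extra delay
// when the failure looks like a retryable rate limit.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 5, baseDelayMs = 500, extraDelayMs = 5000 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Hypothetical rate-limit detection: providers differ in how they signal 429s.
      const rateLimited = err instanceof Error && err.message.includes("429");
      if (i < attempts - 1) {
        const delay = baseDelayMs * 2 ** i + (rateLimited ? extraDelayMs : 0);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}
```

Bounded concurrency would then be layered on top, so at most `BEACON_RPC_CONCURRENCY` (or `BEACONCHAIN_CONCURRENCY`) such calls run at once.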
- Historical accuracy depends on `beaconcha.in` for efficient attestation history hydration
- If upstream coverage is incomplete, the backend surfaces degraded or missing data rather than hiding it
- SQLite is the right fit for this assessment, but not the final answer for horizontally scaled multi-instance deployment
- Research write-up: `RESEARCH.md`
- Dev findings: `dev-notes.md`