
Attest

Ethereum validator performance dashboard for the 5 tracked validators in the assignment. It fetches live Beacon chain data, computes missed attestation rewards using the Altair reward model, caches historical results locally, and presents them in a React dashboard.

Live frontend: https://attest.frontend.vercel.app/
Repository: https://github.com/tutankhaman/attest

What This Project Does

  • Tracks 5 Ethereum validators over selectable windows: 7d, 30d, and 3m
  • Computes per-epoch missed reward breakdown for source, target, and head
  • Shows attestation effectiveness, missed ETH, failure counts, trend charts, and an epoch-by-epoch table
  • Uses live Beacon RPC plus beaconcha.in history endpoints
  • Persists a 90-day materialized cache in SQLite
  • Returns explicit missing/degraded states instead of pretending incomplete data is complete

Tracked Validators

| Label | Index | Pubkey |
| --- | --- | --- |
| V1 | 1344884 | 0x89ca023f...f4ba580fa |
| V2 | 1344886 | 0xaf6609c7...127d7bfdb |
| V3 | 1345223 | 0x8d357d15...c142aee76 |
| V4 | 1345271 | 0xb936fc73...6e1d41891 |
| V5 | 2176453 | 0xa72e6d79...f50ce74db |

Product Overview

The dashboard is built as a small monorepo:

  • frontend/: React 19 + Vite + TanStack Query + TanStack Router + Recharts
  • backend/: Bun + Elysia REST API
  • shared/: epoch helpers, tracked validators, and API contract types

The frontend and backend are deployable independently. The frontend talks to the backend through VITE_API_URL.

Main Screens And Data Views

  • Validator summary cards
    • effectiveness %
    • total ETH missed
    • missed ETH by source, target, head
    • failure counts
  • Trend chart
    • per-epoch effectiveness for all 5 validators on one chart
    • missing epochs are shown as gaps
  • Epoch table
    • sortable and filterable
    • shows per-epoch flags, classification, and missed ETH
  • Health and readiness state
    • finalized epoch
    • cache readiness for 7d, 30d, 90d
    • degraded coverage and missing-epoch reasons

Range Model

One epoch is 32 slots x 12 seconds = 384 seconds, so:

  • 7d = 1,575 epochs
  • 30d = 6,750 epochs
  • 3m = 20,250 epochs

The frontend applies a 2-epoch finality buffer, and the backend enforces a maximum range of 90 days.
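The window sizes follow directly from the 384-second epoch length; a minimal sketch (the helper name is illustrative, not the project's actual API):

```typescript
// Epoch counts per selectable window, derived from 384-second epochs.
const SECONDS_PER_EPOCH = 32 * 12; // 32 slots x 12 seconds = 384

const WINDOW_DAYS = { "7d": 7, "30d": 30, "3m": 90 } as const;

function epochsForWindow(window: keyof typeof WINDOW_DAYS): number {
  const seconds = WINDOW_DAYS[window] * 24 * 60 * 60;
  return seconds / SECONDS_PER_EPOCH; // 1575, 6750, 20250 respectively
}
```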

Architecture

flowchart LR
    U[User Browser] --> F[Frontend<br/>React + Vite]
    F -->|GET /api/health| B[Backend API<br/>Bun + Elysia]
    F -->|GET /api/performance| B
    F -->|GET /api/validators| B

    B --> S[(SQLite materialized cache)]
    B --> R[Beacon RPC]
    B --> C[beaconcha.in]

    R --> B
    C --> B
    S --> B

Request Flow

flowchart TD
    A[Frontend selects range] --> B[GET /api/performance]
    B --> C[Read SQLite cache first]
    C --> D{Coverage complete?}
    D -- Yes --> E[Return full response]
    D -- No --> F[Live-warm newest missing epochs]
    F --> G{Enough coverage?}
    G -- Yes --> H[Return degraded response with missing rows]
    G -- No --> I[Return explicit error or 503]
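The coverage decision in that flow can be sketched as follows; `EpochRow`, `MIN_COVERAGE`, and `decideResponse` are assumptions for illustration, not the backend's actual identifiers or threshold:

```typescript
// Illustrative sketch of the cache-first decision in the request flow.
type EpochRow = { epoch: number; cached: boolean };

const MIN_COVERAGE = 0.95; // assumed cutoff for serving a degraded response

function decideResponse(rows: EpochRow[]): "full" | "degraded" | "error" {
  const covered = rows.filter((r) => r.cached).length;
  const coverage = rows.length === 0 ? 0 : covered / rows.length;
  if (coverage === 1) return "full"; // every requested epoch is materialized
  if (coverage >= MIN_COVERAGE) return "degraded"; // rows plus missing-epoch metadata
  return "error"; // surface an explicit 503 instead of silently partial data
}
```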

Cache And Ingestion Flow

flowchart TD
    A[Backend starts] --> B[Fetch finalized checkpoint]
    B --> C[Warm latest 7d first]
    C --> D[Bootstrap remaining 90d in background]
    D --> E[Persist per-epoch rows in SQLite]
    E --> F[Serve later requests from cache]
    F --> G[Append finalized sync + targeted gap repair]

Reward Computation

This project implements the Altair missed-reward model directly in code.

For each validator and epoch:

  1. Load validator effective_balance
  2. Derive totalActiveBalance
  3. Compute base_reward
  4. Determine whether the validator earned:
    • TIMELY_SOURCE with weight 14
    • TIMELY_TARGET with weight 26
    • TIMELY_HEAD with weight 14
  5. Compute the missed reward for each failed flag:
     missed_reward = (base_reward * flag_weight) / 64
  6. Sum the missed source, target, and head rewards
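That math can be sketched with the Altair spec constants (BASE_REWARD_FACTOR = 64, WEIGHT_DENOMINATOR = 64); amounts are Gwei as bigint, and the function names here are illustrative rather than the project's actual code:

```typescript
// Missed-reward math per the Altair spec constants.
const EFFECTIVE_BALANCE_INCREMENT = 1_000_000_000n; // 1 ETH in Gwei
const BASE_REWARD_FACTOR = 64n;
const WEIGHT_DENOMINATOR = 64n;
const FLAG_WEIGHTS = { source: 14n, target: 26n, head: 14n } as const;

// Integer square root (Newton's method), as used by the consensus spec.
function integerSqrt(n: bigint): bigint {
  if (n < 2n) return n;
  let x = n;
  let y = (x + 1n) / 2n;
  while (y < x) {
    x = y;
    y = (x + n / x) / 2n;
  }
  return x;
}

// get_base_reward: increments * (increment * factor / sqrt(total_active_balance))
function baseReward(effectiveBalance: bigint, totalActiveBalance: bigint): bigint {
  const increments = effectiveBalance / EFFECTIVE_BALANCE_INCREMENT;
  const perIncrement =
    (EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR) /
    integerSqrt(totalActiveBalance);
  return increments * perIncrement;
}

// missed_reward = (base_reward * flag_weight) / 64, per failed flag
function missedReward(base: bigint, flag: keyof typeof FLAG_WEIGHTS): bigint {
  return (base * FLAG_WEIGHTS[flag]) / WEIGHT_DENOMINATOR;
}
```

For example, a 32 ETH validator against a 10,000,000 ETH total active balance yields a base reward of 20,480 Gwei per epoch, so a missed target flag costs 26/64 of that.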

Classifications used by the UI:

  • CORRECT
  • MISSED_ENTIRELY
  • WRONG_SOURCE
  • WRONG_TARGET
  • WRONG_HEAD
  • LATE_INCLUSION
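One plausible mapping from timeliness flags to these classifications is sketched below; it is hypothetical, and it omits the inclusion-delay check the real classifier would need to detect LATE_INCLUSION:

```typescript
// Illustrative flag-to-classification mapping for the UI labels above.
type Flags = { source: boolean; target: boolean; head: boolean };

type Classification =
  | "CORRECT"
  | "MISSED_ENTIRELY"
  | "WRONG_SOURCE"
  | "WRONG_TARGET"
  | "WRONG_HEAD";

function classify(attested: boolean, flags: Flags): Classification {
  if (!attested) return "MISSED_ENTIRELY";
  if (!flags.source) return "WRONG_SOURCE"; // a wrong source forfeits target and head too
  if (!flags.target) return "WRONG_TARGET";
  if (!flags.head) return "WRONG_HEAD";
  return "CORRECT";
}
```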

API Surface

GET /api/health

Returns backend readiness and ingestion state.

Important fields:

  • status: ok | warming | degraded
  • finalizedEpoch
  • dashboardReady
  • ingestion.latest7dReady
  • ingestion.latest30dReady
  • ingestion.latest90dReady
  • queueStats

GET /api/validators?indices=...

Returns validator metadata:

  • index
  • pubkey
  • pubkeyShort
  • balance
  • effectiveBalance
  • status

GET /api/performance?indices=...&startEpoch=...&endEpoch=...

Returns:

  • per-validator summary cards
  • per-epoch rows
  • missing epoch metadata
  • coverage ratio
  • degraded state information

The shared response contract lives in shared/contract.ts.
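As an illustration only, the response might look like the shape below; the authoritative types live in shared/contract.ts, and the field names here are assumptions:

```typescript
// Hypothetical sketch of the /api/performance response shape.
interface PerformanceResponse {
  summaries: Array<{
    index: number;
    effectiveness: number; // 0..1
    missedEthTotal: number;
    missedEthByFlag: { source: number; target: number; head: number };
    failureCounts: Record<string, number>;
  }>;
  epochs: Array<{
    epoch: number;
    index: number;
    classification: string;
    missedGwei: number;
  }>;
  missingEpochs: Array<{ epoch: number; reason: string }>;
  coverageRatio: number; // covered epochs / requested epochs
  degraded: boolean;
}

// A fully covered, empty-range response for illustration:
const sample: PerformanceResponse = {
  summaries: [],
  epochs: [],
  missingEpochs: [],
  coverageRatio: 1,
  degraded: false,
};
```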

Local Setup

Prerequisites

  • Bun 1.3.10 or newer
  • Node.js 18+
  • One Beacon RPC endpoint
  • One beaconcha.in API key

1. Install dependencies

From the repository root:

bun install

2. Configure backend environment

Copy the backend example file and fill the secrets:

cp backend/.env.example backend/.env

Minimum backend config:

BEACON_RPC_URL="https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
BEACONCHAIN_API_KEY="your_free_tier_key"
PORT="3001"
ENABLE_INGESTION_WORKER=true
VALIDATOR_INDICES=1344884,1344886,1345223,1345271,2176453

3. Configure frontend environment

Create frontend/.env.local:

VITE_API_URL="http://localhost:3001"

4. Run the app

Run both apps from the monorepo root:

bun run dev

Or run them in two separate terminals:

cd backend
bun run dev

cd frontend
bun run dev

Expected local URLs:

  • frontend: http://localhost:5173
  • backend: http://localhost:3001

Environment Variables

Backend

| Variable | Default | Purpose |
| --- | --- | --- |
| BEACON_RPC_URL | none | Beacon REST API base URL |
| BEACONCHAIN_API_KEY | none | beaconcha.in API key |
| PORT | 3001 | Backend port |
| CACHE_DB_PATH | epoch-cache.sqlite | SQLite file location |
| ENABLE_INGESTION_WORKER | false | Enable background ingestion worker |
| VALIDATOR_INDICES | empty | Comma-separated validator indices tracked by ingestion |
| INGESTION_WARM_WINDOW_EPOCHS | 20250 | Target warm cache size |
| INGESTION_STARTUP_DELAY_MS | 0 | Delay before startup bootstrap begins |
| INGESTION_BOOTSTRAP_CHUNK_EPOCHS | derived | Epochs processed per bootstrap chunk |
| INGESTION_BACKGROUND_CHUNK_EPOCHS | derived | Epochs processed in background backfill |
| INGESTION_STARTUP_STALL_LIMIT | 3 | Startup retry threshold |
| REQUEST_WARM_EPOCH_LIMIT | 4 | Max missing epochs warmed during a request |
| LIVE_WARMUP_TIMEOUT_MS | 120000 | Upper bound for request-time warmup |
| BEACON_RPC_CONCURRENCY | 10 | Beacon RPC concurrency |
| BEACONCHAIN_CONCURRENCY | 1 | beaconcha.in concurrency |
| BEACONCHAIN_REQUEST_SPACING_MS | 5000 | Minimum spacing between beaconcha.in requests |
| RPC_EPOCH_CONCURRENCY | 10 | Epoch-level RPC fetch concurrency |
| RATE_LIMIT_EXTRA_DELAY_MS | 5000 | Extra wait for retryable rate limits |
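To illustrate how BEACONCHAIN_REQUEST_SPACING_MS-style pacing can work, here is a pure helper that computes how long the next upstream call must wait; the name and shape are assumptions, not the backend's actual code:

```typescript
// Given when the previous upstream request fired, return the delay (ms)
// needed to keep consecutive requests at least spacingMs apart.
function spacingDelayMs(lastRequestAt: number, now: number, spacingMs: number): number {
  return Math.max(0, lastRequestAt + spacingMs - now);
}
```

A caller would sleep for the returned delay (e.g. via a `setTimeout`-backed promise) before issuing the next beaconcha.in request.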

Frontend

| Variable | Default | Purpose |
| --- | --- | --- |
| VITE_API_URL | http://localhost:3001 | Backend base URL |
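The frontend can resolve the base URL with a small helper like the one below (hypothetical; in Vite code it would be called with `import.meta.env`, which Vite populates at build time with VITE_-prefixed variables):

```typescript
// Resolve the backend base URL with the same default as the table above.
function resolveApiUrl(env: { VITE_API_URL?: string }): string {
  return env.VITE_API_URL ?? "http://localhost:3001";
}
```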

Useful Commands

Root

bun run dev
bun run build
bun run check-types
bun run check
bun run fix

Backend

cd backend
bun run dev
bun run test
bun run reconcile
bun run backfill --window=7d
bun run backfill --startEpoch=431000 --endEpoch=433200
bun run cache-inspect
bun run verify-implementation

Frontend

cd frontend
bun run dev
bun run build
bun run check-types

Deployment

Frontend

The frontend is deployed on Vercel:

Root vercel.json:

  • runs bun run build --filter=frontend
  • installs with bun install
  • publishes frontend/dist

Backend

The backend is containerized with backend/Dockerfile.

Build from the repository root:

docker build -f backend/Dockerfile -t attest-backend .

Run:

docker run --rm -p 3001:3001 \
  -e BEACON_RPC_URL="https://your-beacon-rpc" \
  -e BEACONCHAIN_API_KEY="your-api-key" \
  -e ENABLE_INGESTION_WORKER=true \
  -e VALIDATOR_INDICES="1344884,1344886,1345223,1345271,2176453" \
  -v "$(pwd)/data:/data" \
  attest-backend

The container stores SQLite data in /data/epoch-cache.sqlite by default. Mount /data if you want cache persistence across restarts.

Repository Layout

.
├── backend/
│   ├── src/
│   │   ├── beacon/          # upstream Beacon RPC + beaconcha.in clients
│   │   ├── ingest/          # SQLite-backed materialization and workers
│   │   ├── performance/     # request path and warmup logic
│   │   ├── rewards/         # Altair reward math and classification
│   │   └── routes/          # Elysia API routes
│   ├── scripts/             # reconciliation and verification scripts
│   └── Dockerfile
├── frontend/
│   └── src/
│       ├── components/      # cards, chart, table, UI primitives
│       ├── hooks/           # health and performance queries
│       ├── routes/          # dashboard route
│       └── api/             # Eden client
├── shared/                  # contract, epoch helpers, tracked validators
├── context/                 # assignment, plan, architecture notes
├── dev-notes.md             # implementation findings
└── RESEARCH.md              # research write-up

Provider And Data Constraints

This project was built around real mainnet constraints:

  • Full Beacon state downloads are too large for request-time use
  • beaconcha.in enforces tight rate and pagination limits
  • Freshly finalized epochs can lag provider indexing by about 2 epochs
  • Historical ranges should not be recomputed from scratch on every request

That is why the backend is built around:

  • bounded provider concurrency
  • retry with backoff
  • explicit missing-row persistence
  • SQLite materialization
  • request-time live warmup for small gaps

Known Limitations

  • Historical accuracy depends on beaconcha.in for efficient attestation history hydration
  • If upstream coverage is incomplete, the backend surfaces degraded or missing data rather than hiding it
  • SQLite fits this assessment well, but would not suit a horizontally scaled, multi-instance deployment
