2026-03-10: GET /eth/v1/beacon/states/head on https://ethereum-beacon-api.publicnode.com returned 404 NOT_FOUND with a 52-byte error payload, so this provider cannot be used as evidence for full beacon state size.
- Changed the API provider from PublicNode to Alchemy, since PublicNode does not support the Beacon REST payloads we need.
2026-03-10: curl -sS -o /tmp/beacon-state-head.json -w "%{http_code} %{size_download}\n" "https://eth-mainnetbeacon.g.alchemy.com/v2/OTNg2L1HgCOTYVgFBfpRA/eth/v1/beacon/states/head" 400 80
- returned `400` (80-byte error payload), so Alchemy cannot serve the full-state download either.
curl -sS -o /tmp/chainstack-debug-state.json -w '%{http_code} %{size_download}\n' 'https://ethereum-mainnet.core.chainstack.com/beacon/17686f418eea66c0e5e5afc129036e1d/eth/v2/debug/beacon/states/head' 200 922353372
- A live request to /eth/v2/debug/beacon/states/head on March 10, 2026 returned 922,353,372 bytes (~922.35 MB) for a single full BeaconState download. At 5 validators over 675 epochs (90 days), that would imply ~3,112.94 GB (~3.11 TB) of raw state downloads, which makes direct per-epoch state fetching operationally unreasonable for this dashboard.
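The projection above can be sketched as follows; `projectedStateDownloadGB` is a hypothetical helper, not code from the repo, and it assumes one full-state download per validator per epoch.

```typescript
// Measured size of one /eth/v2/debug/beacon/states/head response (bytes).
const BEACON_STATE_BYTES = 922_353_372;

// Project raw download volume (decimal gigabytes) if we fetched the full
// BeaconState once per validator per epoch.
function projectedStateDownloadGB(
  epochs: number,
  validators: number,
  bytesPerState: number = BEACON_STATE_BYTES,
): number {
  return (epochs * validators * bytesPerState) / 1e9;
}

// 675 epochs x 5 validators -> ~3,112.94 GB of raw state data.
```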
2026-03-10: GET /eth/v1/beacon/rewards/attestations/{epoch} returns 404 on Chainstack — it is a POST endpoint, not GET; the Beacon API spec documents it as POST with validator indices in the request body.
2026-03-10: POST /eth/v1/beacon/rewards/attestations/433117 with body ["1344884"] returns ideal_rewards and total_rewards — this endpoint gives us exact per-validator attestation rewards broken down by head/target/source, directly from the beacon node's state transition. This is the gold-standard reconciliation source.
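The request shape can be sketched like this; `buildAttestationRewardsRequest` is a hypothetical helper (not code from the repo) that assembles the call per the Beacon API spec: POST with the validator indices as a JSON array body, where a GET to the same path 404s.

```typescript
// Hypothetical helper: build the POST request for per-validator attestation
// rewards. The endpoint path and body shape follow the Beacon API spec.
function buildAttestationRewardsRequest(
  baseUrl: string,
  epoch: number,
  validatorIndices: string[],
): { url: string; method: string; headers: Record<string, string>; body: string } {
  return {
    url: `${baseUrl}/eth/v1/beacon/rewards/attestations/${epoch}`,
    method: "POST", // GET returns 404 -- the spec defines this as POST only
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(validatorIndices), // e.g. ["1344884"]
  };
}
```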
2026-03-10: Validator indices resolved from pubkeys:
- V1 (0x89ca...): index 1344884
- V2 (0xaf66...): index 1344886
- V3 (0x8d35...): index 1345223
- V4 (0xb936...): index 1345271
- V5 (0xa72e...): index 2176453
2026-03-10: Validator 5 (index 2176453) has effective_balance = 2,011,000,000,000 Gwei (2011 ETH) — this is a post-EIP-7251/Pectra consolidated validator already live on mainnet. MAX_EFFECTIVE_BALANCE (32 ETH) does NOT cap this validator's effective balance. This is directly relevant to RESEARCH.md Question 4 and means our dashboard must handle validators above 32 ETH correctly right now, not just hypothetically.
2026-03-10: For epoch 433117, ideal_rewards at 1 ETH increment: head=71, target=133, source=71. For 32 ETH: head=2299, target=4274, source=2303 (source slightly higher than head despite same weight=14 — this is because the participation rate for source is marginally higher than for head in this epoch).
2026-03-10: Epoch 433071, validator 1344884: inclusion_delay=2 (late by 1 slot). Beacon rewards API confirms: actual head=0, ideal head=2083, actual target=4273=ideal, actual source=2300=ideal. Confirms TIMELY_HEAD requires delay=1 exactly; delay=2 loses the entire head reward.
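The rule this confirms is all-or-nothing, and can be sketched as (hypothetical helper name; values from the epoch above):

```typescript
// TIMELY_HEAD pays only when the attestation is included exactly one slot
// after the attested slot (inclusion_delay === 1); any later inclusion
// forfeits the entire head component.
function headRewardForDelay(idealHeadGwei: number, inclusionDelay: number): number {
  return inclusionDelay === 1 ? idealHeadGwei : 0;
}
```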
2026-03-10: Epoch 433120, validator 1344884: missed attestation (beaconcha.in status=0, inclusionslot=0). The beacon rewards endpoint returned no data key for this epoch — likely because it was not yet finalized at query time. Non-finalized epochs may not support the rewards endpoint.
2026-03-10: Fetching all active validators via /eth/v1/beacon/states/head/validators?status=active_ongoing timed out after 120s on Chainstack — the response is too large (~1M validators). This confirms we cannot derive totalActiveBalance by summing all validator effective balances directly. Instead we back-compute it from the ideal_rewards endpoint: ideal_head_1_increment = brpi * 14/64 * (participating/TAB). For near-perfect participation epochs, participating ≈ TAB, so brpi ≈ ideal * 64 / 14.
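The back-computation can be sketched as follows. This is a simplified float sketch (the spec uses integer division and an integer square root), the constants are from the consensus spec, and it assumes the near-perfect-participation approximation described above.

```typescript
const EFFECTIVE_BALANCE_INCREMENT = 1_000_000_000; // Gwei (1 ETH)
const BASE_REWARD_FACTOR = 64;
const TIMELY_HEAD_WEIGHT = 14;
const WEIGHT_DENOMINATOR = 64;

// Back-compute totalActiveBalance (Gwei) from the ideal head reward for a
// single 1 ETH increment, assuming participating ~= totalActiveBalance.
function backDeriveTotalActiveBalance(idealHeadPerIncrementGwei: number): number {
  // ideal_head ~= brpi * 14/64  =>  brpi ~= ideal_head * 64/14
  const brpi = (idealHeadPerIncrementGwei * WEIGHT_DENOMINATOR) / TIMELY_HEAD_WEIGHT;
  // brpi = increment * factor / isqrt(TAB)  =>  isqrt(TAB) = increment * factor / brpi
  const sqrtTab = (EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR) / brpi;
  return sqrtTab * sqrtTab;
}
```

Feeding in the ideal head of 71 Gwei per increment from epoch 433117 yields a totalActiveBalance in the tens of millions of ETH, which is the right order of magnitude for mainnet.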
2026-03-10: The ideal_rewards include a participation-rate adjustment (they represent what a perfect validator would earn given the actual network participation), so back-deriving totalActiveBalance from them introduces ~1% error from participation rate < 100%. This is acceptable for the dashboard's reward computation and is documented as an accuracy trade-off.
2026-03-10: Single-epoch sanity check on epoch 433071, validator 1344884 (inclusion_delay=2, WRONG_HEAD): our computation says 2079 Gwei missed head, beacon truth says 2083 Gwei — 0.19% discrepancy. The 4 Gwei gap comes from the totalActiveBalance back-derivation rounding through the integer square root.
2026-03-10: Our getBaseReward() computed 9504 Gwei for a 32 ETH validator at epoch 433071. For a hypothetical 2048 ETH consolidated validator at the same epoch, the base_reward would be 2048/32 * 9504 = 608,256 Gwei — 64x larger. EIP-7251 doesn't change the formula, just allows higher effective_balance.
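The scaling can be sketched as follows; this mirrors the consensus-spec base-reward shape that the repo's getBaseReward() follows, with the brpi value implied by the numbers above (9504 / 32 = 297 Gwei per increment at epoch 433071).

```typescript
// base_reward = (effective_balance // EFFECTIVE_BALANCE_INCREMENT) * brpi.
// EIP-7251 leaves this formula unchanged; it only permits effective_balance
// above 32 ETH, which linearly scales the reward.
function baseRewardGwei(effectiveBalanceGwei: number, baseRewardPerIncrement: number): number {
  const EFFECTIVE_BALANCE_INCREMENT = 1_000_000_000; // 1 ETH in Gwei
  const increments = Math.floor(effectiveBalanceGwei / EFFECTIVE_BALANCE_INCREMENT);
  return increments * baseRewardPerIncrement;
}
```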
2026-03-10: Validator 2176453 (V5) is already a post-Pectra consolidated validator with effective_balance = 2011 ETH. Our code handles this correctly because we read effective_balance directly from the beacon state rather than capping at MAX_EFFECTIVE_BALANCE. The MAX_EFFECTIVE_BALANCE constant in constants.ts is documented but NOT used in getBaseReward() — the protocol-level cap is applied in the state itself.
2026-03-10: The backend package is configured to run src/index.ts, but the only Elysia bootstrap file currently in the repo is src/rewards/index.ts, so the planned Phase 3/4 orchestration has to start by fixing the application entrypoint layout before cache-backed routes can exist.
2026-03-10: The first Phase 3 timing run failed before any cache metrics were recorded because the configured Beacon RPC host (ethereum-mainnet.core.chainstack.com) returned ConnectionRefused from the sandboxed environment while resolving validator state by pubkey.
2026-03-10: The first successful Phase 3 timing baseline over epochs 433114-433123 took 43,212.73ms cold and 14,949.70ms warm with only 24 cache hits and 26 live misses on the second run, so the warm path was only 2.89x faster and clearly still leaking upstream work.
2026-03-10: After switching attestation hydration from per-validator-per-epoch fetches to one range fetch per validator, the next timing pass over epochs 433115-433124 improved cold time to 39,558.19ms but still left 20 warm misses, yielding 30 hits, 20 misses, and only a 2.76x warm speedup.
2026-03-10: Direct live probing showed beaconcha.in returned 429 Too Many Requests for GET /api/v1/validator/2176453/attestations?startEpoch=433115&endEpoch=433124, which explains why validator 2176453 never hydrated its cache rows under the earlier retry policy.
2026-03-10: The final Phase 3 timing run over epochs 433116-433125, after reducing beaconcha.in concurrency to 1 and increasing its retry budget to 5 attempts with 1s base backoff, took 52,444.97ms cold and 593.82ms warm with 50 hits and 0 misses on the second pass for an 88.32x speedup and 51,851.15ms saved.
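The retry budget can be sketched as a delay schedule; note the log fixes only the attempt count (5) and base delay (1s), so the exponential-doubling growth curve below is an assumption, not the repo's actual policy.

```typescript
// Hypothetical backoff schedule: n attempts, base delay, doubling each retry.
function backoffScheduleMs(attempts: number, baseMs: number): number[] {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}

// 5 attempts at a 1s base would wait out sustained 429s from beaconcha.in
// instead of failing rows early.
```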
2026-03-10: Compared with the first measurable baseline, the final warm-cache fetch improved by 14,355.88ms (14,949.70ms -> 593.82ms) and increased second-run cache coverage from 24/50 rows to 50/50 rows, while cold hydration became 9,232.24ms slower because the stricter beaconcha.in backoff waits out real provider rate limits instead of failing rows early.
2026-03-10: A local backend boot check hit EADDRINUSE on port 3001, so server startup verification had to be repeated on an alternate port rather than assuming the default port was free in the workspace.
2026-03-10: Bun/Elysia returned EADDRINUSE even when started with PORT=3101 and PORT=0, which indicates this workspace cannot currently verify local server binding behavior through a normal listen() call and the Phase 3 validation had to rely on typecheck plus direct script execution instead.
2026-03-10: tsc failed on the new root-level shared/contract.ts because module resolution looked for elysia in the workspace root instead of backend/node_modules; adding elysia as a root dependency fixed the cross-package contract import.
2026-03-10: treaty<App>() initially failed because frontend and backend resolved different physical copies of elysia, and Elysia's private dependencies field made the App types incompatible; pinning TS path resolution to the root node_modules/elysia fixed the contract bridge.
2026-03-10: Phase 4 runtime verification is still blocked in this workspace because bun run src/index.ts returned EADDRINUSE on fresh ports 3210 and 3211, so the new /api/* routes could be type-checked and linted but not exercised via local curl.
2026-03-10: 30d and 3m dashboard views were completely broken while 7d worked fine. Root cause: fetchValidatorAttestationsInRange made a single request to beaconcha.in with no limit or offset params. The API silently returns only its default page (≤100 records). 7d required 50 epochs — fits in one page. 30d required 225 and 3m required 675 — both exceeded the page size, so older attestations were absent from the map. The orchestrator saw null for every uncached epoch and classified them all as rpc_error missing rows.
2026-03-10: Fix — added limit=100 and offset pagination to fetchValidatorAttestationsInRange. The loop runs ceil(epochCount / 100) iterations and breaks early when a page comes back with fewer than 100 records. For 3m that's 203 pages per validator.
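The loop can be sketched as follows, with the page fetcher injected so it can be exercised offline; the real fetchValidatorAttestationsInRange is async and hits beaconcha.in with limit=100 and a moving offset, while this sketch is synchronous for clarity.

```typescript
const PAGE_SIZE = 100;

// Collect all rows for an epoch range by paging with limit/offset and
// breaking early on the first short page.
function collectAttestationPages<T>(
  epochCount: number,
  getPage: (limit: number, offset: number) => T[],
): T[] {
  const out: T[] = [];
  const maxPages = Math.ceil(epochCount / PAGE_SIZE); // 203 for 20,250 epochs
  for (let page = 0; page < maxPages; page++) {
    const rows = getPage(PAGE_SIZE, page * PAGE_SIZE);
    out.push(...rows);
    if (rows.length < PAGE_SIZE) break; // short page means no more data
  }
  return out;
}
```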
2026-03-10: The 10s DEFAULT_TIMEOUT_MS in beaconchainGet was too short for larger range responses. Attestation range fetches now use a dedicated ATTESTATION_TIMEOUT_MS = 30_000. Other beaconchain calls (single-epoch, checkpoint, etc.) keep the 10s default. beaconchainGet gained an optional timeoutMs param for this.
2026-03-10: BEACON_RPC_CONCURRENCY raised from 3 to 10. Cold-start for a 3m range requires ≥20,250 beacon-RPC calls (validator state + totalActiveBalance per epoch). At concurrency 3 that was ~45–90 s of sequential RPC work; at 10 it's ~15–30 s. The queue still gates actual parallelism so the beacon node isn't overwhelmed.
2026-03-10: The epoch range labels were wrong by a factor of ~30×. The assignment requires "7-day default, up to 3 months back" but the constants in use-epoch-range.ts were 50 / 225 / 675 epochs (≈ 5.3 hours / 1 day / 3 days). Correct values are 1,575 / 6,750 / 20,250 epochs (7 days / 30 days / 90 days). These were placeholders that were never corrected after development.
2026-03-10: MAX_EPOCH_RANGE on the backend was 675 — it would have immediately thrown a 400 error for any correctly-sized 30d or 3m request once the frontend was fixed. Raised to 90 * EPOCHS_PER_DAY = 20,250.
2026-03-10: To avoid magic numbers, exported EPOCHS_PER_DAY = 225 (= floor(86400 / 384)) from both shared/epoch.ts and backend/src/utils/epoch.ts. All range lookback values and MAX_EPOCH_RANGE are now derived from this constant, keeping them tied to the consensus spec rather than hardcoded integers.
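The derivation chain looks like this (the RANGE_EPOCHS naming is illustrative; the constants match shared/epoch.ts as described above):

```typescript
// Consensus timing: 12s slots, 32 slots per epoch -> 384s per epoch.
const SECONDS_PER_SLOT = 12;
const SLOTS_PER_EPOCH = 32;
const SECONDS_PER_EPOCH = SECONDS_PER_SLOT * SLOTS_PER_EPOCH; // 384

const EPOCHS_PER_DAY = Math.floor(86_400 / SECONDS_PER_EPOCH); // 225

// All lookbacks derive from EPOCHS_PER_DAY instead of hardcoded integers.
const RANGE_EPOCHS = {
  "7d": 7 * EPOCHS_PER_DAY, // 1,575
  "30d": 30 * EPOCHS_PER_DAY, // 6,750
  "3m": 90 * EPOCHS_PER_DAY, // 20,250
};
const MAX_EPOCH_RANGE = 90 * EPOCHS_PER_DAY; // 20,250
```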
2026-03-11: Plan 5b Architectural Review — The plan for background ingestion and DB-backed reads is sound and correctly identifies the request-path bottleneck. However, only ~25% (containment and chunked fetching) is currently implemented. The system still relies on on-demand upstream fetching for historical ranges.
2026-03-11: Critical Fix — /api/performance returned 503 with thousands of parse_error rows. Root cause: the beaconcha.in incomedetailhistory endpoint nests reward fields under an income key ({"income": {"attestation_source_reward": ...}, "epoch": ..., ...}), while our Zod schema expected them flat at the top level. Fixed in types.ts and endpoints.ts.
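The corrected shape can be sketched with a hand-rolled check (the repo uses a Zod schema; this dependency-free sketch shows the same nesting, with one representative reward field and sparse fields defaulted to zero):

```typescript
interface IncomeDetailEntry {
  epoch: number;
  income: { attestation_source_reward: number };
}

// Parse one incomedetailhistory entry: reward fields live under `income`,
// not at the top level, and omitted fields default to 0.
function parseIncomeEntry(raw: unknown): IncomeDetailEntry | null {
  if (typeof raw !== "object" || raw === null) return null;
  const entry = raw as Record<string, unknown>;
  const income = entry.income;
  if (typeof entry.epoch !== "number" || typeof income !== "object" || income === null) {
    return null; // flat or malformed payloads are rejected, not misread
  }
  const fields = income as Record<string, unknown>;
  return {
    epoch: entry.epoch,
    income: {
      attestation_source_reward:
        typeof fields.attestation_source_reward === "number"
          ? fields.attestation_source_reward
          : 0, // sparse payloads omit zero rewards
    },
  };
}
```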
2026-03-11: Removed balance field from BeaconchainIncomeDetailHistoryEntry. It was not present in the nested income object and is redundant as we already fetch it from the dedicated balancehistory endpoint.
2026-03-11: Ingestion worker returned 0/5 successes for newly finalized epochs because beaconcha.in has a ~2 epoch indexing lag behind the chain's finalized head. A 400 error or empty data is returned if queried too soon. Fixed by adding a BEACONCHAIN_LAG_EPOCHS = 2 safety offset so the worker ingests finalized - 2.
2026-03-11: incomedetailhistory and balancehistory endpoints on beaconcha.in return a 400 ERROR when limit=1 is used with a batch of multiple validators (e.g., 5). The limit parameter controls the total number of rows returned, not the number of epochs. Fixed by multiplying the limit by the validator batch size (chunkEpochCount * validatorBatch.length).
2026-03-11: A 503 Service Unavailable with upstream_400 (101250 rows) on the /api/performance endpoint for a 3-month range query was directly caused by the above limit parameter bug. The batch multiplied the chunk size resulting in a limit param of >500, which exceeds beaconcha.in's hard max of 100. Fixed by computing epochsPerChunk = Math.floor(100 / validators) so the requested limit never exceeds 100.
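The chunk-sizing fix can be sketched as:

```typescript
// beaconcha.in's limit param caps TOTAL rows (epochs x validators) at 100,
// so epochs per chunk must shrink as the validator batch grows.
const BEACONCHAIN_MAX_ROWS = 100;

function epochsPerChunk(validatorCount: number): number {
  return Math.floor(BEACONCHAIN_MAX_ROWS / validatorCount);
}

function requestedLimit(validatorCount: number): number {
  // rows actually requested per call; never exceeds the hard max of 100
  return epochsPerChunk(validatorCount) * validatorCount;
}
```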
2026-03-11: Core architectural decision — the backend will use a materialized 90-day cache in SQLite, backed by live beaconcha.in history endpoints plus Beacon RPC as the source of truth. The dashboard must not wait for an oldest-first worker to eventually reach the requested range. Instead:
- latest 7d is a startup readiness requirement
- the remaining 83d bootstrap runs in the background and persists progress
- request-time reads are always SQLite-first
- cold 7d/30d requests may synchronously fill missing historical chunks and persist them
- once the 90d bootstrap is complete, every later user/device reads the same durable cache instantly
2026-03-11: Main obstacle — the current ingestion worker still couples three different jobs into one oldest-first loop: cold bootstrap, user-facing cache warmup, and steady-state finalized sync. This produced unacceptable UX after restart because the default 7d dashboard could remain degraded while the worker was backfilling from the oldest epoch in the 90d window.
2026-03-11: Product-level consequence of the above coupling — a cold backend could require roughly 20+ hours before the default 7d view became fully exact if users only waited for the oldest-first background worker. This is not a provider requirement; it is a queue-ordering bug in our serving architecture.
2026-03-11: Request-path obstacle — the refactor correctly made /api/performance cache-backed and honest about degradation, but it became too conservative by only warming a tiny number of missing epochs on demand. That preserved correctness but regressed the old behavior where recent historical windows could be shown in seconds from direct bulk history fetches.
2026-03-11: Protocol obstacle — validator 2176453 was pre-activation for the earliest part of the 90d window (activation_epoch = 415859, warm window start around 413006). Treating those rows as not_found caused epochs to stay partial forever and trapped the oldest-first worker on the same range. Fixed by adding a lifecycle-aware not_applicable state that is excluded from retries, degradation, and the coverage denominator.
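The lifecycle check can be sketched as (hypothetical helper name; the epoch numbers are the ones from the entry above):

```typescript
// Epochs before a validator's activation are not_applicable rather than
// not_found, so they never count against coverage, never degrade responses,
// and are excluded from retry queues.
type EpochApplicability = "applicable" | "not_applicable";

function classifyEpoch(epoch: number, activationEpoch: number): EpochApplicability {
  return epoch < activationEpoch ? "not_applicable" : "applicable";
}
```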
2026-03-11: Upstream obstacle — beaconcha.in historical endpoints are usable for the assignment, but the provider has strict request and pagination behavior:
- incomedetailhistory payloads are sparse and require defaulting omitted reward fields to zero
- batch history limit is capped at 100 total rows, not 100 epochs per validator
- freshly finalized epochs lag the chain head by ~2 epochs
- the available key behaves close to 1 request/second under sustained load
These constraints make "recompute 90d from scratch on every request" operationally wrong. They reinforce the materialized-cache design.
2026-03-11: Serving decision after the above debugging — the simplest robust path is:
- make latest 7d warm before the dashboard is considered ready
- bootstrap newest->oldest so the user-facing window is served first
- keep a durable 90d cache on disk
- after bootstrap, switch the worker to append-only finalized sync plus targeted gap repair
This preserves alignment with context/ASSIGNMENT.md: live data, local caching, explicit missing epochs, and independent frontend/backend deployment.