Lodestar Weekly Standups 2026
The Lodestar team hosts planning/standup meetings weekly on Tuesdays at 3:00pm Coordinated Universal Time (UTC). These meetings allow the team to conduct release planning, prioritise tasks, sync on current issues and implementations, and provide status updates on the Lodestar project. For more information or to participate in the public meetings, see https://github.com/ChainSafe/lodestar/discussions/categories/standups
Note that these notes are transcribed and summarized by AI language models and may not accurately reflect the context discussed.
Agenda: https://github.com/ChainSafe/lodestar/discussions/8741
Recording/Transcript: https://drive.google.com/drive/folders/10ojhBrviq5qgDXnqIKqcC4jQNDojbYjS
Lodestar standup on 20 Jan 2026 focused on pnpm migration fallout, Fast Finality implementation status, Lodestar Z + nAPI progress, backfill work, and a proposed "connected validators" API/logging strategy.
- The Yarn → pnpm change removed the incidental `node_modules/.bin/lodestar` symlink that some downstream Docker-based deployments (e.g. an EF wrapper Dockerfile) were calling directly, because pnpm does not hoist all bin scripts the way Yarn did.
- The team agreed this behavior was never an official public interface: the supported entrypoint is the repo-root `./lodestar` wrapper, which is independent of package-manager internals.
- They decided not to add `@chainsafe/lodestar-cli` as a root dependency or introduce a root `bin` solely to recreate the old symlink; instead, this is treated as not product-breaking but operationally breaking and will be handled via release notes telling source builders and derivatives to:
  - Nuke existing Yarn-structured `node_modules` and caches before running `pnpm install`.
  - Switch scripts from `node_modules/.bin/lodestar` to the top-level `./lodestar` entrypoint.
- Chiemerie will run playbooks to delete old source trees and `node_modules` on the team's internal hosts to prevent mixed Yarn/pnpm trees; developers are asked to rebase branches on pnpm and use the standard `make start-beacon-and-validators` playbook.
- A Docker build CI check still does not exist; adding a Docker build job remains an open action item to prevent future image regressions.
- Discussion of 1.39 metrics was deferred because the server used for A/B tests went down quietly last week.
- The Lodestar Z metrics PR should be merged and exported via nAPI as a `getMetrics()`-style string, similar to the current network thread metrics export.
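As a rough illustration (not the final API), the JS side might consume such an export along these lines; the `getMetrics` name, the `LodestarZBindings` shape, and the merge step are assumptions based on the discussion:

```ts
// Hypothetical shape of the Lodestar Z nAPI metrics export; names are illustrative only.
interface LodestarZBindings {
  /** Prometheus text-format metrics collected on the native (Zig) side. */
  getMetrics(): string;
}

// One way the native metrics string could be merged into the beacon node's existing
// scrape output, mirroring how network-thread metrics are concatenated today.
async function scrapeAllMetrics(
  native: LodestarZBindings,
  collectJsMetrics: () => Promise<string>
): Promise<string> {
  const jsMetrics = await collectJsMetrics();
  return `${jsMetrics}\n${native.getMetrics()}`;
}
```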
- Nazar reported on the historical JS-level "state diff" feature branch: fully finishing it in the current JS stack would cost ~3–4 weeks of work.
- Given the roadmap, the team prefers not to complete the JS implementation and instead:
  - Merge the two existing open PRs into a dedicated feature branch.
  - Archive that branch as a design and code reference for a future native implementation of state diff within Lodestar Z.
- Rationale:
  - The state transition is moving natively; duplicating work in JS now would delay higher-priority items like Fast Confirmation timelines.
  - Nazar has implemented state diff twice already and expects the native version to be faster to build with the existing knowledge when the time is right.
- Matt asked whether native state diff might be needed early for native state storage; consensus was that it is a valuable feature but should come after the current production-driven priorities.
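For context, a very rough illustration of the underlying idea (diffing the serialized bytes of two states, as framed elsewhere in these notes). This is illustrative only and not the algorithm used in the feature branch:

```ts
// Illustrative only: record contiguous byte ranges that changed between two serialized
// states, then re-apply them to reconstruct the newer state from the older one.
interface ByteDiff {
  offset: number;
  bytes: Uint8Array;
}

function computeDiff(prev: Uint8Array, next: Uint8Array): ByteDiff[] {
  const diffs: ByteDiff[] = [];
  const len = Math.max(prev.length, next.length);
  let start = -1;
  for (let i = 0; i <= len; i++) {
    const differs = i < len && prev[i] !== next[i];
    if (differs && start === -1) start = i;
    if (!differs && start !== -1) {
      diffs.push({offset: start, bytes: next.slice(start, i)});
      start = -1;
    }
  }
  return diffs;
}

function applyDiff(prev: Uint8Array, diffs: ByteDiff[], nextLength: number): Uint8Array {
  const next = new Uint8Array(nextLength);
  next.set(prev.subarray(0, Math.min(prev.length, nextLength)));
  for (const {offset, bytes} of diffs) {
    next.set(bytes, offset);
  }
  return next;
}
```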
- Navie re-watched the ACDT call and reported rough consensus to freeze the devnet-0 spec, with a few open PRs on the consensus spec repo whose inclusion status is still unclear.
- On Lodestar's side:
  - Navie is still addressing review comments on the fork-choice PR.
  - A draft PR implements stake-builder deposit and withdrawal logic and passes state-transition spec tests, though local issues were found; Navie does not fully trust current spec test coverage and will iterate before requesting full review.
  - Gossip validation changes from recent P2P spec tweaks still need to be reflected in the Lodestar gossip PR.
- Timeline:
  - Navie reiterated an end-of-February 2026 target for Lodestar's devnet-0 implementation readiness.
  - Testing call feedback: Prysm appears ahead but is also skeptical that end-February is realistic; Lodestar's incremental approach may place it ahead of other clients, but cross-client readiness could slip beyond that date.
- Lodestar is being used as one of the consensus clients in external testing for BAL devnets; testing teams expressed appreciation for Lodestar's support and infra.
- A dedicated `napi` branch in the `lodestar-z` repo hosts ongoing bindings work by Cayman, Rahul, and Jeff.
- Current nAPI bindings:
  - Global singletons for the pubkey cache, pool, and config.
  - A `BeaconStateView` binding that can deserialize a state from SSZ bytes into a tree and exposes ~half of the planned getters (slot, latest block header, etc.).
  - State transition and other native paths are still being wired in; there are no hard blockers, but the exact shape is still forming.
- Performance notes:
  - Creating/caching a beacon state is costly:
    - Public-key decompression is expensive (already a bottleneck in JS Lodestar).
    - Deserializing a full state into a tree plus building a cached beacon state currently takes ~8 seconds on Cayman's machine, assuming the pubkey index is already cached.
  - Cayman added a feature in Lodestar Z to save and load pubkey caches from disk, reducing startup from ~30s to ~100ms for repeated runs; integration into production Lodestar is still to be evaluated.
- Design trade-offs:
  - Currently each `BeaconState` property getter is a separate nAPI call; this is correct but potentially chatty.
  - Future optimization options discussed (see the sketch below):
    - Prepopulate immutable or rarely changing fields as cached JS-side values rather than repeated native calls.
    - Introduce more aggregated getters where profiling shows call overhead is non-trivial.
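A minimal sketch of the "prepopulate immutable fields" idea, assuming a hypothetical native view with per-field getters; the interface and wrapper names below are illustrative, not the actual binding API:

```ts
// Hypothetical low-level native view: every getter crosses the nAPI boundary.
interface NativeBeaconStateView {
  getSlot(): number;
  getGenesisTime(): number;
  getGenesisValidatorsRoot(): Uint8Array;
}

// JS-side wrapper that reads immutable fields once and memoizes them, so only genuinely
// changing fields keep paying the per-call nAPI overhead.
class BeaconStateViewWrapper {
  private readonly genesisTime: number;
  private readonly genesisValidatorsRoot: Uint8Array;

  constructor(private readonly native: NativeBeaconStateView) {
    // Immutable for the lifetime of the chain: fetch once at construction.
    this.genesisTime = native.getGenesisTime();
    this.genesisValidatorsRoot = native.getGenesisValidatorsRoot();
  }

  get slot(): number {
    // Mutable field: still delegated to the native side on every read.
    return this.native.getSlot();
  }

  getGenesisTime(): number {
    return this.genesisTime;
  }

  getGenesisValidatorsRoot(): Uint8Array {
    return this.genesisValidatorsRoot;
  }
}
```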
- Cayman described a proposed refactor, the "STF comptime fork":
  - Today, Lodestar Z mirrors JS by using runtime fork dispatch for forked types (beacon state, block, execution payload, etc.), checking the fork type in many operations inside the transition.
  - In reality, a node runs in one fork at a time, with rare fork boundaries; most work is in the current fork.
  - Proposal: make `fork` a comptime parameter in Zig and use a fork-wrapper type to resolve the concrete types for that fork, so each function will have 6 or 7 fork-specific versions, but each one will run faster because it doesn't have to dynamically check its fork type each time.
  - Consequences:
    - Pros: removes repeated runtime fork checks, avoids mismatches between beacon state / block / payload header fork types in code, and more closely models the actual invariants.
    - Cons: multiple compiled versions (one per fork) for many functions, but the number of forks is small (~6–7), so binary size growth is acceptable.
  - A top-level dynamic fork value would still exist to choose the fork once, then call into the fully specialized transition for that fork.
  - Discussion acknowledged nested fork-specific operations; Cayman's model uses a single dynamic switch at the top and a purely fork-specific interior, avoiding nested runtime fork switches.
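The actual proposal relies on Zig comptime, but the dispatch-once shape can be loosely illustrated in TypeScript. Everything below is illustrative: the fork names, state shapes, and functions are not the real Lodestar types.

```ts
// Illustrative fork names and per-fork state shapes (not the real Lodestar types).
type ForkName = "electra" | "fulu" | "glamsterdam";

interface StateByFork {
  electra: {fork: "electra"; slot: number};
  fulu: {fork: "fulu"; slot: number};
  glamsterdam: {fork: "glamsterdam"; slot: number};
}

// One fully fork-specific transition per fork: the interior never re-checks the fork.
const stateTransitionByFork: {
  [F in ForkName]: (state: StateByFork[F], blockBytes: Uint8Array) => StateByFork[F];
} = {
  electra: (state, _blockBytes) => ({...state, slot: state.slot + 1}),
  fulu: (state, _blockBytes) => ({...state, slot: state.slot + 1}),
  glamsterdam: (state, _blockBytes) => ({...state, slot: state.slot + 1}),
};

// The only dynamic fork decision happens once, at the top level.
function stateTransition<F extends ForkName>(
  fork: F,
  state: StateByFork[F],
  blockBytes: Uint8Array
): StateByFork[F] {
  return stateTransitionByFork[fork](state, blockBytes);
}
```

In Zig, the same shape would be expressed with a `comptime fork` parameter so the per-fork versions are generated at compile time rather than selected from a lookup table.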
- Recently merged:
  - The Tree-view PR landed in `main` in Lodestar Z, a large structural change enabling more efficient, structured access to SSZ trees of the beacon state.
- In progress:
  - Kai:
    - Cleaned up memory leaks on an error-handling branch using `errdefer` and other resource-management patterns.
    - Added benchmarks that show process attestation dominates runtime due to many tree-view reads for slashing protection data; two performance issues were opened to explore caching strategies.
    - Rebasing the load state PR, which is required for Lodestar integration.
  - Bing:
    - Reviewed Rahul and Jeff's bindings PRs.
    - Refactoring a large `utils` directory for better structure post tree-view.
    - Finalizing the metrics PR and resolving conflicts so metrics can be exposed and then bound through nAPI.
  - Jeff:
    - Implemented `inner_shuffle_list` and addressed review feedback.
    - Adding more `getValidator` and other getters in the nAPI bindings; hit issues with large data around withdrawal epochs that require follow-up fixes.
    - Starting on a "clock" abstraction for Lodestar Z, prompted by a co-working session with Matt.
- Timers and I/O:
  - Matt, Jeff, and Kai highlighted timers as a major architectural question: current JS Lodestar uses timers widely (gossip, libp2p, slots), but Zig has no I/O layer yet (waiting on version 0.16).
  - One suggestion is integrating libuv for the event loop, callback queue, and worker pool, giving a robust timing and async work foundation; prior Docker/cgroups issues with libuv need re-evaluation.
  - This is recognized as a multi-month concern that must not block other Lodestar Z work; the clock PR will be the first visible product of this exploration.
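To make the "clock" abstraction concrete, here is a minimal sketch of what such a slot clock computes. Mainnet constants are assumed and the class is an illustration of the concept, not the Lodestar Z design:

```ts
// Mainnet timing constants (assumed for illustration).
const SECONDS_PER_SLOT = 12;
const SLOTS_PER_EPOCH = 32;

// Minimal slot clock: derives slot/epoch from genesis time and schedules per-slot work.
class SlotClock {
  constructor(private readonly genesisTimeSec: number) {}

  currentSlot(nowMs: number = Date.now()): number {
    const elapsedSec = Math.floor(nowMs / 1000) - this.genesisTimeSec;
    // Clamp pre-genesis time to slot 0.
    return elapsedSec < 0 ? 0 : Math.floor(elapsedSec / SECONDS_PER_SLOT);
  }

  currentEpoch(nowMs: number = Date.now()): number {
    return Math.floor(this.currentSlot(nowMs) / SLOTS_PER_EPOCH);
  }

  /** Milliseconds until the start of the next slot; useful for per-slot timers. */
  msUntilNextSlot(nowMs: number = Date.now()): number {
    const nextSlotStartSec = this.genesisTimeSec + (this.currentSlot(nowMs) + 1) * SECONDS_PER_SLOT;
    return nextSlotStartSec * 1000 - nowMs;
  }
}
```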
- Tuyen:
- Opened PRs simplifying gossip message validation and consuming shuffling from node (awaiting final approvals).
- Consolidated work into a feature branch; static integration currently fails with ~10 compile errors, targeted for resolution by tomorrow to produce a definitive latest BeaconState layout that others can depend on.
- Validator pubkey cache integration:
- In Lodestar Z, there is a single pubkey cache implementation; Lodestar intends to consume that from Lodestar Z rather than maintaining distinct caches for JS vs native, but the exact wiring is still being designed.
- Vedant:
  - Focused on debugging end-to-end tests for backfill logic, discovering inconsistencies between the backfill DB and block archive (missing blocks and gaps).
  - Adjusted `getDevBeaconNode` testing utilities: when no anchor state is passed, the current behavior always rebuilds from genesis, which is wrong for "restart from partially backfilled DB" tests.
  - Introduced an `initStateFromDb` path to initialize from existing DB state when restarting, allowing correct test coverage of partial backfill and restart flows (a sketch follows below).
  - The large backfill PR (~2,000 lines) is under Matt's review; Vedant requested additional review, particularly on the testing architecture.
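A rough sketch of the testing flow described above. The `initStateFromDb` option name follows the discussion, but the exact shapes and signatures here are assumptions:

```ts
// Hypothetical option and node shapes, loosely following the getDevBeaconNode utility
// described above; the real names and signatures may differ.
interface DevBeaconNodeOpts {
  anchorState?: unknown;      // explicit anchor state, when a test provides one
  initStateFromDb?: boolean;  // restart path: initialize from the existing DB state
}

interface DevBeaconNode {
  close(): Promise<void>;
}

type GetDevBeaconNode = (opts: DevBeaconNodeOpts) => Promise<DevBeaconNode>;

// "Restart from a partially backfilled DB" scenario: without initStateFromDb the helper
// always rebuilds from genesis, so the second run could never exercise the resume path.
async function restartFromPartialBackfill(getDevBeaconNode: GetDevBeaconNode): Promise<void> {
  // First run: start from genesis, backfill part of the chain, then shut down.
  const firstRun = await getDevBeaconNode({});
  await firstRun.close();

  // Second run: initialize from the DB left behind by the first run.
  const secondRun = await getDevBeaconNode({initStateFromDb: true});
  await secondRun.close();
}
```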
- Context: Infrastructure has a request for a tool exposing which validator indices are currently connected to a beacon node, to prevent issues (e.g. slashing conditions) when migrating validators.
- Operational need (from Chiemerie / DevOps):
  - When shifting validators between nodes or using fallback nodes, infra needs near-real-time, machine-readable confirmation of which validators are attached to which beacon nodes to avoid double attestation and to perform safe changeover.
- The current validator monitor has a 3βepoch removal delay, making it ~18 minutes before disconnected validators disappear, which is too slow for operational tooling.
- Options/requirements discussed:
  - An API endpoint returning validator indices per node (intended to be polled epoch-by-epoch or every minute and fed into a custom dashboard).
  - Faster convergence than 3 epochs; something like "within 1 epoch" or "within several slots" was considered acceptable.
- Design ideas:
  - Use the proposal cache (which tracks epoch) rather than the slower-updating validator monitor; the API could expose only the latest epoch's data (or the last 1–2 epochs) to reduce lag.
- Implementation options:
  - Log-based: emit per-epoch log lines listing connected indices; use Loki to query one epoch at a time for dashboards. The team reconsidered earlier concerns: volume per epoch is limited enough that Loki queries are likely fine.
  - Metric-based: a single metric with a large label containing indices; technically allowed, but considered less attractive than structured logs or a JSON API (see the sketch after this list).
- Security/infra:
- Public API exposure is not desired; typical deployment would allow internal polling from a restricted IP via firewall rules, or rely on log aggregation (Loki) that already collects beacon logs.
- Next steps:
  - Philip suggested opening a direct discussion with Chiemerie to capture the exact operational workflow, constraints (multi-cloud, validator/beacon separation), and whether an internal HTTP API or Loki-based approach better fits the current infra.
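A rough sketch of the two shapes being compared. The response fields, logger shape, and log format are assumptions for illustration, not a decided design:

```ts
// Hypothetical response shape for an internal "connected validators" endpoint,
// intended to be polled roughly once per epoch or per minute.
interface ConnectedValidatorsResponse {
  epoch: number;              // epoch the data was observed in (latest, or last 1-2 epochs)
  beaconNode: string;         // which beacon node answered, for multi-node dashboards
  validatorIndices: number[]; // validator indices seen connecting within that epoch
}

// Log-based alternative: one structured line per epoch, queryable in Loki one epoch at a time.
function logConnectedValidators(
  logger: {info: (msg: string, context: Record<string, unknown>) => void},
  epoch: number,
  validatorIndices: number[]
): void {
  logger.info("Connected validators", {epoch, count: validatorIndices.length, validatorIndices});
}
```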
- Performance reviews: Matt reminded everyone to complete performance review forms by the end of the week; peer feedback is encouraged and expected.
- Interop / visas: The interop attendance list has been sent; Matt is coordinating with the EF on Twe's visa, and asked whether anyone else needs visa support (none reported).
- Security hygiene: Team reminded to enable forced commit signing on GitHub; anyone who has not yet done so is asked to enable it.
Agenda: https://github.com/ChainSafe/lodestar/discussions/8727
Recording/Transcript: https://drive.google.com/drive/folders/1881nmUIk8mqDoIrrkGyHncdaf2HR-D-M
The team successfully merged the PNPM migration, with most developers adjusting their automated systems to work with the new workflow.
Outstanding Issues:
- One remaining issue with SeaMonkey on Lodestar General channel, being resolved by Nazar
- Team confirmed access to EF Steel Server (DevOps Discord) for coordination with ETH Panda Ops
Security Recommendation:
- Phil strongly encouraged all team members to enable forced commit signing on GitHub to display verified commits and prevent impersonation attacks
- A ChainSafe developer was recently impersonated on GitHub
- Blog post on supply chain attack mitigation coming soon
Release Schedule:
- RC (Release Candidate) scheduled for Friday, January 17th
- Currently minimal content in the release pipeline, primarily Obol updates from Nico
- Deadline: Anyone with features requiring reviews for v1.39 should notify Phil immediately
Status:
- No progress on capturing core dump to debug QUIC segfaults
- Matt attempted to complete this yesterday but was unsuccessful; will try again tonight or tomorrow
Spec Updates:
- Minor spec changes last week, not significant enough for blog post publication
- Bug fix in partial withdrawal sweep merged
- Discussion on builder bid and withdrawal sweep for builder balance
Fork Choice Implementation (NC):
- Major PR: First draft of EPBS fork choice implementation completed
- Critical Risk: This PR rewrites a significant portion of the fork choice logic in Lodestar, making it "a little bit dangerous"
- Key Difference from Other EPBS Work: Unlike state transition code (which is guarded by fork-specific conditionals), fork choice code is "intertwined with the current production code" with minimal isolation between forks
Deployment Strategy:
- Team agreed to merge if confident, with extensive testing:
- Spec tests
- Feature group deployment on Fulu testnet
- Mainnet deployment with monitoring
- Collect feedback iteratively
- Concern: Long-running feature branches like previous Fulu branch create merge conflicts
- Preference: Short-lived branches for urgent deadlines vs. long-maintained branches with regular commits
- Timeline: DevNet-0 timeline still not announced; waiting to assess urgency
Testing Infrastructure Needs:
- Current test infrastructure doesn't support EPBS testing
- Need builder integration for Fulu fork groups
- Plan to adapt the existing Electra-to-Fulu transition PR to test only two forks (Fulu → Glamsterdam)
ZAPI (N-API Zig) Library:
- Cayman and Nazar completed review call on higher-level interfaces
- Decision: Nothing blocking moving forward with lower-level library for bindings
- Trade-off: More code required, but provides visibility into what's happening (easier debugging vs. QUIC's thousands of lines of auto-generated hidden code)
- Nazar will prototype intermediate/higher-level interface ideas
Lodestar TypeScript Refactoring (Tuyen):
- Three PRs ready for review:
- Two remaining blockers:
  - Refactor shuffling cache (Matt and Rahul working on this)
  - Remove dependency on calling fork-specific APIs like `clone` and `commit`
- Big Integration PR: After these 5 PRs merge, Tuyen will create a large branch converting everything to the `CachedBeaconStateView` interface
- Goal: Complete before Tuyen's vacation (~mid-Feb).
The team had an extensive discussion about improving PR review culture and notifications:
Current Challenges:
- GitHub notifications can be noisy for developers following many repos
- Not everyone proactively checks PR notifications daily
- Time-sensitive/blocking PRs need better visibility
Proposed Solutions:
- Culture shift: Proactively check two main repos (Lodestar and Lodestar-Z) daily for new PRs
- Urgent PRs: Post in the core Lodestar team's Discord `lodestar-private` channel when reviews are blocking
- Mobile workflow: Nazar suggested using the GitHub mobile app to triage notifications immediately
- Fresh start: Team discussed clearing all stale notifications to start with a clean slate
Agreement:
- Daily proactive review culture for main repos
- Discord notifications for urgent/blocking PRs
- Post PR links in lodestar-private channel when reviews needed
Major Progress:
- Cayman and Kai (grapebaba) tag-teaming refactor of state transition to use Tree Views since end of last week
- Completed: Updated the `BeaconStateAllForks` struct → the entire state transition (processing slots, blocks, epochs)
- Current Phase: Working through spec test failures (past the compilation phase, now runtime failures)
- Goal: Finish this week to avoid long-running vulnerable state with large open PR
Design Philosophy:
- Minimal patch approach: Did not optimize or redesign Tree View internals before integration
- Rationale: Didn't have enough information to understand how design affects implementation without actual usage
- Next Phase: After refactor is completed, team will have data to inform Tree View optimizations and redesigns based on real performance and semantic impacts
Tree View Implementation Discussion:
- Team planned to discuss Tuyen's and Kai's competing Tree View approaches on Thursday roadmapping call
- Kai's approach: Focuses on memory ownership management (not optimization), making ownership UX more correct
- Current state: Using original design to make spec tests work first, then will evaluate improvements from working baseline
The team had a lengthy debate about priorities for Nazar's state diff work:
Arguments For:
- Feature provides performance optimization for state storage
- Already significant work invested with multiple PR iterations
- Can be implemented as feature flag (opt-in, not default behavior)
- Doesn't depend on state layout/algorithm changes; only needs the raw bytes between two states
Arguments Against (Cayman's position):
- Not on critical path for 2026 goals discussed at Buenos Aires retreat
- Adds additional constraint when focus should be on Zig state transition integration and migration
- Similar to gossipsub partial messages: adds complexity at an inopportune time
- "Nice to have" optimization vs. required feature
- Risk of premature optimization that locks in backwards compatibility before Zig transition is complete
Historical Context:
- Previous workflow issue: A large PR was split into smaller pieces for review, but this created a moving-target branch that takes longer to review
- Team acknowledged this stop-start-restart approach was challenging for Nazar
- Different from N Historical States approach: That work was fully completed, then merged piece-by-piece into unstable protected by feature flags
Decision:
- No immediate decision made
- Team will "sleep on it" and revisit next week
- Nazar to evaluate remaining effort (3-4 days vs. more)
- If minimal effort, may proceed; if substantial, defer until later in 2026 when bandwidth increases
Timeline Projection:
- Matt projected Zig state transition could be ready for merging end of February to end of March, possibly before end-of-April interop
- Ambitious goal: Native state transition running on DevNets for Glamsterdam (even if not DevNet-0)
- This would represent "knock it out of the park" achievement
v2.0 Major Version Considerations:
- If the state transition on mainnet becomes the point of the major upgrade to v2.0, there's an entire GitHub milestone of breaking changes to consider implementing simultaneously
- Phil reminded team to keep this in mind for planning
Cayman
- Completed ZAPI low-level interface review with Nazar
- Tag-teaming Tree View integration with Kai; state transition refactor now compiling and running spec tests
Tuyen
- Merged latest mainnet changes into Tree View branch
- Enhanced `clone` API reviewed and merged by Bing
- Two PRs ready for review (remove pre-Electra block production, generalize beacon state repository)
Bing
- Pair review with Cayman on Tree View implementation was fruitful
- Discovered misnamed tests (integration tests that should be module tests)
- Opened cleanup PRs to reorganize test structure
- Metrics PR ready for review (will use Era file blocks, not urgent)
- Opened PR to simplify bit list and bit vector types (can be delayed until Tree View merges to avoid conflicts)
Jeff
- Updated blinded block tests
- PR reviewed by Tuyen and Bing
- Working on a bug discovered this morning; knows the root cause and is fixing it
Vedant
- Struggling with backfill test scenarios affecting implementation logic
- Fixing bugs iteratively on both testing and implementation sides
- Scheduled call with Matt to resolve issues
Nazar
- Resolved PNPM issue with SeaMonkey on Lodestar General
- Proposal: Add CI pipeline to check Docker builds (external team caught Docker build failure that CI missed)
- Standalone CI job (not blocking build/test pipeline)
NC
- Completed first draft of EPBS fork choice implementation (major PR)
- Will fix remaining spec test failures in the PR
Matt
- Coordinating Zig/Glamsterdam work timing for Thursday discussion
- Matt: Capture QUIC core dump for debugging (tonight or tomorrow)
- Phil: Submit interop attendance list (confirmed: no blockers from main team except Matt)
- Team: Enable forced commit signing on GitHub
- Team: Post urgent/blocking PRs in Discord private channel
- Nazar: Add Docker build check to CI pipeline
- Nazar: Evaluate remaining effort for state diff feature for next week's discussion
- Cayman & Kai: Continue Tree View integration; target completion this week
- Tuyen: Get PRs reviewed for beacon state refactoring
Agenda: https://github.com/ChainSafe/lodestar/discussions/8699
Recording/Transcript: https://drive.google.com/drive/folders/1OlxCfZnQPyelKVETeBPeIzdEKBE1pOX6
Limited changes in the current diff:
- Primarily Obol updates from Nico completed over holidays. No rush for immediate release; willing to wait for additional features nearing completion.
- RC target: January 16th (end of next week) if additional features ready. No devnet timeline yet for Glamsterdam implementations; specs still in flux.
Spec Status:
- Stake builders PR merged (24 hours before standup; no teams have implemented yet).
- Minor refactoring & fork choice PRs on consensus specs repo over holidays (loophole closure regarding proposals in EPBS reorg scenarios).
- Justin is preparing a new v1.7.0-alpha.0 spec release.
- No date revealed for devnet-0; more planning is anticipated on this week's ACDC.
- Many refactoring PRs pending especially around partial withdrawal sweep and deposit request handling.
- NC not blocked on implementation; fork choice work proceeding. Tracking progress through Glamsterdam tracking epic issue: https://github.com/ChainSafe/lodestar/issues/8439
Implementation Status:
- No implementations started for stake builders (merged too recently); Prysm particularly blocked awaiting spec PR merge.
- EPBS Breakout room cancelled; focus shifted to ACD discussions Thursday.
- Infrastructure restructure playbook deployments completed with new hostname standardization.
- 20 Hoodie validators accidentally slashed during deployment (joke: testing slashing mechanism works).
- Feature groups (1-4) recovering; all operational within one day.
- Metrics now easier to read with clear differentiation between super-node and subscribe-all-subnet performance.
- Feature 1 still problematic due to QUIC deployment crashes from segfaults discussed in late 2025.
  - Matt and Cayman plan to capture a core dump this week to debug. If the issue lies within `napi-rs`, the team may abandon that approach.
- ZAPI Updates: Cayman completed work on "ZAPI" (formerly Napi-Zig) over the break. The library now exposes ~100% of the N-API surface.
- The current state is a low-level thin wrapper.
- Discussions are ongoing regarding the design of high-level interfaces to ensure memory safety and ease of use.
- Cayman suggested potentially scrapping the current high-level automatic wrapper design in favor of a better approach.
- TreeView Integration:
  - A significant decision point exists regarding integrating TreeView into `beacon-state`.
  - Two different implementation approaches (one from Tuyen, one from Kai) need to be synthesized to resolve issues with synchronizing updates across child views.
  - Goal: The team aims to integrate TreeView into `beacon-state` optimistically by the end of January. This is a foundational step for both the state transition refactor and the bindings work.
  - It was clarified that work on bindings can start in parallel with TreeView integration, though final merging will depend on the Tree View implementation being correct.
- Low-Level vs. High-Level: Cayman confirmed that Zappy now exposes 100% of the N-API surface as a thin, low-level wrapper. However, the team decided that the current implementation of the "automagic" high-level interface (which uses a hint system for type wrapping) is likely the wrong design direction.
- Decision: The team agreed to likely scrap or deprecate the current high-level wrapper code in favor of a redesign. For the immediate future, developers should use the low-level interface, while Nazar and Cayman work offline to design a better high-level abstraction layer that minimizes memory leak risks.
Dependency on Tree View Data Structures
- A major architectural decision was made regarding what data structure the bindings should actually wrap.
- The Problem: Currently, the codebase operates on a `struct` version of the beacon state (`BeaconStateAllForks`). However, the long-term goal is to use a "Union of Tree Views".
- The Decision: The bindings integration should be built directly on top of the Tree View foundation rather than the current struct implementation. The goal is to have both the JavaScript bindings and the Zig state transition logic build from the same underlying data structure.
- Implication for Setters: This shift requires changing how state updates are handled. Instead of updating pointers to data (current struct approach), the bindings must use explicit function call setters, as the Tree View approach does not maintain direct pointers to the underlying data.
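A small illustration of the shift described above, using invented names; the real Tree View and binding interfaces will differ:

```ts
// Struct-style access: callers hold a direct reference and mutate fields in place.
interface BeaconStateStruct {
  slot: number;
}

function advanceSlotStruct(state: BeaconStateStruct): void {
  state.slot += 1; // pointer-style update
}

// Tree-view-style access: no stable pointer into the underlying tree, so reads and
// writes go through explicit getter/setter calls that the view routes to tree nodes.
interface BeaconStateTreeView {
  getSlot(): number;
  setSlot(slot: number): void;
}

function advanceSlotView(state: BeaconStateTreeView): void {
  state.setSlot(state.getSlot() + 1); // explicit setter call
}
```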
Parallel Execution & Timeline: The team addressed whether the bindings work is strictly blocked by the unfinished Tree View implementation (specifically the synthesis of Kai and Tuyen's differing PRs).
- Non-Blocking Workflow: It was decided that work on bindings does not need to halt until the Tree View is perfectly merged. Developers can create the necessary interfaces and data structures now and start writing the binding logic against them.
- Integration Goal: While the final "green checkmark" (passing CI/tests) will require the Tree View logic to be correct and merged, the team is targeting the end of January to have the Tree View integrated into `beacon-state`. This will serve as the stable foundation for the bindings to be fully merged.
Memory Safety Considerations: Matthew raised concerns about memory leaks, specifically regarding handle scopes in the new bindings.
- Assessment: Cayman noted that because the current bindings are very thin, leaks are less likely to stem from the library itself and more likely to arise from how they are used in the implementation logic.
- Plan: The team agreed to cross the bridge of strict memory leak profiling (e.g., `escapable_handle_scope`) later, as the immediate priority is stabilizing the interface design.
- The team is ready to switch from Yarn to PNPM.
- Nazar resolved a CI caching issue that was holding this back.
- The switch will require everyone to update their local workflows (e.g., `pnpm install` instead of `yarn`).
- Infrastructure will need to merge their ready PR with new playbook commands for proper deployments.
- Nazar has prototype code for changes in the fork choice rule.
- He expects to have a PR open early next week.
Phil:
- Infrastructure playbooks deployed with new hostnames.
- Accidentally slashed 20 Hoodie validators (validates slashing works).
- Feature groups 1-2 operational; 3-4 recovering.
- Targeting v1.39 RC by January 16th.
- Planning Zig roadmapping call for January 15th to discuss bindings/integration design.
Robby:
- Scheduling 45-minute one-on-one sessions with each team member this week for onboarding.
Cayman: ZAPI Bindings:
- Fully exposed ZAPI library (100% of NAPI surface now exposed vs. prior 80%).
- Lowest-level interface complete (thin NAPI wrapper); higher-level "magical" bindings need additional work.
- Examples provided for async, thread-safe, class wrapping, normal functions.
- Memory leaks likely in usage patterns rather than bindings; will monitor during integration.
- Current higher-level wrapper approach may be wrong (hint system for type returns problematic); considering redesign; discussing offline with Nazar.
Lodestar-Z Work:
- ERA file support added: reader, writer, download script for mainnet data integration testing.
- Real data testing strategy: use ERA files to create state, feed blocks into state transition, capture metrics.
- Metrics testing: integrate with Bing's metrics PR; use perf/flame graphs on real data.
- SSZ core cleanup PR open (refactor SSZ tree API tweaks); part 2 anticipated.
- Config module cleanup PR (polishing).
Tree-View Design Discussion:
- Two competing approaches from Tuyen and Kai for fixing tree-view child synchronization (updates not syncing across child/parent views).
- Need to synthesize both approaches; requires deep review.
- Timeline estimate: ~2 weeks to resolve design.
- End-of-January goal: integrate tree view into state transition (create data structure interface).
- Beacon state union typing not hard-blocked by tree-view completion; can start interface definition independently.
Tuyen: Lodestar-Z State Transition:
- Changed integration approach: now creating interface + implementation simultaneously with local branch for incremental blocking identification.
- Refactoring completed:
  - Simplified `getBlockSignature`
  - Moved API implementation to state transition per fork
  - Stopped supporting bundling pre-Electra attestations (thinner test code); not supporting old forks
- Blockers identified:
- Remove shuffling cache from state transition (taken care of by Matt and Rahul)
- Stop calling specific API of beacon state allforks; only time we mutate is during state transition
- Will continue with the integration and try to target PR readiness before vacation
- Cannot provide exact timeline due to large codebase; proceeding incrementally.
- Additional refactor needed for pointer updates → explicit setter calls (tree-view incompatible).
- Tree-view Design: Local branch approach for integrating tree-view; can identify blockers during implementation.
Matt:
Cell-level DAG work in progress; acknowledges upcoming challenges.
- No real timeline on when it needs to be ready, but should be prioritized
QUIC debugging:
- Core dump capture planned this week (Node.js v24).
- If NAPI-RS bindings root cause, will abandon approach (long blocker).
- Next standup should have core dump for analysis.
Planning & Timeline:
- Updating waterfall/mermaid chart for January-February-March-April timeline.
- Wants confirmation by end-January tree-view integration realistic; if February, will adjust.
- Clarified: tree-view design review & merge not blocking state transition interface definition or binding work (can parallel-work with async review delays).
Kai: Tree-view API Implementation:
- Multiple PRs already merged for tree-view API implementations; can optimize/refactor.
- Main remaining work: tree-view data structure design (resolving Tuyen vs. Kai approaches).
- Post-API completion: integrate tree-view into beacon state.
- Goal end-of-January: integrate beacon state tree-view.
- Parallel work possible: implement current tree APIs while design reviews happen.
Bing:
Metrics PR:
- Ready for review; small usability issue resolved by forking library subset locally.
- Next phase: test metrics using ERA file support (from Cayman).
Epoch Processing:
- Ported final epoch process (rotating proposals, caching) to Zig.
- Various PR reviews completed.
Jeff:
- Working on blinded beacon block header tests.
- Currently debugging issues; expects resolution this week.
- Available for additional help. Will discuss internally with Cayman and Matthew to see what Jeff can tackle.
Rahul:
- Lodestar PR: deleting shuffling caching from epoch.
- Lodestar-Z PR: removing mutation transition code.
Vedant:
- Code cleanup focus: comments, logging fixes.
- E2E testing exploration: basic tests written and working.
- Backfill thread documentation: scenarios and edge cases outlined.
- Next steps: 7-10 days additional testing; call with Matt scheduled after.
Nazar:
PNPM Migration:
- Technically ready now (can switch immediately).
- Minor blocker: GitHub CI runner cache issue with node_modules (affecting CI time slightly).
- Debugging completed; will finalize by today.
- If only CI issue, proceeding anyway.
Fast Confirmation Rule:
- Prototype code complete for fork choice changes.
- PR ready by early next week.
Zappy/Zig Design:
- Discussing tree-view design approach with Cayman offline.
Yarn to PNPM Commands/Migration Notes:
- Commands are identical (`yarn build` → `pnpm build`; `yarn install` → `pnpm install`).
- Only difference: `pnpm` requires an explicit `pnpm install` (Yarn auto-installed).
- Playbook PR ready for infrastructure updates.
- Corepack version pinning works; pnpm warns if package.json version older but still compatible.
- Robby: Schedule 45-min 1:1s with all team members this week (EST availability).
- Phil: Finalize Zig roadmapping call for January 15th.
- Matthew: Capture QUIC core dump this week; schedule planning with Cayman on timeline.
- Cayman & Nazar: Discuss ZAPI higher-level interface design offline.
- Kai, Tuyen, Cayman: Resolve tree-view design over next 2 weeks; target Jan 31 for beacon state integration.
- Tuyen: Continue state transition interface definition & refactoring (parallel work possible).
- Bing: Prepare metrics testing with ERA file data.
- Nazar: Finalize PNPM migration (announce in chat if proceeding this week).
- Matt & Cayman: Prepare tree-view demo for January 15th call.
- Vedant: Complete 7-10 days testing; schedule call with Matt after.
- All: Expect PNPM command changes; watch chat for announcement.