eth: add partial statefulness mode via EIP-7928 #33764

Draft

CPerezz wants to merge 27 commits into ethereum:bal-devnet-2 from CPerezz:partial-state

Conversation

@CPerezz
Contributor

@CPerezz CPerezz commented Feb 5, 2026

Summary

This PR adds partial statefulness to geth via EIP-7928 (Block Access Lists). A partial state node stores all accounts but only syncs and tracks storage & bytecode for a configured set of contracts, reducing disk usage from ~640 GiB to ~59 GiB when no contracts are tracked.

This is the minimal set of client infra required to:

  • Provide eth_getProof support for all accounts and for a chosen set of contracts (usually "hot state"). This makes it much easier to integrate, for example, RPC markets served by validator nodes.
  • Serve as the basis for VOPS nodes.

Note that this kind of node (until zkEVMs are a reality) relies on two things to make sure the state root is correct:

  1. Signing-committee verification (same as light clients).
  2. Recomputation of the account trie root (i.e. the top part of the state tree), sourcing data from BALs and checking that the recomputed state root matches the one announced in the block.

Key changes

  • Partial sync mode (--partial-state): snap syncer skips storage and bytecode for untracked contracts using skip markers, while syncing all accounts normally
  • Block Access Lists: engine API (NewPayloadV5) accepts BAL data from the CL, enabling partial nodes to apply state updates for tracked contracts without full execution
  • Chain retention (--partial-state.chain-retention): only retains the last N blocks (default 1024) of bodies/receipts. During sync, older blocks are never downloaded. After sync, the freezer enforces a rolling window.
  • RPC awareness: eth_getStorageAt, eth_getCode, eth_call, and eth_estimateGas return clear errors for untracked contracts instead of silent zero values
  • BAL processing: reorg handling with configurable BAL retention, storage root tracking for tracked contracts

Commits

Commit Description
Phase 1 Foundation: config, CLI flags, contract filter, partial state manager
Phase 2 Snap sync: skip markers for untracked storage/bytecode, hash-based filtering
Phase 3 BAL processing: apply storage updates from Block Access Lists, reorg handling
Phase 4 RPC layer: partial state awareness in eth_call, eth_estimateGas, eth_getStorageAt
Latest Chain retention, engine API BAL support, beacon backfill fix

@CPerezz
Contributor Author

CPerezz commented Feb 5, 2026

Still under testing and polishing.

Syncing output example:

INFO [02-05|12:36:59.877] Forkchoice requested sync to new head    number=24,390,376 hash=b77165..58c4cf finalized=24,390,301
INFO [02-05|12:37:02.362] Syncing: chain download in progress      synced=100.00% chain=12.31GiB headers=24,390,288@11.93GiB bodies=24,390,288@211.48MiB receipts=24,390,288@181.60MiB eta=39.745s
INFO [02-05|12:37:02.599] Syncing: partial state download in progress synced=45.63%  state=33.27GiB  accounts=162,584,180@33.27GiB slots=0@0.00B slotsSkipped=11,503,971 codes=0@0.00B codesSkipped=30,822,889 eta=32m37.090s
INFO [02-05|12:37:10.473] Syncing: chain download in progress      synced=100.00% chain=12.31GiB headers=24,390,288@11.93GiB bodies=24,390,288@211.48MiB receipts=24,390,288@181.60MiB eta=40.410s
INFO [02-05|12:37:10.608] Syncing: partial state download in progress synced=45.97%  state=33.52GiB  accounts=163,825,342@33.52GiB slots=0@0.00B slotsSkipped=11,591,828 codes=0@0.00B codesSkipped=31,058,593 eta=31m10.806s

@CPerezz
Contributor Author

CPerezz commented Feb 9, 2026

Sync seems to be working now. There are some peering issues, but at this point we reach healing; to close the final gap with HEAD, we actually need to rebase on top of bal-devnet-2. HERE WE GO!

Next steps:

  • Integrate a node into BAL-devnet-2 and further test there that this works.

@CPerezz CPerezz changed the base branch from master to bal-devnet-2 February 9, 2026 14:29
@CPerezz
Contributor Author

CPerezz commented Feb 9, 2026

  • Our branch added a separate accessList parameter to newPayload() in eth/catalyst/api.go, so the temporary structs we had created were dropped.
  • Our branch stored the BAL as hexutil.Bytes; we now take upstream's typed field.
  • Adapted all our code to upstream's BlockAccessList.
  • Created a testBALBuilder type in our test file that provides the same convenience API.

I made a backup branch to preserve the original changes in case I botched the rebase.

Implements EIP-7928 BAL-based partial statefulness infrastructure:

- Add PartialStateConfig to eth/ethconfig with CLI flags
- Add ContractFilter interface in core/state/partial/
- Add BAL history database accessors in core/rawdb/
- Add PartialState and BALHistory managers

This enables nodes to track only configured contracts' storage
while maintaining full account trie integrity.
Extends ContractFilter interface with hash-based methods (ShouldSyncStorageByHash,
ShouldSyncCodeByHash) for efficient filtering during snap sync when only account
hashes are available.

Adds NewPartialStateSync() function that accepts filter callbacks to control which
accounts have their storage/code synced during healing. This prevents the healing
phase from re-syncing storage for accounts that were intentionally skipped during
initial sync.

Part of partial statefulness Phase 2.
Passes the partial statefulness filter from Ethereum backend through
the handler config and into the downloader. The filter is then passed
to the snap syncer to enable selective storage/code syncing.

Updates downloader tests to accommodate the new filter parameter.

Part of partial statefulness Phase 2.
Adds partial sync mode to the snap syncer that filters which contracts
have their storage and bytecode synced based on the configured filter.

Key changes:
- Syncer accepts optional ContractFilter for partial mode
- Skip markers (SnapSkipped prefix) track intentionally skipped accounts
- processAccountResponse checks filter before requesting storage/code
- Healing phase uses NewPartialStateSync to respect skip markers
- Helper functions for skip marker persistence (mark/check/delete)

When partial sync is active, only tracked contracts have their storage
synced, reducing sync size from ~1TB+ to ~30-40GB while maintaining
a complete account trie for balance queries.

Part of partial statefulness Phase 2.
Comprehensive integration tests using mock peers that verify partial
sync behavior end-to-end:

- TestPartialSyncIntegration: Full sync with 20 accounts, 2 tracked
- TestPartialSyncAllAccounts: Verifies complete account trie synced
- TestPartialSyncSkipMarkers: Verifies skip markers written correctly
- TestPartialSyncNoStorageForUntracked: No storage for skipped accounts
- TestPartialSyncRequestCount: Diagnostic showing request filtering
- TestPartialSyncVsFullSync: Compares full vs partial, shows 83% reduction

Level 2 validation was also performed using a two-node local devnet
(full node + partial node) to verify database size reduction and
correct RPC responses. The mock peer tests provide equivalent coverage
with faster execution and CI compatibility.

Part of partial statefulness Phase 2.
Implement Block Access List (BAL) processing for partial statefulness
per EIP-7928. This enables nodes to update state without re-executing
transactions by applying BAL diffs directly to the trie.

Key additions:
- ApplyBALAndComputeRoot: Core BAL processing with correct commit ordering
  (storage trie → account Root → account trie)
- ProcessBlockWithBAL: Blockchain-level entry point for BAL processing
- HandlePartialReorg: Chain reorganization support using BAL history
- Comprehensive test coverage (31 tests):
  * Unit tests for edge cases (storage deletion, EIP-161, buildStateSet)
  * Blockchain integration tests (ProcessBlockWithBAL, HandlePartialReorg)
  * Both HashScheme and PathScheme coverage

Devnet Testing (2-node setup):
- Full node: dev mode with --dev.period 2, creates blocks
- Partial node: --partial-state mode, syncs via P2P
- Test results: Block sync verified, balance queries match between nodes,
  state roots consistent. Database size reduction observed for partial node.
Add partial state mode support to the RPC API. In partial state mode:
- Account queries (balance, nonce, account proofs) work for ALL accounts
- Storage/code queries only work for tracked contracts
- Clear error codes help clients understand limitations

Changes:
- New error types: StorageNotTrackedError (-32001), CodeNotTrackedError (-32002)
- Backend interface: PartialStateEnabled(), IsContractTracked()
- Modified RPCs: GetStorageAt, GetCode, GetProof check tracked status
- 7 new tests verify correct behavior for tracked/untracked contracts
Add chain retention for partial state mode: only the most recent N blocks
(default 1024) retain bodies and receipts. During sync, older blocks are
skipped entirely. After sync, the freezer enforces a rolling window.

Add engine API support for Block Access Lists (EIP-7928): NewPayloadV5
accepts BAL data alongside execution payloads, enabling partial state
nodes to receive per-block storage access information from the CL.

Fix beacon backfilling failure caused by dynamic chain cutoff not
clearing the cutoff hash (which remained at the genesis hash).

Add partial state awareness to eth_call/eth_estimateGas to return clear
errors when accessing untracked contract storage.
LoadPartialStateContracts() was only called from Validate() which was
never invoked, causing the contracts file to never be loaded. Call it
directly during Ethereum node initialization when partial state is
enabled.
After an unclean shutdown, Disable() is called twice which is expected
behavior. The second call was logging at ERROR level, which was
misleading. Downgrade to INFO since this is a normal occurrence.
Partial state nodes don't need snapshots since account data is read
directly from the trie (which is small enough for fast lookups) and
BAL-based block processing never uses snapshots.

- Set SnapshotCache to 0 when partial state is enabled (flags.go)
- Allow snap sync without snapshots for partial state mode (handler.go)
- Add nil-check for Snapshots() in snap request handlers to prevent
  panics when serving HashScheme peers (snap/handler.go)
Refactor partial state filter from DB skip markers to direct filter
checks via shouldSyncStorage()/shouldSyncCode(), avoiding stale marker
issues across sync cycles.

Additional fixes:
- Skip WriteAccountSnapshot/WriteStorageSnapshot in partial mode
  (forwardAccountTask, processStorageResponse, onHealState)
- Guard against negative ETA in reportSyncProgress when sync restarts
  with persisted progress counters
- Add break after forwardAccountTask in cleanStorageTasks to prevent
  nil pointer when task.res is cleared
- Add diagnostic log in assignAccountTasks when no idle peers available
…uards

Freeze the pivot header for partial state nodes to ensure stable state
sync progress:
- Suppress pivot movement in fetchHeaders() (beaconsync.go)
- Suppress pivot movement in processSnapSyncContent() (downloader.go)
- Reuse existing pivot across sync cycle restarts in syncToHead()

After initial snap sync completes, bridge the gap from pivot to HEAD:
- Import post-pivot blocks with receipts (no execution needed since
  untracked contracts have empty storage tries)
- Run second state sync to download HEAD state root
- Add AdvancePartialHead to update currentBlock without re-execution

Guard the backfiller for partial state mode:
- suspend() skips Cancel() during active snap sync to prevent
  constant cancel/restart cycles from beacon head updates
- resume() skips new sync cycles after partial sync completes
The statelessPeers map permanently blacklists peers that return empty
responses for the entire Sync() cycle. In partial state mode, the faster
account advancement (due to skipping storage/code for non-tracked
contracts) creates bursty request patterns that can trigger transient
empty responses. Combined with the permanent blacklist, this causes a
cascade where all peers get banned and sync stalls permanently.

Replace the permanent map[string]struct{} with map[string]time.Time to
track when each peer was marked. For partial state mode, peers are given
a 30-second cooldown instead of permanent banishment. After the cooldown
expires, the peer is eligible for task assignment again. Full sync mode
behavior is unchanged (permanent blacklist preserved).
All five revert*Request functions (account, bytecode, storage,
trienode heal, bytecode heal) remove the request from the tracked
set but never restore the peer to its corresponding idle pool.
When a request times out and no response arrives, the peer is
permanently lost from the idle pool, preventing new work from
being assigned to it.

In vanilla geth this bug is masked by pivot movement (which
resets idle pools via new Sync() cycles) and peer churn. In
partial state mode with a frozen pivot, the same Sync() cycle
runs for hours, causing all peers to eventually leak out of the
idle pools and stalling sync.

Fix: after deleting from the request map, restore the peer to
its idle pool if it is still registered (guards against the
peer-drop path where Unregister already removed the peer).
This mirrors the pattern used in all five On* response handlers.
CPerezz and others added 3 commits February 14, 2026 11:10
Geth has two independent snapshot tiers, each with its own disable
mechanism:

1. In-memory snapshot cache: controlled by SnapshotLimit (derived from
   ethconfig.SnapshotCache). Setting SnapshotCache=0 disables it.

2. On-disk snapshot generator: a background goroutine in pathdb that
   iterates the entire state trie to build flat key-value snapshots.
   Controlled by pathdb.Config.SnapshotNoBuild.

The partial state configuration (cmd/utils/flags.go) already set
SnapshotCache=0 to disable the in-memory cache. However, SnapshotNoBuild
was never set, so pathdb.Enable() — called after snap sync completes —
still launched the background generator goroutine.

This generator immediately hits missing storage tries for untracked
contracts (whose storage was intentionally skipped during partial sync),
logs "Trie missing, snapshotting paused", and blocks forever on its
abort channel — a permanent goroutine leak with no recovery path.

Additionally, BlockChainConfig.SnapshotNoBuild was never propagated to
pathdb.Config.SnapshotNoBuild in the triedbConfig() conversion. The
field only reached the hash-scheme snapshot module (core/blockchain.go
setupSnapshot), which is already skipped for path-scheme databases. This
plumbing gap meant pathdb.Config.SnapshotNoBuild was never set in
production code — only in tests.

Fix both issues:
- Set SnapshotNoBuild=true when partial state is enabled
- Propagate BlockChainConfig.SnapshotNoBuild into pathdb.Config
Fix the post-sync deadlock where blocks validated via BAL in newPayload
were never written to the database, causing ForkchoiceUpdated to fail
finding them and triggering infinite sync cycles.

Changes:
- Export WriteBlockWithoutState and call it after ProcessBlockWithBAL
  in newPayload, so FCU can find blocks via GetBlockByHash
- Guard SetCanonical against recoverAncestors for partial state nodes
  (they can't re-execute blocks, only apply BAL diffs)
- Auto-disable log indexing when partial state is enabled (no receipts)
- Fix BAL type field accesses to match upstream bal-devnet-2 types
  (StorageChanges, CodeChanges, BalanceChanges, Validate signature)
- Update newPayload signature (BAL now comes from ExecutableData params)
- Add partial sync scripts and documentation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CopyHeader copies all pointer-typed header fields (WithdrawalsHash,
RequestsHash, SlotNumber, etc.) but was missing the deep copy for
BlockAccessListHash added by EIP-7928. This caused the BAL hash
to be silently shared between the original and the copy, leading
to potential data races and incorrect nil-checks on copied headers.
Fix several interacting issues that prevented partial state nodes from
syncing and following the chain on bal-devnet-2:

1. Stale pivot deadlock: Replace unconditional pivot suppression with
   rate-limited advances (2-minute cooldown). This prevents the restart
   loop bug while allowing recovery when the initial pivot is too stale
   for peers to serve.

2. Storage root resolution: Add snap-based resolver that queries peers
   for untracked contracts' storage roots during BAL processing. This
   lets the computed state root converge toward the header root.

3. SetCanonical for partial state: When the computed root differs from
   the header root (expected when untracked contracts have unresolved
   storage roots), check HasState(partialState.Root()) instead of only
   HasState(block.Root()). Guard against zero root during snap sync.

4. Canonical hash backfill: AdvancePartialHead now writes canonical
   hashes for all blocks between the pivot and snap head, fixing the
   "final block not in canonical chain" error caused by
   InsertReceiptChain skipping blocks whose bodies already exist.

5. Gap block processing: After snap sync completes, process accumulated
   blocks between the sync head and chain tip using their persisted BALs
   before entering steady-state chain following.

6. Computed root chaining: Use partialState.Root() (actual computed root)
   as parentRoot for subsequent blocks, not the header root. This ensures
   correct trie chaining when computed != header root.

Tested end-to-end on bal-devnet-2: snap sync completes, gap blocks
processed, canonical head advances at chain tip (~1 block/12s).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@CPerezz
Contributor Author

CPerezz commented Feb 17, 2026

After 944a471 I can confirm that we are syncing to bal-devnet-2 perfectly.

Note that I had to add a handler to request missing contract storage roots (they don't come with the BAL) via snap-sync requests.

@rjl493456442 @gballet any interest in merging this or taking a look?

CPerezz and others added 8 commits February 17, 2026 10:57
Move partial state CLI flags into their own "PARTIAL STATE" help category
(matching BEACON CHAIN, DEVELOPER CHAIN patterns), improve Usage strings
with examples and constraint descriptions, expand PartialStateConfig doc
comments to explain EIP-7928 implications, and raise BAL retention minimum
from 64 to 256 (required by BLOCKHASH opcode).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Apply review fixes: BAL iterator start (Fix 2), fatal root mismatch when
all storage resolved (Fix 3), WriteBlockWithoutState error handling (Fix 4),
contract filter construction order (Fix 5), canonical hash backfill (Fix 6),
underflow guard in gap processing (Fix 8), O(n²) prepend fix (Fix 9),
ReadBALHistory corruption detection (Fix 11), incomplete resolution error
(Fix 13), RLP encode panic (Fix 14), gap processing log level (Fix 16),
TriggerPartialResync message (Fix 18), and comment accuracy fixes.

Remove the stateRoot field and sync.RWMutex from PartialState entirely.
Since partial state maintains the full account trie, the computed root
always matches the header root (assuming storage root resolution succeeds).
ProcessBlockWithBAL now derives parent root from parent.Root() directly,
matching how full nodes derive state root from currentBlock headers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Live testing on bal-devnet-2 confirmed that computed roots DO diverge from
header roots. Block 75315 computed root 0xe909c7.. vs header root
0x9acbbe.. — untracked contracts' storage roots in the local trie are from
snap sync time and differ from the actual current roots, even when the
storage root resolver successfully queries peers.

This means subsequent blocks must chain off the computed root (via
partialState.Root()), not the header root (via parent.Root()). Restore
the stateRoot field using atomic.Pointer[common.Hash] instead of the
previous sync.RWMutex for lock-free concurrent access.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
After the second snap sync completes, AdvancePartialHead moves the head
markers forward but never initializes partialState.Root(). This caused
ProcessBlockWithBAL to fall back to the parent's header root, which
doesn't match the computed trie root from BAL processing — resulting in
a state root mismatch on the first block after sync.

Fix: call SetRoot(root) and SetLastProcessedBlock() in AdvancePartialHead
so subsequent BAL processing chains from the correct state root.

Also add diagnostic logging to ProcessBlockWithBAL for easier debugging.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The second state sync (pivot→HEAD) determines its target using
CurrentSnapBlock(), which may equal CurrentBlock() if no afterP blocks
were processed before the queue drained. This is a timing-dependent
race: with rate-limited pivot advances, the pivot ends up close to
the CL head, so the final batch may contain zero afterP blocks,
causing CurrentSnapBlock == CurrentBlock. The check
`snapHead.Hash() != currentHead.Hash()` then fails and the second
sync is skipped entirely. Without the second sync, disableSnap()
is never called, ConfigSyncMode() stays SnapSync, and ALL subsequent
newPayload calls are delayed forever.

Fix: use the skeleton head (beacon chain tip) as the second sync
target instead of CurrentSnapBlock(). The skeleton head is always
available and correctly reflects the CL's latest finalized target,
independent of queue draining timing.

Also removes the fragile "snap head too old" and "snap head too far
behind" guards which could abort the second sync prematurely.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Trim leading zeros from storage values before passing to UpdateStorage,
matching the upstream BALStateTransition behavior. UpdateStorage
RLP-encodes the value internally, so passing untrimmed 32-byte values
(e.g. [0,0,...,5]) produces different trie nodes than trimmed values
([5]), causing systematic state root mismatches on every BAL-processed
block.

BuildStateSet already correctly trimmed values for the pathdb layer;
this fix aligns the trie update path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The stateless block check in forkchoiceUpdated was calling BeaconSync()
on every FCU (~12 seconds) during active snap sync, restarting the
entire sync cycle each time. This prevented state download from ever
completing.

Guard the check with ConfigSyncMode: during active snap sync, the
downloader is already working, so just return STATUS_SYNCING without
restarting. Only trigger BeaconSync for stateless blocks after snap
sync has completed (FullSync mode).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Match upstream BALStateTransition behavior: only call UpdateAccount for
accounts that were actually modified (balance, nonce, code, or storage
changes). Previously, all accounts in the BAL (including read-only ones)
were written back to the trie, which could cause root mismatches if the
re-encoded RLP differed from the original encoding.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
rjl493456442 pushed a commit that referenced this pull request Feb 24, 2026

All five `revert*Request` functions (account, bytecode, storage,
trienode heal, bytecode heal) remove the request from the tracked set
but never restore the peer to its corresponding idle pool. When a
request times out and no response arrives, the peer is permanently lost
from the idle pool, preventing new work from being assigned to it.

In normal operation mode (snap-sync full state) this bug is masked by
pivot movement (which resets idle pools via new Sync() cycles every ~15
minutes) and peer churn (reconnections re-add peers via Register()).
However, in scenarios like the one I am running with my
[partial-stateful node](#33764), with long-running sync cycles and few
peers, all peers can eventually leak out of the idle pools, stalling
sync entirely.

Fix: after deleting from the request map, restore the peer to its idle
pool if it is still registered (guards against the peer-drop path where
Unregister already removed the peer). This mirrors the pattern used in
all five On* response handlers.


This only seems to manifest in peer-starved scenarios, such as the one
I find myself in when testing snap sync for the partial-stateful node.
Still, I thought it was at least worth raising this point. Unsure
whether it needs discussion or not.
flywukong pushed a commit to flywukong/bsc that referenced this pull request Mar 10, 2026
flywukong pushed a commit to flywukong/bsc that referenced this pull request Mar 10, 2026
allformless pushed a commit to bnb-chain/bsc that referenced this pull request Mar 11, 2026