server: fix peer add/done race between peerHandler and syncManager #2480

Open

Aharonee wants to merge 3 commits into btcsuite:master from
Aharonee:bugfix/peer_race_condition

Conversation

@Aharonee (Contributor) commented Feb 12, 2026

Summary

Fix a race condition where the sync manager can get permanently stuck with a dead sync peer after rapid peer connect/disconnect cycles.

The Race Condition

peerDoneHandler ran as a separate goroutine per peer and independently notified two event loops about a disconnect:

  1. It sent to the donePeers channel (consumed by peerHandler).
  2. It called syncManager.DonePeer() directly (sends to sm.msgChan, consumed by blockHandler).

Meanwhile, peerHandler only called syncManager.NewPeer() when it processed the newPeers channel. Because these two paths were unsynchronized, blockHandler could observe DonePeer before NewPeer for the same peer.
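
In sketch form, the pre-fix flow looked roughly like this (a simplified paraphrase using the names described above, not the literal server.go code):

func (s *server) peerDoneHandler(sp *serverPeer) {
	sp.WaitForDisconnect()

	// Path 1: tell peerHandler the peer is gone.
	s.donePeers <- sp

	// Path 2: tell the sync manager directly. This lands on
	// sm.msgChan without passing through peerHandler, so
	// blockHandler can consume it before the NewPeer that
	// peerHandler has not yet sent for the same peer.
	if sp.VerAckReceived() {
		s.syncManager.DonePeer(sp.Peer)
	}
}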

A second vector existed even if DonePeer were moved into peerHandler: two separate buffered channels (newPeers/donePeers) let Go's select pseudorandomly pick the done case before the add case when both were ready simultaneously.
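
A self-contained illustration of that select behaviour, using plain channels rather than the real server types:

package main

import "fmt"

func main() {
	add := make(chan string, 1)
	done := make(chan string, 1)

	// Enqueue "add" strictly before "done", but on separate channels.
	add <- "add"
	done <- "done"

	// With both channels ready, select chooses pseudorandomly, so
	// "done" is picked first on roughly half of the runs.
	select {
	case ev := <-add:
		fmt.Println("processed", ev, "first")
	case ev := <-done:
		fmt.Println("processed", ev, "first")
	}
}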

A third vector existed due to negotiateTimeout: if the 30s timeout in peer.Peer.start() fired between verAckReceived = true and the OnVerAck callback completing, peerDoneHandler could observe VerAckReceived() == true and send peerDone before the OnVerAck callback sent peerAdd.

Consequences: The sync manager receives DonePeer for an unknown peer (logged as a warning, no cleanup). Then NewPeer arrives for the already-dead peer -- the sync manager registers it as a candidate and potentially selects it as syncPeer. Since it is already disconnected, no subsequent DonePeer arrives to clear it. The node is stuck: it believes it has a sync peer, ignores new candidates, and never makes chain progress.

What Triggers It

Any scenario that produces rapid connect/disconnect cycles:

  • Attacker traffic (connections that complete the version/verack handshake then immediately drop)
  • Flaky network conditions with many short-lived peers
  • High peer churn under load (e.g., maxpeers limit causing immediate disconnects)

The Fix

Three structural changes eliminate all race vectors:

  1. Merge newPeers and donePeers into a single peerLifecycle channel. A single FIFO channel eliminates the select-ambiguity vector where Go's select could pick done before add.

  2. Move syncManager.DonePeer() and orphan eviction into handleDonePeerMsg. All sync manager notifications now flow through the peerHandler goroutine.

  3. Make peerLifecycleHandler (renamed from peerDoneHandler) the sole sender of both peerAdd and peerDone for each peer. OnVerAck no longer sends to the channel directly; it closes a signal channel (verAckCh). peerLifecycleHandler selects on verAckCh vs peer.Done() (new method exposing the peer's quit channel), sends peerAdd if verack was received, then waits for disconnect and sends peerDone. Because both sends originate from the same goroutine, ordering is guaranteed by construction -- no cross-goroutine synchronization or bookkeeping needed.
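
The ordering argument in point 3 can be modelled with a small, self-contained sketch (stand-in types only, not the actual patch):

package main

import "fmt"

type lifecycleAction int

const (
	peerAdd lifecycleAction = iota
	peerDone
)

type lifecycleEvent struct {
	action lifecycleAction
	peerID int
}

// fakePeer stands in for *serverPeer: verAckCh is closed when the
// handshake completes, done is closed when the peer disconnects
// (the role of peer.Done()).
type fakePeer struct {
	id       int
	verAckCh chan struct{}
	done     chan struct{}
}

// peerLifecycleHandler is the only sender of both events for its peer,
// so peerAdd is enqueued before peerDone by construction.
func peerLifecycleHandler(p *fakePeer, lifecycle chan<- lifecycleEvent) {
	select {
	case <-p.verAckCh:
		lifecycle <- lifecycleEvent{action: peerAdd, peerID: p.id}
	case <-p.done:
		// Disconnected before verack: no peerAdd is needed.
		return
	}

	<-p.done
	lifecycle <- lifecycleEvent{action: peerDone, peerID: p.id}
}

func main() {
	lifecycle := make(chan lifecycleEvent, 2)
	p := &fakePeer{
		id:       1,
		verAckCh: make(chan struct{}),
		done:     make(chan struct{}),
	}
	go peerLifecycleHandler(p, lifecycle)

	close(p.verAckCh)        // handshake completes
	fmt.Println(<-lifecycle) // peerAdd event arrives first

	close(p.done)            // the peer disconnects
	fmt.Println(<-lifecycle) // peerDone always follows
}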

Reproducing on master (without the fix)

The included integration test can demonstrate the corruption on an unpatched master branch:

git checkout master
git checkout bugfix/peer_race_condition -- integration/sync_race_test.go
go test -tags=rpctest -v -run TestSyncManagerRaceCorruption ./integration/ -count=10 -timeout 900s

Test Plan

  • go build ./... compiles cleanly
  • go test -tags=rpctest -v -run TestSyncManagerRaceCorruption ./integration/ -count=10 -timeout 900s passes
  • TestPreVerackDisconnect passes (disconnect before verack)
  • Existing integration tests unaffected

@Aharonee Aharonee force-pushed the bugfix/peer_race_condition branch from ee422e0 to 50b62a3 on February 12, 2026 10:18
@coveralls commented Feb 12, 2026

Pull Request Test Coverage Report for Build 22067556256

Details

  • 0 of 41 (0.0%) changed or added relevant lines in 2 files are covered.
  • 74 unchanged lines in 2 files lost coverage.
  • Overall coverage decreased (-0.1%) to 54.846%

Changes Missing Coverage (Covered Lines / Changed+Added Lines / %):
  peer/peer.go   0 / 3  / 0.0%
  server.go      0 / 38 / 0.0%

Files with Coverage Reduction (New Missed Lines / %):
  database/ffldb/blockio.go     4  / 88.81%
  rpcclient/infrastructure.go   70 / 42.62%

Totals:
  Change from base Build 20942501138: -0.1%
  Covered Lines: 31142
  Relevant Lines: 56781

💛 - Coveralls

@naorye commented Feb 12, 2026

I'm experiencing the same issue. Wow, really need this.

@yyforyongyu yyforyongyu self-requested a review February 13, 2026 02:48
server.go Outdated
)

// peerLifecycleEvent represents a peer connection or disconnection event.
// Using a single channel for both event types guarantees FIFO ordering:

Contributor

Do we have the "first-in" part? Can OnVerAck be delayed and send its part after the "done" event is sent? E.g. if OnVerAck runs longer than negotiateTimeout.

Contributor Author

Good catch, there seems to still be a potential race in that scenario.
I've pushed a commit which changes the peerDoneHandler into peerLifecycleHandler, and delegates responsibility for both add peer and done peer events to it.
That way a single goroutine will manage synchronization and correct ordering of the peer lifecycle events.

Does that make sense?

Contributor

This change looks good to me!

peerDoneHandler ran as a separate goroutine per peer and independently
notified both peerHandler (via donePeers channel) and the sync manager
(via syncManager.DonePeer) about a peer disconnect. Because these two
sends were unsynchronized, the sync manager could observe DonePeer
before NewPeer when a peer connected and disconnected quickly. This
caused the sync manager to log "unknown peer", then later register the
already-dead peer as a sync candidate that was never cleaned up,
potentially leaving it stuck with a dead sync peer.

Two structural changes eliminate the race:

1. Merge the newPeers and donePeers channels into a single
   peerLifecycle channel. Since OnVerAck (add) always fires before
   WaitForDisconnect returns (done), a single FIFO channel guarantees
   peerHandler always processes add before done for a given peer,
   removing the select-ambiguity where Go could pick done first.

2. Move the syncManager.DonePeer call and orphan eviction from
   peerDoneHandler into handleDonePeerMsg, which runs inside
   peerHandler. All sync manager peer lifecycle notifications now
   originate from the single peerHandler goroutine and flow into
   sm.msgChan in guaranteed add-before-done order.

@Aharonee Aharonee force-pushed the bugfix/peer_race_condition branch from 50b62a3 to 091b790 on February 16, 2026 09:46

Address review feedback on the peer add/done race fix:

- Make peerLifecycleHandler (renamed from peerDoneHandler) the sole
  sender of both peerAdd and peerDone events for each peer. OnVerAck
  now closes a signal channel (verAckCh) instead of sending directly,
  and peerLifecycleHandler selects on verAckCh vs peer.Done() to
  decide whether to send peerAdd before peerDone. This guarantees
  ordering by construction: a single goroutine sends both events
  sequentially, eliminating the negotiateTimeout race window.

- Add Done() method to peer.Peer exposing the quit channel read-only,
  enabling select-based disconnect detection from server code.

- Remove the now-unused AddPeer method.

- Address style feedback: 80-char line limit, empty lines between
  switch cases, break long function calls, use require.GreaterOrEqualf
  instead of if+Fatalf, bump syncRaceConcurrency to 300 for
  backpressure testing, add TestPreVerackDisconnect for disconnect
  prior to verack.
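
For reference, the Done() accessor described above could look roughly like this (a sketch based on the commit's description of exposing the existing quit field read-only; not necessarily the exact patch):

// Done returns a channel that is closed once the peer has fully shut
// down, allowing callers to select on peer disconnect without polling.
func (p *Peer) Done() <-chan struct{} {
	return p.quit
}
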
@Aharonee Aharonee requested a review from starius February 16, 2026 15:04
Comment on lines +2315 to +2327
// peerAdd is always enqueued before peerDone.
func (s *server) peerLifecycleHandler(sp *serverPeer) {
	// Wait for the handshake to complete or the peer to
	// disconnect, whichever comes first.
	select {
	case <-sp.verAckCh:
		s.peerLifecycle <- peerLifecycleEvent{
			action: peerAdd, sp: sp,
		}

	case <-sp.Peer.Done():
		// Disconnected before verack; no peerAdd needed.
	}
Contributor

If both sp.verAckCh and sp.Peer.Done() have messages to receive, select chooses pseudorandomly among them. So peerAdd can be skipped even if VerAckReceived is true, and handleDonePeerMsg will call DonePeer for an unknown peer.

Does it make sense to prioritize receiving from sp.verAckCh or check VerAckReceived if sp.Peer.Done() fired?
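
For concreteness, one hypothetical shape the prioritization could take, with plain channels standing in for sp.verAckCh and sp.Peer.Done():

// shouldSendPeerAdd reports whether a peerAdd event should be sent,
// giving the verack signal priority when both channels are ready.
func shouldSendPeerAdd(verAck, done <-chan struct{}) bool {
	select {
	case <-verAck:
		return true
	case <-done:
		// The peer is gone, but the handshake may have completed
		// just before; re-check verAck without blocking.
		select {
		case <-verAck:
			return true
		default:
			return false
		}
	}
}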

Contributor Author

I think it should be fine to skip add peer if done peer event has already occurred.
After all, the peer has disconnected so we can avoid notifying the server of a new peer just to notify it right away after to remove it.

My main concern was done peer being processed before add peer, but done peer processing for an unknown peer that has already disconnected seems harmless.

Contributor

You are right. My proposal would only improve log message clarity (avoiding "unknown peer" being logged), not the correctness of the code itself. It is optional.


const (
peerAdd peerLifecycleAction = iota
peerAdd peerLifecycleAction = iota
Contributor

this formatting change should belong to the first commit

Contributor Author

I can squash both commits and force push if you prefer, but wouldn't it be more convenient for you to review the diff each time and only squash merge at the end?

Contributor

Sure, let's keep them separate for now.

server.go Outdated
Comment on lines 560 to 561
close(sp.verAckCh)
}
Contributor

The current code allows calling OnVerAck only once. Should we safeguard for the future using sync.Once?

Contributor Author

I think safeguarding this could potentially hide a bug; if OnVerAck is called twice, we would prefer a loud panic.
This is also consistent with the rest of the codebase: for example, the Peer.quit channel is not safeguarded and is closed by Peer.Disconnect().
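
For background, closing an already-closed channel panics in Go, as this minimal program shows:

package main

func main() {
	ch := make(chan struct{})
	close(ch)
	close(ch) // panic: close of closed channel
}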

Contributor

Hmm, maybe we can produce an error instead, if it is closed already?

select {
case <-sp.verAckCh:
	// Already closed: log an error instead of panicking.
	srvrLog.Errorf("OnVerAck fired more than once for peer %v", sp)

default:
	close(sp.verAckCh)
}

The error won't let it pass unnoticed, but at least it won't panic and crash. What do you think?

Contributor Author

Sure, makes sense

server.go Outdated
Comment on lines 164 to 165
// goroutine (peerLifecycleHandler), guaranteeing that peerAdd is
// always enqueued before peerDone.
Contributor

I propose to adjust the comment to reflect that peerAdd may be skipped.

server.go Outdated
	knownAddresses lru.Cache
	banScore       connmgr.DynamicBanScore
	quit           chan struct{}
	verAckCh       chan struct{} // closed when OnVerAck fires
Contributor

Formatting:

// Closed when OnVerAck fires.
verAckCh chan struct{}

server.go Outdated
)

// peerLifecycleEvent represents a peer connection or disconnection event.
// Using a single channel for both event types guarantees FIFO ordering:
Contributor

This change looks good to me!

Prioritize verAckCh in peerLifecycleHandler select to avoid
nondeterministic peerAdd skipping when both channels are ready.

Guard OnVerAck against double-close by checking the channel before
closing, logging an error instead of panicking.

Adjust peerLifecycleEvent comment to reflect that peerAdd may be
skipped when the peer disconnects before or concurrently with verack.

Fix verAckCh field comment formatting.
@Aharonee Aharonee requested a review from starius February 18, 2026 12:46