fix: data race in BenchmarkConnectionMainReadLoop #1040

Merged
anacrolix merged 3 commits into anacrolix:master from Giulio2002:fix-benchmark-race on Feb 24, 2026

Conversation

@Giulio2002 (Contributor)

Problem

BenchmarkConnectionMainReadLoop fails with a data race on all CI platforms (stable/ubuntu, stable/macos, oldstable/ubuntu):

Previous write at 0x00c0001b2958 by goroutine 13499:
  github.com/RoaringBitmap/roaring.(*roaringArray).insertNewKeyValueAt()
  github.com/RoaringBitmap/roaring.(*Bitmap).Add()
  github.com/anacrolix/torrent.BenchmarkConnectionMainReadLoop()
      peerconn_test.go:123

Goroutine 13541 (running) created at:
  time.goFunc()

Cause

t._pendingPieces.Add(0) at line 123 is called without holding the client lock, racing with timer goroutines spawned by the torrent client internals that also access the roaring bitmap.

Fix

Wrap _pendingPieces.Add(0) with cl.lock()/cl.unlock() to match the locking discipline used elsewhere in the benchmark (e.g. pendAllChunkSpecs is called inside cl.lock()).
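
A minimal sketch of the locking pattern the fix applies. The real types live in anacrolix/torrent; here a `map[int]bool` stands in for the roaring bitmap, and `client` is an illustrative stand-in for the torrent client, so the point is only that every access to the shared piece state goes through `cl.lock()`/`cl.unlock()`:

```go
package main

import (
	"fmt"
	"sync"
)

// client stands in for the torrent client: a mutex guarding shared
// piece state (a map here; a roaring bitmap in the real code).
type client struct {
	mu            sync.Mutex
	pendingPieces map[int]bool
}

func (cl *client) lock()   { cl.mu.Lock() }
func (cl *client) unlock() { cl.mu.Unlock() }

// addPending mirrors the fixed benchmark: take the client lock before
// touching the shared state, as pendAllChunkSpecs already does.
func (cl *client) addPending(piece int) {
	cl.lock()
	cl.pendingPieces[piece] = true
	cl.unlock()
}

func (cl *client) isPending(piece int) bool {
	cl.lock()
	defer cl.unlock()
	return cl.pendingPieces[piece]
}

func main() {
	cl := &client{pendingPieces: map[int]bool{}}
	var wg sync.WaitGroup
	// Simulate timer goroutines racing with the benchmark setup;
	// with the lock held on every path, `go test -race` stays quiet.
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(piece int) {
			defer wg.Done()
			cl.addPending(piece)
		}(i)
	}
	wg.Wait()
	fmt.Println(cl.isPending(0))
}
```

Without the lock in `addPending`, the race detector flags exactly the kind of unsynchronized write the CI failure shows.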

@Giulio2002 (Contributor, Author)

This should fix your CI. You're welcome.

@Giulio2002 (Contributor, Author)

Thank Sharov Bot for this

_pendingPieces.Add(0) was called without holding the client lock and
without updating the piece request order, causing two issues:
1. Data race with timer goroutines accessing the roaring bitmap
2. Panic in checkPendingPiecesMatchesRequestOrder due to _pendingPieces
   and requestOrder being out of sync

Replace the direct bitmap manipulation with updatePiecePriority(0, ...)
which properly updates both _pendingPieces and the piece request order
while holding the client lock.
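
The commit message above describes routing the change through `updatePiecePriority` so both structures stay consistent. A minimal sketch of that invariant, with illustrative names (`requestOrder` as a plain slice, a map in place of the bitmap) rather than the library's actual types:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// client is an illustrative stand-in: the key point is that
// pendingPieces and requestOrder are only ever mutated together,
// under one lock.
type client struct {
	mu            sync.Mutex
	pendingPieces map[int]bool
	requestOrder  []int
}

// updatePiecePriority updates both structures atomically, unlike a
// raw bitmap Add, which would leave requestOrder stale.
func (cl *client) updatePiecePriority(piece int) {
	cl.mu.Lock()
	defer cl.mu.Unlock()
	if cl.pendingPieces[piece] {
		return
	}
	cl.pendingPieces[piece] = true
	cl.requestOrder = append(cl.requestOrder, piece)
	sort.Ints(cl.requestOrder)
}

// checkPendingMatchesOrder mirrors the spirit of
// checkPendingPiecesMatchesRequestOrder: panic if they disagree.
func (cl *client) checkPendingMatchesOrder() {
	cl.mu.Lock()
	defer cl.mu.Unlock()
	if len(cl.pendingPieces) != len(cl.requestOrder) {
		panic("pending pieces and request order out of sync")
	}
	for _, p := range cl.requestOrder {
		if !cl.pendingPieces[p] {
			panic("request order references non-pending piece")
		}
	}
}

func main() {
	cl := &client{pendingPieces: map[int]bool{}}
	cl.updatePiecePriority(0)
	cl.checkPendingMatchesOrder() // passes: both updated together
	fmt.Println(len(cl.requestOrder))
}
```

Mutating only `pendingPieces` directly would trip the consistency check, which is the second failure mode the commit message describes.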
@anacrolix merged commit fab369a into anacrolix:master on Feb 24, 2026
8 of 9 checks passed
anacrolix added a commit that referenced this pull request Feb 24, 2026
_pendingPieces.Add(0) was called without holding the client lock and
without updating the piece request order, causing two issues:
1. Data race with timer goroutines accessing the roaring bitmap
2. Panic in checkPendingPiecesMatchesRequestOrder due to _pendingPieces
   and requestOrder being out of sync

Replace the direct bitmap manipulation with updatePiecePriority(0, ...)
which properly updates both _pendingPieces and the piece request order
while holding the client lock.

Co-authored-by: info@weblogix.biz <admin@10gbps.weblogix.it>
Co-authored-by: Matt Joiner <anacrolix@gmail.com>
anacrolix added a commit that referenced this pull request Feb 26, 2026
* fix: race condition on connIdIssued in Client.request

The write to connIdIssued on error response (Connection ID mismatch)
was not protected by the client mutex, racing with concurrent
requestWriter goroutines that read connIdIssued via writeRequest ->
connect -> shouldReconnect (which do hold the lock).

Fixes erigontech/erigon#18901
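
A minimal sketch of this second fix's pattern, with illustrative names (`conn`, `invalidate`) standing in for the real tracker-client types: since `shouldReconnect` reads `connIdIssued` under the mutex, the error-path write must hold the same mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// conn is a stand-in for the tracker connection state: connIdIssued
// is read by requestWriter goroutines under the mutex, so the
// error-path write must take the same lock.
type conn struct {
	mu           sync.Mutex
	connIdIssued bool
}

// invalidate models the fixed error path (connection ID mismatch):
// the write is now protected by the same mutex the readers use.
func (c *conn) invalidate() {
	c.mu.Lock()
	c.connIdIssued = false
	c.mu.Unlock()
}

// shouldReconnect mirrors the reader side, which already held the lock.
func (c *conn) shouldReconnect() bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return !c.connIdIssued
}

func main() {
	c := &conn{connIdIssued: true}
	var wg sync.WaitGroup
	wg.Add(1)
	// Concurrent invalidation no longer races with readers.
	go func() {
		defer wg.Done()
		c.invalidate()
	}()
	wg.Wait()
	fmt.Println(c.shouldReconnect())
}
```

The pre-fix bug was exactly this shape with the `Lock`/`Unlock` pair missing from `invalidate`: a write racing with locked readers, which `go test -race` reports even though the readers are synchronized among themselves.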

* Update the golangci-lint workflow

I followed the instructions at https://github.com/golangci/golangci-lint-action.

* fix: data race in BenchmarkConnectionMainReadLoop (#1040)

_pendingPieces.Add(0) was called without holding the client lock and
without updating the piece request order, causing two issues:
1. Data race with timer goroutines accessing the roaring bitmap
2. Panic in checkPendingPiecesMatchesRequestOrder due to _pendingPieces
   and requestOrder being out of sync

Replace the direct bitmap manipulation with updatePiecePriority(0, ...)
which properly updates both _pendingPieces and the piece request order
while holding the client lock.

Co-authored-by: info@weblogix.biz <admin@10gbps.weblogix.it>
Co-authored-by: Matt Joiner <anacrolix@gmail.com>

---------

Co-authored-by: Matt Joiner <anacrolix@gmail.com>
Co-authored-by: info@weblogix.biz <admin@10gbps.weblogix.it>