
Conversation

@ericywl ericywl commented Dec 19, 2025

Summary

Fix potential data race between WriteTraceEvent in ProcessBatch and ReadTraceEvents in the sampling goroutine. Closes #17772.

Performance

Baseline

goos: darwin
goarch: arm64
pkg: github.com/elastic/apm-server/x-pack/apm-server/sampling
cpu: Apple M4 Pro
BenchmarkProcess-14              3096828               357.5 ns/op
BenchmarkProcess-14              3284749               359.1 ns/op
BenchmarkProcess-14              3191538               353.3 ns/op
BenchmarkProcess-14              3333675               345.3 ns/op
BenchmarkProcess-14              3331615               345.4 ns/op
BenchmarkProcess-100             3439828               344.9 ns/op
BenchmarkProcess-100             3526257               325.7 ns/op
BenchmarkProcess-100             3461000               322.2 ns/op
BenchmarkProcess-100             3480249               394.0 ns/op
BenchmarkProcess-100             2877158               608.7 ns/op

Single Mutex

goos: darwin
goarch: arm64
pkg: github.com/elastic/apm-server/x-pack/apm-server/sampling
cpu: Apple M4 Pro
BenchmarkProcess-14              3323389               337.5 ns/op
BenchmarkProcess-14              3482792               347.8 ns/op
BenchmarkProcess-14              3349486               333.5 ns/op
BenchmarkProcess-14              3437163               334.9 ns/op
BenchmarkProcess-14              3362293               333.8 ns/op
BenchmarkProcess-100             3516130               331.9 ns/op
BenchmarkProcess-100             3251674               339.8 ns/op
BenchmarkProcess-100             3432068               333.6 ns/op
BenchmarkProcess-100             3561802               368.2 ns/op
BenchmarkProcess-100             3290845               416.4 ns/op

ShardLockReadWriter

goos: darwin
goarch: arm64
pkg: github.com/elastic/apm-server/x-pack/apm-server/sampling
cpu: Apple M4 Pro
BenchmarkProcess-14              3272188               346.3 ns/op
BenchmarkProcess-14              3415772               330.7 ns/op
BenchmarkProcess-14              3487447               333.2 ns/op
BenchmarkProcess-14              3470158               337.4 ns/op
BenchmarkProcess-14              3467367               338.6 ns/op
BenchmarkProcess-100             3626730               319.7 ns/op
BenchmarkProcess-100             3722044               369.1 ns/op
BenchmarkProcess-100             3123934               349.0 ns/op
BenchmarkProcess-100             3771914               381.9 ns/op
BenchmarkProcess-100             3462182               340.4 ns/op

ShardLockReadWriter with RWMutex

goos: darwin
goarch: arm64
pkg: github.com/elastic/apm-server/x-pack/apm-server/sampling
cpu: Apple M4 Pro
BenchmarkProcess-14              3384834               343.1 ns/op
BenchmarkProcess-14              3284800               338.6 ns/op
BenchmarkProcess-14              3483038               348.2 ns/op
BenchmarkProcess-14              3217903               343.9 ns/op
BenchmarkProcess-14              3444128               334.4 ns/op
BenchmarkProcess-100             3115171               328.8 ns/op
BenchmarkProcess-100             3383545               329.2 ns/op
BenchmarkProcess-100             3069316               328.6 ns/op
BenchmarkProcess-100             3483396               334.0 ns/op
BenchmarkProcess-100             3359440               345.3 ns/op
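
For readers unfamiliar with the variants above: the sharded-lock idea is to protect storage access with a fixed pool of mutexes, choosing the mutex by hashing the trace ID, so that a concurrent write and read for the same trace serialize on one lock while unrelated traces rarely contend. The sketch below only illustrates that idea under assumed names (shardLockRW, eventStore); it is not the actual apm-server implementation.

package sampling

import (
	"hash/fnv"
	"sync"
)

// eventStore is a stand-in for the underlying event storage interface;
// the real apm-server interface differs.
type eventStore interface {
	WriteTraceEvent(traceID, id string, event any) error
	ReadTraceEvents(traceID string) ([]any, error)
}

// shardLockRW wraps an eventStore with a fixed pool of mutexes. All
// operations for a given trace ID hash to the same shard, so a concurrent
// WriteTraceEvent and ReadTraceEvents on one trace cannot interleave,
// while different traces usually take different locks.
type shardLockRW struct {
	shards [256]sync.Mutex
	store  eventStore
}

func (s *shardLockRW) lockFor(traceID string) *sync.Mutex {
	h := fnv.New32a()
	h.Write([]byte(traceID))
	return &s.shards[h.Sum32()%uint32(len(s.shards))]
}

func (s *shardLockRW) WriteTraceEvent(traceID, id string, event any) error {
	mu := s.lockFor(traceID)
	mu.Lock()
	defer mu.Unlock()
	return s.store.WriteTraceEvent(traceID, id, event)
}

func (s *shardLockRW) ReadTraceEvents(traceID string) ([]any, error) {
	mu := s.lockFor(traceID)
	mu.Lock()
	defer mu.Unlock()
	return s.store.ReadTraceEvents(traceID)
}

The "with RWMutex" variant presumably swaps sync.Mutex for sync.RWMutex per shard; judging from the numbers above, the variants are roughly comparable for this benchmark.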

@ericywl ericywl self-assigned this Dec 19, 2025
@ericywl ericywl changed the title from "Add test confirming the potential data race" to "tbs: Add test confirming the potential data race" Dec 19, 2025
@github-actions

🤖 GitHub comments

Just comment with:

  • run docs-build : Re-trigger the docs validation. (use unformatted text in the comment!)

mergify bot commented Dec 19, 2025

This pull request does not have a backport label. Could you fix it @ericywl? 🙏
To fix up this pull request, add the backport labels for the needed branches, such as:

  • backport-7.17 is the label to automatically backport to the 7.17 branch.
  • backport-8./d is the label to automatically backport to the 8./d branch. /d is the digit.
  • backport-9./d is the label to automatically backport to the 9./d branch. /d is the digit.
  • backport-active-all is the label that automatically backports to all active branches.
  • backport-active-8 is the label that automatically backports to all active minor branches for the 8 major.
  • backport-active-9 is the label that automatically backports to all active minor branches for the 9 major.

@carsonip carsonip (Member) left a comment

Thanks. The setup looks about right, but the specific case that needs to be tested is not the one I have in mind.

In this PR, txn1 and txn2 are both part of trace1 in a way that doesn't make sense to me: a trace should have only one root transaction, so in your current setup what should happen to txn2 is undefined.

To be clearer, what I'd like to test is similar: txn1 is the root transaction of trace1, and txn2 is a child of txn1.

  • t1: apm-server receives txn1
  • t2: the background sampling goroutine makes the sampling decision for txn1
  • t2': apm-server receives txn2
  • t3: the background sampling goroutine marks trace1 as sampled

The above is a race: because apm-server receives txn2 between t2 and t3, txn2 is lost forever. If txn2 arrives either before t2 or after t3, it is exported correctly.

It gets a bit theoretical, but I believe it is possible. Let me know if you have any questions.
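
One way to write a test that forces exactly this ordering is to gate the two goroutines with channels so that each step can only start once the previous one has completed. The sketch below (a stand-alone program, not the real apm-server test code) only demonstrates that sequencing pattern; the print statements mark where the real ingest and sampling-decision calls would go:

package main

import (
	"fmt"
	"sync"
)

func main() {
	t1Done := make(chan struct{})  // txn1 ingested (t1)
	t2Done := make(chan struct{})  // sampling decision made (t2)
	t2pDone := make(chan struct{}) // txn2 ingested (t2')
	var wg sync.WaitGroup
	wg.Add(2)

	// Ingest goroutine: receives txn1 first, and txn2 strictly between t2 and t3.
	go func() {
		defer wg.Done()
		fmt.Println("t1: ingest txn1 (root of trace1)") // real test: ProcessBatch with txn1
		close(t1Done)
		<-t2Done
		fmt.Println("t2': ingest txn2 (child of txn1)") // real test: ProcessBatch with txn2
		close(t2pDone)
	}()

	// Background sampling goroutine: decides for trace1, then waits for txn2
	// to have arrived before marking trace1 as sampled.
	go func() {
		defer wg.Done()
		<-t1Done
		fmt.Println("t2: make sampling decision for trace1")
		close(t2Done)
		<-t2pDone
		fmt.Println("t3: mark trace1 as sampled")
	}()

	wg.Wait()
}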

@ericywl ericywl force-pushed the tbs-potential-data-race branch from 1e2035e to 8f27d8c Compare December 22, 2025 05:12
@ericywl ericywl force-pushed the tbs-potential-data-race branch from 675d174 to a96c059 Compare December 22, 2025 05:20
@ericywl ericywl requested a review from carsonip December 22, 2025 05:21
@carsonip carsonip (Member) left a comment

Quick question: should the test pass or fail here? I assume that when the test passes, it means a race happened, right? If so, I think the test is correctly validating the race in its current state.

@ericywl ericywl requested a review from carsonip December 23, 2025 12:44
@carsonip carsonip (Member) left a comment

thanks, the approach looks good now

@carsonip carsonip (Member) left a comment

In the benchmarks, do you mind also running at a higher GOMAXPROCS, e.g. -cpu=14,100, and seeing if it makes any difference?
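
For reference, a typical invocation for that (run from the repository root; the package path and -count=5 are taken from the benchmark output above) would be something like:

  go test -run='^$' -bench=BenchmarkProcess -cpu=14,100 -count=5 ./x-pack/apm-server/sampling/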

@ericywl ericywl marked this pull request as ready for review December 29, 2025 03:26
@ericywl ericywl requested a review from a team as a code owner December 29, 2025 03:26
@ericywl ericywl changed the title from "tbs: Add test confirming the potential data race" to "tbs: Fix potential data race" Dec 29, 2025
@ericywl ericywl added the backport-active-9 Automated backport with mergify to all the active 9.[0-9]+ branches label Dec 29, 2025
@ericywl ericywl requested a review from carsonip December 29, 2025 12:41
@elasticmachine

💚 Build Succeeded

cc @ericywl

@carsonip carsonip (Member) left a comment

I'm terribly sorry - yes, the PR in its existing state will address the new race conditions introduced in 9.0 due to the lack of DB transactions, but correct me if I'm wrong: theoretically there's another type of race that is inherent in the RW design, and which also exists in 8.x, where:

  • goroutine A ProcessBatch: IsTraceSampled(traceID) returns ErrNotFound
  • background goroutine B responsible for publishing: WriteTraceSampled(traceID, true)
  • background goroutine B responsible for publishing: ReadTraceEvents(traceID)
  • goroutine A ProcessBatch: WriteTraceEvent(traceID, event1)

In this case, event1 will be written to the DB but dropped silently, because it is only written after the publishing goroutine has already read the trace's events.
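
To make that interleaving concrete, here is a small self-contained sketch with a toy in-memory store (the real eventstorage API differs); the four steps above are replayed sequentially in the losing order, and event1 ends up stranded in storage instead of being published:

package main

import "fmt"

// toyStore is a stand-in for the sampling event storage; the real
// apm-server storage API differs.
type toyStore struct {
	sampled map[string]bool
	events  map[string][]string
}

func (s *toyStore) isTraceSampled(traceID string) (bool, bool) {
	v, ok := s.sampled[traceID]
	return v, ok
}

func (s *toyStore) writeTraceSampled(traceID string, sampled bool) { s.sampled[traceID] = sampled }

func (s *toyStore) writeTraceEvent(traceID, event string) {
	s.events[traceID] = append(s.events[traceID], event)
}

func (s *toyStore) readTraceEvents(traceID string) []string { return s.events[traceID] }

func main() {
	s := &toyStore{sampled: map[string]bool{}, events: map[string][]string{}}
	var published []string

	// goroutine A (ProcessBatch): IsTraceSampled(traceID) returns ErrNotFound,
	// so A decides to buffer event1 in storage rather than publish it directly.
	_, found := s.isTraceSampled("trace1") // found == false models ErrNotFound

	// background goroutine B (publishing): WriteTraceSampled(traceID, true) ...
	s.writeTraceSampled("trace1", true)
	// ... followed by ReadTraceEvents(traceID), which sees nothing yet.
	published = append(published, s.readTraceEvents("trace1")...)

	// goroutine A resumes with its stale not-found result: WriteTraceEvent(traceID, event1).
	if !found {
		s.writeTraceEvent("trace1", "event1")
	}

	// event1 is now in storage, but nothing will read trace1's events again.
	fmt.Printf("published=%v stranded=%v\n", published, s.readTraceEvents("trace1"))
	// prints: published=[] stranded=[event1]
}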

Maybe we'll have to zoom out and rethink this. Either:

  • we give up on addressing flush-time races; or
  • introduce processor-level locking (see the sketch below); or
  • implement some less expensive handling that helps us recover events that fall victim to this race; or
  • merge this PR as-is to resolve this kind of race as an improvement, but create a follow-up issue to address this newly identified kind of race. I don't want us to merge this PR with the impression that we've already fixed the publish-time race.

thoughts?
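
For the processor-level locking option referenced above, one possible shape (a sketch only, with assumed names and a stand-in storage interface, not the actual apm-server code) is a single RWMutex owned by the processor: the ingest path holds the read lock across its check-then-write, and the publishing path holds the write lock across WriteTraceSampled and ReadTraceEvents, so the two critical sections can no longer interleave.

package sampling

import (
	"errors"
	"sync"
)

var errNotFound = errors.New("not found")

// sampledStore is a stand-in for the real trace-sampling storage.
type sampledStore interface {
	IsTraceSampled(traceID string) (bool, error) // returns errNotFound if no decision yet
	WriteTraceSampled(traceID string, sampled bool) error
	WriteTraceEvent(traceID, id string, event any) error
	ReadTraceEvents(traceID string) ([]any, error)
}

type lockedProcessor struct {
	mu    sync.RWMutex
	store sampledStore
}

// handleEvent is the ingest path (called from ProcessBatch). Holding the
// read lock across IsTraceSampled and WriteTraceEvent means the publishing
// path cannot run its WriteTraceSampled/ReadTraceEvents pair in between,
// which is exactly the window that loses event1 in the sequence above.
func (p *lockedProcessor) handleEvent(traceID, id string, event any, publish func(any)) error {
	p.mu.RLock()
	defer p.mu.RUnlock()
	sampled, err := p.store.IsTraceSampled(traceID)
	switch {
	case errors.Is(err, errNotFound):
		return p.store.WriteTraceEvent(traceID, id, event) // no decision yet: buffer the event
	case err != nil:
		return err
	case sampled:
		publish(event) // decision already made: publish directly
	}
	// a negative decision (sampled == false) means the event is dropped
	return nil
}

// publishSampledTrace is the background path that records a positive
// decision and publishes everything buffered so far. The write lock
// excludes all concurrent handleEvent calls for its duration.
func (p *lockedProcessor) publishSampledTrace(traceID string, publish func(any)) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	if err := p.store.WriteTraceSampled(traceID, true); err != nil {
		return err
	}
	events, err := p.store.ReadTraceEvents(traceID)
	if err != nil {
		return err
	}
	for _, e := range events {
		publish(e)
	}
	return nil
}

The trade-off is that publishing briefly blocks all ingestion; a sharded variant (one lock per trace-ID shard, as in the ShardLockReadWriter benchmarks above) would limit that contention while keeping the same per-trace exclusion.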

@ericywl ericywl (Author) commented Dec 30, 2025

In that case, my previous solution (on top of ShardLockRW) should be able to catch this. The publishing will be deferred to another goroutine that waits until event1 is written.

@carsonip carsonip (Member) left a comment

> In that case, my previous solution (on top of ShardLockRW) should be able to catch this. The publishing will be deferred to another goroutine that waits until event1 is written.

I studied f187e90. My understanding is that it increments and decrements a per-trace-ID counter before and after WriteTraceEvent in the ingest goroutine, and in the background publishing goroutine, between WriteTraceSampled and ReadTraceEvents, it checks whether the counter is 0; if it isn't, it retries later.

The issue with this is that it doesn't prevent the following sequence of events:

  • t1: ingest goroutine A IsTraceSampled
  • t2: background goroutine B WriteTraceSampled
  • t3: background goroutine B performs counter==0 check
  • t4: background goroutine B ReadTraceEvents
  • t4': ingest goroutine A +1 counter, WriteTraceEvent, -1 counter

The race happens and there is data loss when t4 < t4'. Therefore, in terms of correctness, f187e90 isn't race-proof.

(On a side note, if the counter increment happens before IsTraceSampled instead of before WriteTraceEvent, I think it might be correct. But even in that case I'd prefer a simpler design whose performance and memory implications are easier to reason about.)
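
For reference, a sketch of that variant with the increment moved before IsTraceSampled (the names, the storage shown only in comments, and whether this is actually sufficient in the real storage implementation are assumptions that would need verifying):

package sampling

import "sync"

// pendingWrites counts in-flight ingest operations per trace ID.
type pendingWrites struct {
	mu     sync.Mutex
	counts map[string]int
}

func (p *pendingWrites) inc(traceID string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.counts == nil {
		p.counts = map[string]int{}
	}
	p.counts[traceID]++
}

func (p *pendingWrites) dec(traceID string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.counts[traceID]--
	if p.counts[traceID] == 0 {
		delete(p.counts, traceID)
	}
}

func (p *pendingWrites) zero(traceID string) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.counts[traceID] == 0
}

// Ingest path (sketched in comments): the increment happens before
// IsTraceSampled. If the publishing goroutine later observes a zero
// counter, this ingest call has not started yet, so when it does run,
// IsTraceSampled should already see the sampled flag and the event is
// published directly instead of being written to storage.
//
//	pending.inc(traceID)
//	defer pending.dec(traceID)
//	sampled, err := store.IsTraceSampled(traceID)
//	if errors.Is(err, errNotFound) {
//	    return store.WriteTraceEvent(traceID, id, event)
//	}
//	// else publish or drop according to the decision
//
// Publishing path (sketched in comments): after WriteTraceSampled, wait or
// retry later until pending.zero(traceID) is true, then ReadTraceEvents.

Compared with f187e90 this only moves the increment earlier; the retry in the publishing path and the memory cost of the per-trace map are unchanged, which is the complexity trade-off mentioned above.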

I have some ideas on how to fix it; let's take it offline. In any case, we might want a more generic test (in addition to, or replacing, the existing one) that sends a lot of events around sampling-decision time. It may not be deterministic, but it will give us confidence about whether this class of race conditions is eliminated, without specifying the exact sequence.


Labels

backport-active-9 Automated backport with mergify to all the active 9.[0-9]+ branches

Development

Successfully merging this pull request may close these issues.

TBS: potential data loss in race condition between event arrival and receiving decision
