tbs: Fix potential data race #19948
base: main
Conversation
This pull request does not have a backport label. Could you fix it @ericywl? 🙏
Thanks. The setup looks about right, but the specific case that needs to be tested is not what I'm thinking about.
In this PR, txn1 and txn2 are both part of trace1, which doesn't make sense to me: a trace should only have one root transaction. In your current setup, what should happen to txn2 is undefined.
To be clearer, what I'd like to test is similar: txn1 is the root txn of trace1, and txn2 is a child of txn1.
- t1: apm server receives txn1
- t2: background sampling goroutine: apm server makes sampling decision for txn1
- t2': apm server receives txn2
- t3: background sampling goroutine: marks trace1 as sampled
The above is a race: apm server receives txn2 between t2 and t3, and as a result txn2 is lost forever. If txn2 arrives either before t2 or after t3, it is exported correctly.
It gets a bit theoretical but I believe it is possible. Lmk if you have any questions.
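To make the intended interleaving concrete, here is a minimal Go sketch of how a test could force the ordering t1 < t2 < t2' < t3 with channel sync points. This is not apm-server's actual test code; ingestTxn, makeSamplingDecision and markTraceSampled are hypothetical stand-ins for the real processor and storage calls.

```go
package sampling_test

import (
	"sync"
	"testing"
)

// Forces the ordering t1 < t2 < t2' < t3 described above, using channels as
// synchronization points. The commented-out helpers are hypothetical.
func TestTxn2LostBetweenDecisionAndMark(t *testing.T) {
	decisionMade := make(chan struct{}) // closed at t2
	txn2Ingested := make(chan struct{}) // closed at t2'
	var wg sync.WaitGroup

	wg.Add(1)
	go func() { // background sampling goroutine
		defer wg.Done()
		// t2: sampling decision made for txn1; trace1 not yet marked sampled.
		// makeSamplingDecision("trace1")
		close(decisionMade)
		<-txn2Ingested // hold back t3 until txn2 has arrived
		// t3: mark trace1 as sampled and publish its stored events.
		// markTraceSampled("trace1")
	}()

	// t1: apm server receives txn1 (root of trace1).
	// ingestTxn("trace1", "txn1")
	<-decisionMade
	// t2': apm server receives txn2 (child of txn1), inside the racy window.
	// ingestTxn("trace1", "txn2")
	close(txn2Ingested)

	wg.Wait()
	// Assert (omitted): txn2 is eventually exported; with the race it is lost.
}
```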
Force-pushed from 1e2035e to 8f27d8c
Force-pushed from 675d174 to a96c059
carsonip left a comment
qq: should the test pass or fail here? I assume when the test passes, it means a race happened, right? If so, I think the test is correctly validating the race in its current state
carsonip left a comment
thanks, the approach looks good now
carsonip left a comment
in the benchmarks do you mind also running at a higher GOMAXPROCS? e.g. -cpu=14,100 and see if it makes any difference?
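For reference, go test accepts -cpu to set GOMAXPROCS per benchmark run; an invocation along these lines (the benchmark pattern and package path are assumptions, adjust to the actual benchmarks):

```
go test -run=NONE -bench=. -benchmem -cpu=14,100 ./x-pack/apm-server/sampling/...
```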
💚 Build Succeeded
cc @ericywl
carsonip left a comment
I'm terribly sorry - yes, the PR in its existing state will address the new race conditions introduced in 9.0 due to the lack of db transactions, but (correct me if I'm wrong) theoretically there's another type of race that is inherent in the RW design, which also exists in 8.x, where:
- goroutine A ProcessBatch: IsTraceSampled(traceID) returns ErrNotFound
- background goroutine B responsible for publishing: WriteTraceSampled(traceID, true)
- background goroutine B responsible for publishing: ReadTraceEvents(traceID)
- goroutine A ProcessBatch: WriteTraceEvent(traceID, event1)
In this case, event1 will be written to DB and dropped silently.
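A self-contained sketch of that interleaving against a toy in-memory store (not the real eventstorage implementation; the method names follow this thread, but the signatures are simplified assumptions):

```go
package main

import "fmt"

// toyStore stands in for the TBS read-writer; it only models the calls above.
type toyStore struct {
	sampled map[string]bool
	events  map[string][]string
}

// IsTraceSampled: the second return value plays the role of ErrNotFound.
func (s *toyStore) IsTraceSampled(traceID string) (bool, bool) {
	v, ok := s.sampled[traceID]
	return v, ok
}

func (s *toyStore) WriteTraceSampled(traceID string, sampled bool) {
	s.sampled[traceID] = sampled
}

func (s *toyStore) WriteTraceEvent(traceID, event string) {
	s.events[traceID] = append(s.events[traceID], event)
}

func (s *toyStore) ReadTraceEvents(traceID string) []string {
	return s.events[traceID]
}

func main() {
	s := &toyStore{sampled: map[string]bool{}, events: map[string][]string{}}
	var published []string

	// goroutine A (ProcessBatch): IsTraceSampled returns "not found", so A
	// decides to buffer event1 in storage instead of publishing it directly.
	_, found := s.IsTraceSampled("trace1")

	// background goroutine B: marks the trace sampled and drains stored events.
	s.WriteTraceSampled("trace1", true)
	published = append(published, s.ReadTraceEvents("trace1")...) // drains nothing yet

	// goroutine A resumes, still acting on the stale "not found" decision: it
	// writes event1 to storage, but B has already drained this trace and will
	// not read it again, so event1 is stranded.
	if !found {
		s.WriteTraceEvent("trace1", "event1")
	}

	fmt.Printf("published=%v stranded=%v\n", published, s.ReadTraceEvents("trace1"))
	// prints: published=[] stranded=[event1]
}
```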
Maybe we'll have to zoom out and rethink this. Either:
- we'll give up addressing flush time races; or
- introduce processor level locking; or
- implement some less expensive handling that will help us get back events fallen victim to this race; or
- merge this PR as is to resolve this kind of race as an improvement, but create a follow-up issue to address this newly identified kind of race. I don't want us to merge this PR with the impression that we've already fixed the publish-time race.
thoughts?
In that case, my previous solution (on top of ShardLockRW) should be able to catch this. The publishing will be deferred to another goroutine that waits until event1 is written.
In that case, my previous solution (on top of ShardLockRW) should be able to catch this. The publishing will be deferred to another goroutine that waits until event1 is written.
I studied f187e90. My understanding is that it increments and decrements a per-trace-id counter before and after WriteTraceEvent in the ingest goroutine. And in the background publishing goroutine, between WriteTraceSampled and ReadTraceEvents it checks if the counter is 0. If it isn't 0, retry later.
The issue with this is that it doesn't prevent the following sequence of events:
- t1: ingest goroutine A IsTraceSampled
- t2: background goroutine B WriteTraceSampled
- t3: background goroutine B performs counter==0 check
- t4: background goroutine B ReadTraceEvents
- t4': ingest goroutine A +1 counter, WriteTraceEvent, -1 counter
Race happens and there is data loss when t4 < t4'. Therefore, in terms of correctness, f187e90 isn't race proof.
(On a side note, if the +1 counter happens before IsTraceSampled instead of WriteTraceEvent, I think it might be correct. But even in that case I'd prefer a simpler design with performance and memory implications that are easier to reason about.)
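As a rough reconstruction of that scheme (hypothetical code; f187e90's actual change may differ in detail), with the unguarded window marked in comments:

```go
package samplingsketch

import (
	"sync"
	"sync/atomic"
)

// storage mirrors the read-writer calls named in this thread; the signatures
// here are simplified assumptions.
type storage interface {
	IsTraceSampled(traceID string) (sampled, found bool)
	WriteTraceSampled(traceID string, sampled bool)
	WriteTraceEvent(traceID string, event any)
	ReadTraceEvents(traceID string) []any
}

// pending counts in-flight WriteTraceEvent calls per trace ID.
var pending sync.Map // traceID -> *atomic.Int64

func counterFor(traceID string) *atomic.Int64 {
	c, _ := pending.LoadOrStore(traceID, &atomic.Int64{})
	return c.(*atomic.Int64)
}

// Ingest goroutine: the counter only covers WriteTraceEvent, not the earlier
// IsTraceSampled check, so the "not found" decision can already be stale by
// the time the counter is incremented.
func ingest(s storage, traceID string, event any) {
	if _, found := s.IsTraceSampled(traceID); !found { // t1 (or t4' in the race)
		c := counterFor(traceID)
		c.Add(1)                          // +1 counter
		s.WriteTraceEvent(traceID, event) // buffer until a decision exists
		c.Add(-1)                         // -1 counter
	}
	// else: publish or drop according to the decision (omitted)
}

// Background publishing goroutine: checks the counter between
// WriteTraceSampled and ReadTraceEvents, retrying later if it is non-zero.
func publish(s storage, traceID string) (events []any, retry bool) {
	s.WriteTraceSampled(traceID, true)   // t2
	if counterFor(traceID).Load() != 0 { // t3: writers in flight, retry later
		return nil, true
	}
	// t4: nothing stops an ingest at t4' from slipping in after the check,
	// so an event written now is never read again.
	return s.ReadTraceEvents(traceID), false
}
```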
I have some ideas on how to fix it; let's take it offline. In any case, we might want a more generic test (in addition to, or replacing, the existing one) that sends a lot of events at around sampling-decision time. It may not be deterministic, but it will give us confidence that this class of race conditions is eliminated, without specifying the exact sequence.
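For the non-deterministic variant, a sketch of what such a stress test could look like (the ingester interface is a hypothetical stand-in for the TBS processor wired to an in-memory exporter, with Exported assumed to settle after both goroutines finish):

```go
package sampling_test

import (
	"fmt"
	"sync"
	"testing"
)

// ingester is a hypothetical stand-in for the processor under test.
type ingester interface {
	IngestTxn(traceID, txnID string)
	PublishSamplingDecision(traceID string) // assume the decision is "sampled"
	Exported() []string
}

// Hammers a single trace with writes concurrently with the sampling decision
// being published, then checks that nothing was lost. Not deterministic, but
// repeated runs (ideally under -race) build confidence that this class of
// interleavings is handled.
func stressAroundDecision(t *testing.T, p ingester) {
	const traceID, n = "trace1", 1000

	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // ingest path: keep writing child transactions of trace1
		defer wg.Done()
		for i := 0; i < n; i++ {
			p.IngestTxn(traceID, fmt.Sprintf("txn-%d", i))
		}
	}()
	go func() { // background path: publish the decision mid-stream
		defer wg.Done()
		p.PublishSamplingDecision(traceID)
	}()
	wg.Wait()

	if got := len(p.Exported()); got != n {
		t.Fatalf("lost events: exported %d of %d", got, n)
	}
}
```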
Summary
Fix potential data race between WriteTraceEvent in ProcessBatch and ReadTraceEvent in the sampling goroutine. Closes #17772.
Performance
Baseline
Single Mutex
ShardLockReadWriter
ShardLockReadWriter with RWMutex
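For context on the variants above, a minimal sketch of the sharded-locking idea behind a ShardLockReadWriter-style wrapper (not the PR's actual implementation): each trace ID hashes to one of a fixed number of shards and only that shard's lock is taken, so unrelated traces rarely contend; the RWMutex variant additionally lets readers share a shard. The "Single Mutex" variant corresponds to a single shard.

```go
package samplingsketch

import (
	"hash/fnv"
	"sync"
)

const numShards = 256

// shardLockRW wraps an underlying read-writer with one RWMutex per shard,
// keyed by a hash of the trace ID. Operations on different traces usually hit
// different shards and so don't block each other.
type shardLockRW struct {
	locks [numShards]sync.RWMutex
	// underlying storage omitted
}

func (s *shardLockRW) shard(traceID string) *sync.RWMutex {
	h := fnv.New32a()
	_, _ = h.Write([]byte(traceID))
	return &s.locks[h.Sum32()%numShards]
}

// writeTraceEvent takes the shard's write lock so it cannot interleave with a
// concurrent read/publish of the same trace.
func (s *shardLockRW) writeTraceEvent(traceID string, event any) {
	mu := s.shard(traceID)
	mu.Lock()
	defer mu.Unlock()
	// write event to underlying storage (omitted)
	_ = event
}

// readTraceEvents takes the same shard's read lock (RWMutex variant), so
// concurrent readers don't serialize while writers are still excluded.
func (s *shardLockRW) readTraceEvents(traceID string) []any {
	mu := s.shard(traceID)
	mu.RLock()
	defer mu.RUnlock()
	// read events from underlying storage (omitted)
	return nil
}
```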