Replace resumable compaction job unit test with compaction service unit test #14191
base: main
Conversation
}

if (!reader_status.ok()) {
  return reader_status;
This is to please ASSERT_STATUS_CHECK. Missing it (i.e., never entering the compaction_progress_reader.ReadRecord(&slice, &record) loop) is not a bug, because we have the blanket check below: if (progress_builder.HasAccumulatedSubcompactionProgress()).
I wonder if reader_status.PermitUncheckedError() would make more sense if the purpose of doing this is to please ASSERT_STATUS_CHECK
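For context, a minimal sketch (not this PR's code; the function name is hypothetical) of the two ways a Status can be marked as checked so a status-checked build does not complain: either inspect it explicitly, or call PermitUncheckedError() to acknowledge it is intentionally ignored.

```cpp
// Minimal sketch, assuming a RocksDB status-checked build; HandleReaderStatus
// is a made-up name, not code from this PR.
#include "rocksdb/status.h"

void HandleReaderStatus(const rocksdb::Status& reader_status) {
  // Option 1: explicitly inspect the status; calling ok() marks it checked.
  if (!reader_status.ok()) {
    return;  // the caller falls through to the blanket progress check
  }

  // Option 2 (alternative): declare the status intentionally ignored, which
  // also satisfies the status-checked build without branching on it.
  // reader_status.PermitUncheckedError();
}
```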
Force-pushed from 2e656e2 to 7006c52.
// no progress file exists, so it cannot resume and must start from
// scratch, processing all input keys again. Result: Phase 3 does the
// same amount of work as Phase 1.
EXPECT_EQ(final_phase_stats.count, phase1_stats.count);
Not strictly related to this PR, but since you are improving the test: IIUC, these stats are FILE_WRITE_COMPACTION_MICROS in RunCancelledCompaction() and RunCompaction(). Are they deterministic? I thought they were set by StopWatch. Asking for my own learning.
If they are not deterministic, I wonder if we should use read/write byte counts instead.
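If byte counts were preferred, a rough sketch of what reading them might look like is below (assumed test scaffolding, not this PR's code): ticker counters such as COMPACT_WRITE_BYTES are stable for a fixed workload, unlike StopWatch-based timing histograms.

```cpp
// Sketch only, assuming a statistics-enabled test DB; not the PR's test code.
#include <cstdint>
#include "rocksdb/options.h"
#include "rocksdb/statistics.h"

uint64_t CompactionWriteBytes(const rocksdb::Options& options) {
  // Requires options.statistics = rocksdb::CreateDBStatistics() before the
  // DB is opened; this ticker accumulates bytes written by compactions.
  return options.statistics->getTickerCount(rocksdb::COMPACT_WRITE_BYTES);
}
```

The phase comparison could then assert that a compaction which resumes from scratch writes roughly the same number of bytes as the original full compaction.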
Context/Summary:
compaction_job_test makes low-level assertions about which keys were saved and resumed in 1e5fa69, before the feature is integrated in a separate PR f7e4009. Such low-level tests make it difficult to assert data correctness and are hard to understand because they are tied to implementation details.
They are therefore replaced with db-level tests that check data correctness, which is what we ultimately care about. I also expanded the tests to cover wide columns and TimedPut(), which associates a key with a write time.
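To illustrate the kind of db-level coverage described above, here is a rough, hypothetical sketch (key names, column values, and the write time are assumptions, not this PR's test code) of writing a wide-column entity and a TimedPut() entry and reading the data back for a correctness check:

```cpp
// Rough, hypothetical sketch of a db-level correctness check; not the PR's
// test code. Key names, column values, and the write time are assumptions.
#include <cassert>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/wide_columns.h"
#include "rocksdb/write_batch.h"

void WriteAndVerifySketch(rocksdb::DB* db) {
  // Wide-column entity for the wide-column coverage.
  rocksdb::WideColumns columns{{"attr", "value"}};
  rocksdb::Status s = db->PutEntity(rocksdb::WriteOptions(),
                                    db->DefaultColumnFamily(), "wide_key",
                                    columns);
  assert(s.ok());

  // TimedPut() associates a key with a write time (Unix seconds here).
  rocksdb::WriteBatch batch;
  s = batch.TimedPut(db->DefaultColumnFamily(), "timed_key", "timed_value",
                     /*write_unix_time=*/1700000000);
  assert(s.ok());
  s = db->Write(rocksdb::WriteOptions(), &batch);
  assert(s.ok());

  // After the cancelled compaction is resumed, the data should read back
  // unchanged.
  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "timed_key", &value);
  assert(s.ok() && value == "timed_value");
}
```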
Test:
Verified that resumption at merge, single delete, and deletion at bottom leads to the expected error.