Move snapshot data write to async mode. #299
Conversation
Codecov Report
Attention: Patch coverage is

@@            Coverage Diff             @@
##             main     #299      +/-   ##
==========================================
- Coverage   63.15%   59.73%   -3.42%
==========================================
  Files          32       33       +1
  Lines        1900     3055    +1155
  Branches      204      364     +160
==========================================
+ Hits         1200     1825     +625
- Misses        600     1034     +434
- Partials      100      196      +96
                     err ? err.message() : "nil");
        }
    }
    std::unique_lock< std::shared_mutex > lock(mutex);
It seems this should be
std::unique_lock< std::shared_mutex > lock(ctx_->progress_lock);
If a blob batch returns an error due to ALLOC_BLK_ERR, the async_write might still be in progress.
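A minimal sketch of the race this comment points at, using hypothetical names (SnapshotContext, progress_lock, bytes_written, async_write are stand-ins, not the project's API): the completion handlers of in-flight async writes update shared progress state under ctx->progress_lock, so the error path for a failed blob batch has to take that same lock rather than an unrelated mutex, otherwise a completion that lands after the ALLOC_BLK_ERR can mutate the progress bookkeeping concurrently.

```cpp
// Minimal sketch, not the project's code. Names (SnapshotContext,
// progress_lock, bytes_written, async_write) are hypothetical.
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <shared_mutex>

struct SnapshotContext {
    std::shared_mutex progress_lock;  // guards all progress bookkeeping
    std::size_t bytes_written{0};
    bool aborted{false};
};

// Stand-in for an async write API: in real code the callback would fire
// later from the IO engine; here it runs inline to keep the sketch small.
void async_write(std::shared_ptr<SnapshotContext> ctx, std::size_t len,
                 std::function<void(int /*err*/)> on_done) {
    (void)ctx;
    (void)len;
    on_done(0);
}

// Error path for a failed blob batch (e.g. ALLOC_BLK_ERR). It must take the
// same lock the completion handlers use; locking a different mutex would not
// serialize against writes that are still in flight.
void on_batch_error(const std::shared_ptr<SnapshotContext>& ctx) {
    std::unique_lock<std::shared_mutex> lock(ctx->progress_lock);
    ctx->aborted = true;
}

int main() {
    auto ctx = std::make_shared<SnapshotContext>();
    async_write(ctx, 4096, [ctx](int err) {
        // Completion handler updates shared progress under the same lock.
        std::unique_lock<std::shared_mutex> lock(ctx->progress_lock);
        if (err == 0 && !ctx->aborted) ctx->bytes_written += 4096;
    });
    on_batch_error(ctx);
    std::cout << "written=" << ctx->bytes_written
              << " aborted=" << ctx->aborted << "\n";
}
```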
In order to provide a higher queue depth for the disk, which is optimal for the scheduler to merge requests. Also, IO and compute (checksum) can run in parallel.

Signed-off-by: Xiaoxi Chen <[email protected]>
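To illustrate the intent of the change, a minimal sketch follows (hypothetical names; async_write and compute_checksum are stand-ins, not the project's API): write batches are submitted without waiting for each completion, so the disk sees a deeper queue that the scheduler can merge, while the checksum of the next batch is computed on the CPU in parallel with the submission of the current one.

```cpp
// Minimal sketch, not the project's code. async_write and compute_checksum
// are hypothetical stand-ins for the IO engine call and the CRC routine.
#include <atomic>
#include <cstdint>
#include <future>
#include <iostream>
#include <vector>

using Batch = std::vector<std::uint8_t>;

std::uint32_t compute_checksum(const Batch& b) {  // CPU-bound work
    std::uint32_t sum = 0;
    for (auto byte : b) sum = sum * 31u + byte;
    return sum;
}

std::atomic<int> inflight{0};

// Stand-in for the async write submission: real code would hand the buffer
// to the IO engine and decrement `inflight` from the completion callback
// instead of immediately.
void async_write(const Batch& b, std::uint32_t crc) {
    (void)b;
    (void)crc;
    inflight.fetch_add(1);
    inflight.fetch_sub(1);
}

int main() {
    std::vector<Batch> batches(8, Batch(4096, 0xab));

    // Pipeline: while batch i is being submitted (and the disk works on the
    // already-queued batches), the checksum of batch i+1 runs on another core.
    auto next_crc = std::async(std::launch::async, compute_checksum,
                               std::cref(batches[0]));
    for (std::size_t i = 0; i < batches.size(); ++i) {
        std::uint32_t crc = next_crc.get();
        if (i + 1 < batches.size())
            next_crc = std::async(std::launch::async, compute_checksum,
                                  std::cref(batches[i + 1]));
        async_write(batches[i], crc);  // submit without waiting for completion
    }
    std::cout << "submitted " << batches.size() << " batches\n";
}
```

The real implementation would bound the in-flight count and decrement it from the IO completion callback; the sketch only shows the pipelining shape that lets the disk scheduler merge requests while checksums run on the CPU.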