etcdserver: prevent panic when two snapshots arrive in quick succession #21082
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: PhantomInTheWire. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.

Hi @PhantomInTheWire. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test

Can you provide a description of why you think this PR addresses the issue? The #18055 (comment) describes a different solution, based on file locking. Not saying it's wrong, but it would be good to have an overview of how you assume the snapshotter works, how this problem occurs, and why your approach solves it. If my understanding is correct, @ahrtr proposed using file locking to prevent deletion of a snapshot, because there is no direct communication between the snapshotter, the apply loop using snapshots, and the mechanism that cleans up snapshots. The only way they communicate is via the filesystem, e.g. through a lock file.
/retest
Codecov Report

❌ Patch coverage is

```
@@           Coverage Diff            @@
##             main   #21082    +/-  ##
==========================================
+ Coverage   68.39%   68.59%   +0.19%
==========================================
  Files         429      429
  Lines       35281    35303     +22
==========================================
+ Hits        24132    24217     +85
+ Misses       9742     9693     -49
+ Partials     1407     1393     -14
```

... and 24 files with indirect coverage changes.
Force-pushed: 8b70577 to e8348c4
hey @serathius, I've updated the PR description with what you asked, and explained why I did not use file locking as suggested by @ahrtr.
@ahrtr can you take a look as author of the proposal? #18055 (comment) |
Signed-off-by: Karan <[email protected]>
Force-pushed: e8348c4 to 008aeb2
rebased to main.
fixes: #18055

Approach

`OpenSnapshotBackend` now calls `ReserveDBSnapshot` to mark the snapshot index as "in-use" before attempting to access the file. Concurrent calls to `ReleaseSnapDBs` (triggered by incoming newer snapshots) check this reservation map and explicitly skip deletion for any reserved indices. This guarantees that the snapshot file currently being applied is protected from deletion, even if a newer snapshot arrives and triggers a cleanup during the apply process.

This PR implements an in-memory reservation mechanism rather than file locking because:

- Both the apply path (`OpenSnapshotBackend`) and the cleanup path (`ReleaseSnapDBs`) operate on the same `*Snapshotter` instance, allowing in-memory coordination.
- File locking (`flock`) behaves differently across platforms (Linux, Windows) and filesystems (NFS), while in-memory coordination is consistent everywhere. (`contributing.md` mentions only Linux is supported, but the Makefile does include Windows builds.)
- `OpenSnapshotBackend` renames the snapshot file; it is my understanding that file locks do not survive renames on many systems.
- The reservation is a simple map lookup protected by an `RWMutex`, with `Reserve`/`Release` co-located in one function using `defer`.

The fix adds a `reserved` map to track snapshots currently being applied, and `ReleaseSnapDBs` skips deletion of any reserved snapshots.
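The reservation mechanism described above can be sketched as follows. This is a minimal illustration of the idea, not the PR's actual code: the field name `reserved`, the `Reserve`/release pairing, and the `cleanup` helper are assumptions based on the description:

```go
package main

import (
	"fmt"
	"sync"
)

// Snapshotter sketches the in-memory reservation described in the PR:
// a map of snapshot indices currently being applied, guarded by an RWMutex.
type Snapshotter struct {
	mu       sync.RWMutex
	reserved map[uint64]struct{} // snapshot indices currently in use
}

// Reserve marks index as in-use and returns a release func, intended to
// be called via defer in the apply path (OpenSnapshotBackend).
func (s *Snapshotter) Reserve(index uint64) (release func()) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.reserved == nil {
		s.reserved = make(map[uint64]struct{})
	}
	s.reserved[index] = struct{}{}
	return func() {
		s.mu.Lock()
		defer s.mu.Unlock()
		delete(s.reserved, index)
	}
}

// cleanup mimics the ReleaseSnapDBs-style pass: it reports which of the
// given indices may be deleted, skipping any that are reserved.
func (s *Snapshotter) cleanup(indices []uint64) (deletable []uint64) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	for _, idx := range indices {
		if _, busy := s.reserved[idx]; busy {
			continue // snapshot is being applied; do not delete
		}
		deletable = append(deletable, idx)
	}
	return deletable
}

func main() {
	s := &Snapshotter{}
	release := s.Reserve(100)                 // apply path opens snapshot 100
	fmt.Println(s.cleanup([]uint64{90, 100})) // [90]: 100 is protected
	release()
	fmt.Println(s.cleanup([]uint64{90, 100})) // [90 100]: both deletable
}
```

Co-locating `Reserve` and its release closure means a single `defer release()` in the apply path cannot leak a reservation, which is the `defer`-based pairing the description mentions.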