Adds bounded, FIFO multi-request support for delegation scheduling in pallet-parachain-staking (#3550)
base: master
Conversation
WASM runtime size check:

Compared to target branch:
- Moonbase runtime: 1992 KB (+4 KB)
- Moonbeam runtime: 2108 KB (no changes) ✅
- Moonriver runtime: 2108 KB (no changes) ✅

Compared to latest release (runtime-4001):
- Moonbase runtime: 1992 KB (+220 KB compared to latest release)
- Moonbeam runtime: 2108 KB (+244 KB compared to latest release)
- Moonriver runtime: 2108 KB (+244 KB compared to latest release)
Coverage Report

```
@@            Coverage Diff             @@
##        master   elois/multiple-unbond   +/- ##
=========================================================
- Coverage   74.62%   74.61%   -0.01%
  Files         394      394
+ Lines       95725    95986    +261
=========================================================
+ Hits        71429    71616    +187
+ Misses      24296    24370     +74
```
RomarQ left a comment:
Early review, will have another look at this later today
Moonbase Weight Difference Report
Moonriver Weight Difference Report
Moonbeam Weight Difference Report
Replaced the two-pass logic over `scheduled_requests` (one `.any()` for revokes, one loop to sum decreases) with a single-pass fold.
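As a rough sketch of that single-pass fold (the `Action` type here is a hypothetical simplification; the pallet's real `ScheduledRequest` carries more fields):

```rust
// Hypothetical simplified request type standing in for ScheduledRequest.
enum Action {
    Revoke(u128),
    Decrease(u128),
}

/// One pass over a delegation's queue: any `Revoke` means the delegation is
/// treated as fully revoked; otherwise all `Decrease` amounts are summed
/// into a single effective decrease (capped at the bond).
fn effective_reduction(requests: &[Action], bond: u128) -> u128 {
    let (revoked, decrease) = requests
        .iter()
        .fold((false, 0u128), |(rev, dec), req| match req {
            Action::Revoke(_) => (true, dec),
            Action::Decrease(d) => (rev, dec.saturating_add(*d)),
        });
    if revoked { bond } else { decrease.min(bond) }
}
```

This mirrors the reward semantics described below: a pending revoke dominates, and multiple decreases aggregate.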
What does it do?
- In `parachain-staking`, each `(collator, delegator)` pair can now hold up to 50 pending bond-less/revoke requests, executed strictly in order.
- `WeightInfo` plumbing is updated so weights remain accurate for the new storage layout.

What important points should reviewers know?
- Each collator is bounded to at most `MaxTopDelegationsPerCandidate + MaxBottomDelegationsPerCandidate` delegators with pending requests, enforced via `DelegationScheduledRequestsPerCollator`.
- Invariant: `DelegationScheduledRequestsPerCollator[c]` always equals the number of delegators with non-empty queues for collator `c`.
- A migration moves existing requests into the `(collator, delegator)` double map, initializing the per-collator counters consistently.
- The migration is registered in `UnreleasedSingleBlockMigrations` for `moonbase`/`moonbeam`/`moonriver`.

What alternative implementations were considered?
- Keeping a single per-collator `BoundedVec` and embedding `(delegator, collator)` in each request: avoids `iter_prefix`, but was rejected due to unbounded storage proof size in PoV and higher on-chain weight.
- The chosen design is a per-pair `BoundedVec` plus a simple per-collator counter.

What value does it bring to the blockchain users?
What's solved in this change and what features are modified?
Brief summary of issue that should be resolved
The `parachain-staking` pallet previously allowed only one pending delegation request per `(collator, delegator)`, stored in a single `BoundedVec` keyed by collator.

High-level summary of feature changes or specifications of new feature
Multiple pending requests per delegation (FIFO):
- `DelegationScheduledRequests` is refactored into a double map keyed by `(collator, delegator)`.
- Each entry is a `BoundedVec` (capacity 50) of `ScheduledRequest`s.
- Each `(collator, delegator)` pair can now have up to 50 scheduled requests, executed strictly in FIFO order by `execute_delegation_request`.

Revoke semantics:
- A `Revoke` can only be scheduled when there is no other scheduled request for that `(collator, delegator)`.

Global per-collator bound on delegators with pending requests:
- Each collator can have at most `MaxTopDelegationsPerCandidate + MaxBottomDelegationsPerCandidate` distinct delegators with pending requests.

Reward behavior with multiple pending requests:
- If a pending `Revoke` exists, that delegation is treated as fully revoked for reward purposes.
- Otherwise, all pending `Decrease` amounts for that delegation are summed into a single effective decrease.

Benchmarks and weights kept in sync with new storage and accounting:
- `schedule_revoke_delegation`, `schedule_delegator_bond_less`, `cancel_delegation_request`, and `execute_delegator_revoke_delegation_worst` are updated to read `delegation_scheduled_requests(&collator, &delegator)`.
- The `pay_one_collator_reward_best` weight path was simplified so its weight depends only on the `(x, y)` parameters used in production, and all runtime weight files were regenerated accordingly to stay consistent with this shape.

What changes to storage structures, processes or high-level assumptions have been made?
High-level summary of modified assumptions
Before:
- At most one pending request per `(collator, delegator)`; storage keyed only by collator.
- All requests for a collator shared one bounded container (a single `BoundedVec`).

After:
- Each `(collator, delegator)` may have up to 50 pending requests, but the number of distinct delegators with pending requests per collator is bounded by `MaxTop + MaxBottom`.
- For every collator `C`: `DelegationScheduledRequestsPerCollator[C] == number of delegators D such that DelegationScheduledRequests[(C, D)] is non-empty`.
- For rewards, a pending `Revoke` dominates; otherwise the decreases are aggregated.

Low-level summary of process / storage changes
Storage changes:
- `DelegationScheduledRequests<T>`: changed from `StorageMap<collator, BoundedVec<ScheduledRequest<...>>>` to `StorageDoubleMap<(collator, delegator) -> BoundedVec<ScheduledRequest<...>, ConstU32<50>>>`.
- New `DelegationScheduledRequestsPerCollator<T>`: `StorageMap<collator, u32>`, counting how many delegators have at least one pending request for that collator.

Scheduling revoke / bond-less:
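This scheduling path can be sketched as an in-memory model (hypothetical simplified types and an illustrative bound of 3; the pallet derives its bound from `MaxTopDelegationsPerCandidate + MaxBottomDelegationsPerCandidate` and uses its own error types):

```rust
use std::collections::HashMap;

// Illustrative bounds only; not the pallet's configured values.
const MAX_DELEGATORS_WITH_REQUESTS: u32 = 3;
const MAX_REQUESTS_PER_PAIR: usize = 50;

#[derive(Default)]
struct Model {
    // (collator, delegator) -> FIFO queue of request amounts
    requests: HashMap<(u64, u64), Vec<u128>>,
    // collator -> number of delegators with a non-empty queue
    per_collator: HashMap<u64, u32>,
}

impl Model {
    fn schedule(&mut self, collator: u64, delegator: u64, amount: u128) -> Result<(), &'static str> {
        let is_new_pair = self
            .requests
            .get(&(collator, delegator))
            .map_or(true, |q| q.is_empty());
        if is_new_pair {
            // Only a *new* (collator, delegator) pair counts against the
            // per-collator bound on delegators with pending requests.
            let count = self.per_collator.get(&collator).copied().unwrap_or(0);
            if count >= MAX_DELEGATORS_WITH_REQUESTS {
                return Err("ExceedMaxDelegationsPerDelegator");
            }
        }
        let q = self.requests.entry((collator, delegator)).or_default();
        if q.len() >= MAX_REQUESTS_PER_PAIR {
            return Err("queue full");
        }
        q.push(amount);
        if is_new_pair {
            *self.per_collator.entry(collator).or_insert(0) += 1;
        }
        Ok(())
    }
}
```

Note that a second request for an existing pair does not touch the counter, which is what keeps the counter equal to the number of delegators (not requests) with pending state.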
- Check whether the `(collator, delegator)` pair is new (no entry in `DelegationScheduledRequests`).
- If it is new, compare `DelegationScheduledRequestsPerCollator[collator]` against `max_delegators_per_candidate()`; exceeding it fails with `ExceedMaxDelegationsPerDelegator`.
- On the first request for a `(collator, delegator)` pair: increment `DelegationScheduledRequestsPerCollator[collator]`.

Cancel / execute / removal:
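The removal-side bookkeeping can be sketched the same way (hypothetical in-memory maps in place of the storage items): pop the oldest request, and decrement the per-collator counter only when the pair's queue becomes empty.

```rust
use std::collections::HashMap;

/// Execute (remove) the oldest request for a pair, FIFO order.
/// When the queue empties, the pair's entry is dropped and the
/// per-collator counter is decremented, mirroring the pallet's invariant.
fn execute_oldest(
    requests: &mut HashMap<(u64, u64), Vec<u128>>,
    per_collator: &mut HashMap<u64, u32>,
    collator: u64,
    delegator: u64,
) -> Option<u128> {
    let q = requests.get_mut(&(collator, delegator))?;
    if q.is_empty() {
        return None;
    }
    let amount = q.remove(0); // FIFO: oldest request first
    if q.is_empty() {
        requests.remove(&(collator, delegator));
        if let Some(c) = per_collator.get_mut(&collator) {
            *c = c.saturating_sub(1);
        }
    }
    Some(amount)
}
```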
- Requests are removed from `DelegationScheduledRequests[(collator, delegator)]`; when the queue becomes empty, `DelegationScheduledRequestsPerCollator[collator]` is decremented.
- This applies to `delegation_cancel_request`, `delegation_execute_scheduled_request` (both revoke and decrease branches), and `delegation_remove_request_with_state` (used on kicks and candidate exits).

Candidate leave cleanup:
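The prefix-clear cleanup described for candidate exits can be sketched against the same hypothetical in-memory model (on-chain this is `clear_prefix` on the double map plus removal of the counter entry):

```rust
use std::collections::HashMap;

/// Drop every (candidate, *) queue and the candidate's counter entry,
/// analogous to DelegationScheduledRequests::clear_prefix(candidate, ...)
/// followed by removing DelegationScheduledRequestsPerCollator[candidate].
fn clear_candidate(
    requests: &mut HashMap<(u64, u64), Vec<u128>>,
    per_collator: &mut HashMap<u64, u32>,
    candidate: u64,
) {
    requests.retain(|(collator, _), _| *collator != candidate);
    per_collator.remove(&candidate);
}
```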
- `execute_leave_candidates_inner` now calls `DelegationScheduledRequests::clear_prefix(candidate, max_delegators_per_candidate(), None)` and removes `DelegationScheduledRequestsPerCollator[candidate]`.

Are there additional mechanisms or storage structures indirectly affected by these changes?
Indirectly affected components
Reward mechanics:
- The rewardable-delegator computation (`get_rewardable_delegators`) now reads the per-pair `(collator, delegator)` queues.

Auto-compounding metadata:
- Paths that call `delegation_remove_request_with_state` and remove auto-compound entries also decrement `DelegationScheduledRequestsPerCollator` when the last pending request for a delegator is removed.

Known side-effects (not directly visible in the diff)
- The way `less_total` is accounted changes, but stays consistent with the new FIFO queue semantics.

What risks have already been internally considered or handled?
Internal concerns and how they are addressed
Risk: Counter inconsistency.
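The invariant behind this risk (the counter equals the number of non-empty queues) can be checked with a small sketch, using hypothetical in-memory maps in place of the two storage items:

```rust
use std::collections::{HashMap, HashSet};

/// Recompute, per collator, how many delegators have a non-empty queue,
/// and compare against the stored counter (missing entries count as zero).
fn counter_consistent(
    requests: &HashMap<(u64, u64), Vec<u128>>,
    per_collator: &HashMap<u64, u32>,
) -> bool {
    let mut recomputed: HashMap<u64, u32> = HashMap::new();
    for ((collator, _delegator), queue) in requests {
        if !queue.is_empty() {
            *recomputed.entry(*collator).or_insert(0) += 1;
        }
    }
    let keys: HashSet<u64> = recomputed.keys().chain(per_collator.keys()).copied().collect();
    keys.into_iter().all(|c| {
        recomputed.get(&c).copied().unwrap_or(0) == per_collator.get(&c).copied().unwrap_or(0)
    })
}
```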
- `DelegationScheduledRequestsPerCollator` could become inconsistent with the actual queues if some paths skip updates.
- Mitigation: every path that adds the first or removes the last request for a `(collator, delegator)` pair updates the counter.
- A dedicated test (`cannot_exceed_max_delegators_with_pending_requests_per_collator`) saturates the per-collator state to the maximum and checks that the next schedule fails with `ExceedMaxDelegationsPerDelegator`.

Risk: Behavioral changes with multiple requests and revokes.
Multiple pending requests per delegation could break assumptions in existing logic (e.g. `less_total`, reward calculation, exit checks).
Adding new storage reads/writes might impact PoV size or violate prior weight assumptions.
Mitigation: the unbounded iteration over the old per-collator `DelegationScheduledRequests` was eliminated; the new logic is strictly bounded and constant-time.