
Conversation

@librelois (Collaborator) commented on Nov 19, 2025

What does it do?

  • Adds bounded, FIFO multi-request support for delegation scheduling in parachain-staking:
    • Each (collator, delegator) can now hold up to 50 pending bond-less/revoke requests, executed strictly in order.
    • Introduces a hard per-collator bound on how many delegators may have pending requests and tracks this with a dedicated counter map (see the storage sketch after this list).
    • Adapts benchmarks and WeightInfo plumbing so weights remain accurate for the new storage layout.
    • Includes a migration to move old single-map scheduled requests into the new double-map + counter structure.
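
A minimal sketch of the new storage shape, written as if inside the pallet module. The names come from this description; the hashers, query kinds and the exact ScheduledRequest generics are assumptions, not the literal pallet code:

```rust
// Sketch only: hashers, query kinds and the ScheduledRequest generics are assumptions.
#[pallet::storage]
pub type DelegationScheduledRequests<T: Config> = StorageDoubleMap<
	_,
	Blake2_128Concat,
	T::AccountId, // collator
	Blake2_128Concat,
	T::AccountId, // delegator
	BoundedVec<ScheduledRequest<T::AccountId, BalanceOf<T>>, ConstU32<50>>,
	ValueQuery,
>;

/// How many delegators currently have at least one pending request for this collator.
#[pallet::storage]
pub type DelegationScheduledRequestsPerCollator<T: Config> =
	StorageMap<_, Blake2_128Concat, T::AccountId, u32, ValueQuery>;
```

Keyed this way, each PoV read touches only a single delegation's queue (at most 50 entries) plus one u32 counter, rather than a collator-wide vector.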

What important points should reviewers know?

  • Externally compatible: no call names or parameters change; all behavior changes are internal to the pallet and its migration.
  • New invariants:
    • At most 50 requests per (collator, delegator).
    • At most MaxTopDelegationsPerCandidate + MaxBottomDelegationsPerCandidate delegators with pending requests per collator, enforced via DelegationScheduledRequestsPerCollator.
    • DelegationScheduledRequestsPerCollator[c] always equals the number of delegators with non-empty queues for collator c.
  • Revoke exclusivity:
    • A revoke can only be scheduled when no other request is pending for that delegation, and once scheduled it blocks further requests.
    • Conversely, if any bond-less request is pending, a revoke cannot be scheduled for that delegation.
  • PoV-conscious design and weight alignment: replaces unbounded prefix-iteration checks with O(1) counter reads/writes, and updates the staking benchmarks so that regenerated weights include the cost of the new storage accesses and reflect the internal accounting simplifications.
  • Migration safety: the migration:
    • Reads the legacy single-map scheduled requests.
    • Rewrites them into the new (collator, delegator) double map, initializing the per-collator counters consistently.
    • Cleans up the old storage, and is wired into UnreleasedSingleBlockMigrations for moonbase/moonbeam/moonriver.
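
A rough sketch of the migration shape, assuming the legacy request type carries the delegator inside each entry and that the request type is reused unchanged in the new layout; the old_storage alias and the grouping step are illustrative, not the actual migration code:

```rust
use frame_support::{pallet_prelude::*, traits::OnRuntimeUpgrade};
use sp_std::{collections::btree_map::BTreeMap, marker::PhantomData, vec::Vec};

pub struct MigrateScheduledRequestsToDoubleMap<T>(PhantomData<T>);

impl<T: Config> OnRuntimeUpgrade for MigrateScheduledRequestsToDoubleMap<T> {
	fn on_runtime_upgrade() -> Weight {
		let (mut reads, mut writes) = (0u64, 0u64);

		// old_storage::DelegationScheduledRequests is a #[storage_alias] over the
		// legacy single map keyed only by collator (definition elided). Drain it...
		for (collator, old_requests) in old_storage::DelegationScheduledRequests::<T>::drain() {
			reads += 1;

			// ...group its entries by delegator...
			let mut per_delegator: BTreeMap<T::AccountId, Vec<_>> = BTreeMap::new();
			for request in old_requests {
				per_delegator.entry(request.delegator.clone()).or_default().push(request);
			}

			// ...and rewrite them into the new (collator, delegator) double map,
			// initializing the per-collator counter consistently.
			let delegators = per_delegator.len() as u32;
			for (delegator, requests) in per_delegator {
				// The old layout held at most one request per delegation, so the
				// 50-entry bound cannot be exceeded here.
				let bounded = BoundedVec::<_, ConstU32<50>>::truncate_from(requests);
				DelegationScheduledRequests::<T>::insert(&collator, &delegator, bounded);
				writes += 1;
			}
			DelegationScheduledRequestsPerCollator::<T>::insert(&collator, delegators);
			writes += 1;
		}

		T::DbWeight::get().reads_writes(reads, writes)
	}
}
```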

What alternative implementations were considered?

  • Keeping a single map with a larger BoundedVec and embedding (delegator, collator) in each request:
    • Rejected because it would keep the per-collator PoV footprint large and make per-delegation reasoning and revocation logic more complex.
  • Iterating storage on each schedule to count delegators with pending requests:
    • Initially implemented using iter_prefix, but rejected due to unbounded storage proof size in PoV and higher on-chain weight.
  • Encoding multiple requests in a custom linked-list-like structure per delegator:
    • Considered but rejected as it would significantly complicate the implementation, for limited PoV gain compared to a bounded BoundedVec plus simple per-collator counter.

What value does it bring to the blockchain users?

  • More flexible delegation management:
    • Delegators can stage multiple bond-less operations over time and have them executed in a predictable FIFO manner, matching real-world unstaking workflows.

What's solved in this change and what features are modified?

Brief summary of issue that should be resolved

  • The parachain-staking pallet previously allowed only one pending delegation request per (collator, delegator), stored in a single BoundedVec keyed by collator.
  • We need to support multiple unbond (bond-less / revoke) requests per delegator per collator, while:
    • Preserving the external pallet API (no call name/parameter changes).
    • Enforcing clear bounds on:
      • The number of requests per delegation, and
      • The number of delegators with pending requests per collator.

High-level summary of feature changes or specifications of new feature

  • Multiple pending requests per delegation (FIFO):

    • DelegationScheduledRequests is refactored into a double map:
      • Key: (collator, delegator).
      • Value: a BoundedVec (capacity 50) of ScheduledRequests.
    • Each (collator, delegator) can now have up to 50 scheduled requests, executed strictly in FIFO order by execute_delegation_request.
  • Revoke semantics:

    • A Revoke can only be scheduled when there is no other scheduled request for that (collator, delegator).
    • While a revoke exists, no additional requests (e.g. bond-less) may be scheduled for that delegation.
  • Global per-collator bound on delegators with pending requests:

    • A collator may have at most
      MaxTopDelegationsPerCandidate + MaxBottomDelegationsPerCandidate distinct delegators with pending requests.
  • Reward behavior with multiple pending requests:

    • Reward calculation now aggregates all requests per delegator:
      • If any Revoke exists, that delegation is treated as fully revoked for reward purposes.
      • Otherwise, all Decrease amounts for that delegation are summed into a single effective decrease (see the fold sketch after this list).
  • Benchmarks and weights kept in sync with new storage and accounting:

    • Benchmarks for schedule_revoke_delegation, schedule_delegator_bond_less, cancel_delegation_request, and execute_delegator_revoke_delegation_worst are updated to:
      • Use the new double-map getter delegation_scheduled_requests(&collator, &delegator).
      • Exercise the worst-case paths where the per-collator counter is read and written (first/last request for a delegator).
    • The pay_one_collator_reward_best weight path was simplified so its weight depends only on:
      • The number of delegations actually paid for a collator.
      • The number of those delegations that use auto-compounding.
    • The corresponding benchmark no longer needs to synthesize scheduled-requests state; it directly drives the weight function with the (x, y) parameters used in production, and all runtime weight files were regenerated accordingly to stay consistent with this shape.
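
For illustration, the per-delegator aggregation can be pictured as a single-pass fold over a delegation's queue. DelegationAction is simplified here and the function is a sketch, not the literal pallet code:

```rust
/// Simplified stand-in for the pallet's per-request action.
enum DelegationAction<Balance> {
	Revoke(Balance),
	Decrease(Balance),
}

/// Effective change applied to a delegation for reward-snapshot purposes.
enum EffectiveChange<Balance> {
	/// Any pending Revoke dominates: treat the delegation as fully revoked.
	Revoked,
	/// Only decreases are pending: their amounts are summed.
	DecreasedBy(Balance),
}

fn effective_change<Balance>(requests: &[DelegationAction<Balance>]) -> EffectiveChange<Balance>
where
	Balance: Copy + Default + core::ops::Add<Output = Balance>,
{
	requests
		.iter()
		.fold(EffectiveChange::DecreasedBy(Balance::default()), |acc, request| {
			match (acc, request) {
				// A revoke anywhere in the queue wins.
				(EffectiveChange::Revoked, _) | (_, DelegationAction::Revoke(_)) => {
					EffectiveChange::Revoked
				}
				// Otherwise keep summing decreases.
				(EffectiveChange::DecreasedBy(total), DelegationAction::Decrease(amount)) => {
					EffectiveChange::DecreasedBy(total + *amount)
				}
			}
		})
}
```

For example, a queue of [Decrease(10), Decrease(5)] yields DecreasedBy(15), while [Decrease(10), Revoke(100)] yields Revoked; this matches the single-pass fold mentioned later in the review thread.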

What changes to storage structures, processes or high-level assumptions have been made?

High-level summary of modified assumptions

  • Before:

    • At most one scheduled request per (collator, delegator); storage keyed only by collator.
    • No explicit, separate bound on the number of delegators with pending requests per collator (implied only by the size of a single BoundedVec).
    • Reward logic assumed a single request per delegator and mapped it 1:1 into a per-delegator change.
  • After:

    • A (collator, delegator) may have up to 50 pending requests, but:
      • The number of delegators with non-empty queues per collator is explicitly bounded to MaxTop + MaxBottom.
    • New invariant: for any collator C
      DelegationScheduledRequestsPerCollator[C] == number of delegators D such that DelegationScheduledRequests[(C, D)] is non-empty.
    • Reward logic now treats a sequence of requests per delegator as a single effective change:
      • Revoke dominates; otherwise the decreases are aggregated.

Low-level summary of process / storage changes

  • Storage changes:

    • DelegationScheduledRequests<T>:
      • Old: StorageMap<collator, BoundedVec<ScheduledRequest<...>>>.
      • New: StorageDoubleMap<(collator, delegator) -> BoundedVec<ScheduledRequest<...>, ConstU32<50>>>.
    • New DelegationScheduledRequestsPerCollator<T>:
      • StorageMap<collator, u32>, counting how many delegators have at least one pending request for that collator.
  • Scheduling revoke / bond-less (sketched in code after this list):

    • On schedule:
      • Detect if (collator, delegator) is new (no entry in DelegationScheduledRequests).
      • If new:
        • Read DelegationScheduledRequestsPerCollator[collator].
        • Compare to max_delegators_per_candidate().
        • If at bound, return ExceedMaxDelegationsPerDelegator.
      • After successfully pushing the first request for a new (collator, delegator):
        • Increment DelegationScheduledRequestsPerCollator[collator].
  • Cancel / execute / removal:

    • When a delegator’s queue for a collator becomes empty:
      • Remove DelegationScheduledRequests[(collator, delegator)].
      • Decrement DelegationScheduledRequestsPerCollator[collator].
    • This is wired into:
      • delegation_cancel_request.
      • delegation_execute_scheduled_request (both revoke and decrease branches).
      • delegation_remove_request_with_state (used on kicks and candidate exits).
  • Candidate leave cleanup:

    • execute_leave_candidates_inner now:
      • Clears all scheduled requests for that candidate using
        DelegationScheduledRequests::clear_prefix(candidate, max_delegators_per_candidate(), None).
      • Removes DelegationScheduledRequestsPerCollator[candidate].
    • This ensures we don’t leave stale per-collator counters or pending queues when a candidate/collator leaves.
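
Put together, the schedule and cleanup paths behave roughly as sketched below, written as if inside the pallet. ExceedMaxDelegationsPerDelegator and max_delegators_per_candidate come from the description above; the other error names, the signatures, and which error is returned when the 50-request cap is hit are assumptions:

```rust
fn schedule_request<T: Config>(
	collator: &T::AccountId,
	delegator: &T::AccountId,
	request: ScheduledRequest<T::AccountId, BalanceOf<T>>,
) -> DispatchResult {
	let mut queue = DelegationScheduledRequests::<T>::get(collator, delegator);

	// Revoke exclusivity: an existing revoke blocks further requests,
	// and a new revoke requires an empty queue.
	ensure!(
		!queue.iter().any(|r| matches!(r.action, DelegationAction::Revoke(_))),
		Error::<T>::PendingDelegationRevoke
	);
	if matches!(request.action, DelegationAction::Revoke(_)) {
		ensure!(queue.is_empty(), Error::<T>::PendingDelegationRequestAlreadyExists);
	}

	let first_request_for_pair = queue.is_empty();
	if first_request_for_pair {
		// O(1) bound check instead of prefix iteration: at most
		// MaxTop + MaxBottom delegators may have pending requests per collator.
		ensure!(
			DelegationScheduledRequestsPerCollator::<T>::get(collator)
				< Pallet::<T>::max_delegators_per_candidate(),
			Error::<T>::ExceedMaxDelegationsPerDelegator
		);
	}

	// Per-delegation bound: the BoundedVec holds at most 50 requests.
	queue
		.try_push(request)
		.map_err(|_| Error::<T>::ExceedMaxDelegationsPerDelegator)?;
	DelegationScheduledRequests::<T>::insert(collator, delegator, queue);

	if first_request_for_pair {
		DelegationScheduledRequestsPerCollator::<T>::mutate(collator, |c| *c += 1);
	}
	Ok(())
}

/// Called from cancel/execute/removal paths after popping requests.
fn cleanup_if_empty<T: Config>(collator: &T::AccountId, delegator: &T::AccountId) {
	if DelegationScheduledRequests::<T>::get(collator, delegator).is_empty() {
		DelegationScheduledRequests::<T>::remove(collator, delegator);
		DelegationScheduledRequestsPerCollator::<T>::mutate(collator, |c| *c = c.saturating_sub(1));
	}
}
```

The key point is that the counter is touched only on first-push and last-pop transitions, which is what keeps the per-schedule cost O(1) and the counter equal to the number of non-empty queues.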

Are there additional mechanisms or storage structures indirectly affected by these changes?

Indirectly affected components

  • Reward mechanics:

    • Reward snapshot computation (get_rewardable_delegators) now:
      • Iterates over all (collator, delegator) queues.
      • Aggregates multiple requests per delegator into a single effective change.
    • This keeps reward distribution aligned with delegator intent even when there are multiple pending requests.
  • Auto-compounding metadata:

    • Existing paths that remove delegations (revokes, kicks, candidate leave) already call into delegation_remove_request_with_state and remove auto-compound entries.
    • These paths now also correctly reduce DelegationScheduledRequestsPerCollator when the last pending request for a delegator is removed.

Known side-effects (not directly visible in the diff)

  • Delegators can now stack multiple pending decreases (or a revoke plus historic decreases) for a delegation. This slightly changes the timing/shape of how less_total is accounted, but in a way consistent with the new FIFO queue semantics.

What risks have already been internally considered or handled?

Internal concerns and how they are addressed

  • Risk: Counter inconsistency.
    DelegationScheduledRequestsPerCollator could become inconsistent with the actual queues if some paths skip updates.

    • Mitigation:
      • Every path that can create or destroy the first/last request for a (collator, delegator) pair:
        • Schedules: increments the counter only when the first request is added.
        • Cancels / executes / removes: decrements the counter only when the queue becomes empty.
      • A dedicated test (cannot_exceed_max_delegators_with_pending_requests_per_collator) saturates the per-collator state to the maximum and checks that the next schedule fails with ExceedMaxDelegationsPerDelegator (a consistency-check sketch follows this list).
  • Risk: Behavioral changes with multiple requests and revokes.
    Multiple pending requests per delegation could break assumptions in existing logic (e.g. less_total, reward calculation, exit checks).

    • Mitigation:
      • New tests cover:
        • Multiple bond-less requests per delegation and their FIFO execution.
        • Revoke blocking new requests once scheduled.
        • Capacity limits (50 requests) per delegation.
        • Existing tests around revokes, delegation exits, and reward distribution continue to pass unchanged, validating compatibility at the behavior level.
  • Risk: PoV / weight regression.
    Adding new storage reads/writes might impact PoV size or violate prior weight assumptions.

    • Mitigation:
      • The most problematic part (prefix-iteration over DelegationScheduledRequests) was eliminated; the new logic is strictly bounded and constant-time.
      • The added read/write is minor relative to the previous worst-case and within the existing upper bounds used in weight functions.
      • Full pallet test suite passes, including weight-related tests; any future re-benchmarking can treat the new storage accesses explicitly.
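
The counter invariant called out above also lends itself to a try-state style consistency check. This is a sketch using the storage names from this PR; the function itself is illustrative and assumes both maps use ValueQuery:

```rust
use sp_std::collections::btree_map::BTreeMap;

/// Recount delegators with non-empty queues per collator and compare against
/// the stored DelegationScheduledRequestsPerCollator counters.
fn check_scheduled_request_counters<T: Config>() -> Result<(), &'static str> {
	let mut recount: BTreeMap<T::AccountId, u32> = BTreeMap::new();
	for (collator, _delegator, queue) in DelegationScheduledRequests::<T>::iter() {
		if !queue.is_empty() {
			*recount.entry(collator).or_default() += 1;
		}
	}

	for (collator, stored) in DelegationScheduledRequestsPerCollator::<T>::iter() {
		if recount.get(&collator).copied().unwrap_or(0) != stored {
			return Err("DelegationScheduledRequestsPerCollator out of sync with queues");
		}
	}
	for (collator, counted) in recount {
		if DelegationScheduledRequestsPerCollator::<T>::get(&collator) != counted {
			return Err("non-empty queue exists without a matching per-collator counter");
		}
	}
	Ok(())
}
```

The dedicated test mentioned above covers the saturation path; a recount like this guards the equality invariant itself.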

@librelois added the B7-runtimenoteworthy, D1-runtime-migration and D9-needsaudit👮 labels on Nov 19, 2025
@coderabbitai bot commented on Nov 19, 2025

Important

Review skipped

Auto reviews are limited based on label configuration.

Required labels (at least one):
  • agent-review

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


@github-actions bot (Contributor) commented on Nov 19, 2025

WASM runtime size check:

Compared to target branch

Moonbase runtime: 1992 KB (+4 KB) ⚠️

Moonbeam runtime: 2108 KB (no changes) ✅

Moonriver runtime: 2108 KB (no changes) ✅

Compared to latest release (runtime-4001)

Moonbase runtime: 1992 KB (+220 KB compared to latest release) ⚠️

Moonbeam runtime: 2108 KB (+244 KB compared to latest release) ⚠️

Moonriver runtime: 2108 KB (+244 KB compared to latest release) ⚠️

@github-actions bot (Contributor) commented on Nov 19, 2025

Coverage Report

@@                    Coverage Diff                    @@
##           master   elois/multiple-unbond      +/-   ##
=========================================================
- Coverage   74.62%                  74.61%   -0.01%     
  Files         394                     394              
+ Lines       95725                   95986     +261     
=========================================================
+ Hits        71429                   71616     +187     
+ Misses      24296                   24370      +74     
Files Changed Coverage
/pallets/parachain-staking/src/delegation_requests.rs 89.12% (-1.94%) 🔽
/pallets/parachain-staking/src/lib.rs 93.26% (+0.09%) 🔼
/pallets/parachain-staking/src/tests.rs 99.18% (-0.09%) 🔽
/pallets/parachain-staking/src/types.rs 85.78% (-1.77%) 🔽
/pallets/parachain-staking/src/weights.rs 42.89% (-0.02%) 🔽
/runtime/moonbase/src/weights/pallet_parachain_staking.rs 23.79% (-0.18%) 🔽
/runtime/moonbeam/src/weights/pallet_parachain_staking.rs 23.79% (-0.18%) 🔽
/runtime/moonriver/src/weights/pallet_parachain_staking.rs 23.79% (-0.18%) 🔽

Coverage generated Mon Nov 24 16:30:26 UTC 2025

@librelois marked this pull request as ready for review on November 19, 2025 at 19:34

@RomarQ (Contributor) left a comment


Early review, will have another look at this later today

@github-actions bot (Contributor) commented

Moonbase Weight Difference Report

File Extrinsic Old New Change Percent

Moonriver Weight Difference Report

File Extrinsic Old New Change Percent

Moonbeam Weight Difference Report

File Extrinsic Old New Change Percent

Replaced the two-pass logic over scheduled_requests (one .any() for revokes, one loop to sum decreases) with a single-pass fold.

Labels

B7-runtimenoteworthy: Changes should be noted in any runtime-upgrade release notes
D1-runtime-migration: PR introduces code that might require downstream chains to run a runtime upgrade.
D9-needsaudit👮: PR contains changes to fund-managing logic that should be properly reviewed and externally audited
not-breaking: Does not need to be mentioned in breaking changes
