Conversation

@serichoi65 (Contributor) commented Nov 17, 2025

Description

Operators were not earning rewards during the deallocation queue period, even though their allocations remained at risk until the effective_date.

Current behavior:

  • Operator deallocates 100 from allocation on Day 1 (effective_date = Day 15)
  • operator_allocations table immediately shows reduced allocation
  • Rewards calculation uses the reduced amount → ❌ Operator loses 14 days of rewards

Expected behavior:

  • Operator should continue earning on OLD (higher) allocation until effective_date
  • After effective_date, earn on NEW (lower) allocation

Solution

Added AdjustOperatorShareSnapshotsForDeallocationQueue() function that:

  1. Reads magnitude_decrease from deallocation_queue_snapshots
  2. Aggregates across all AVSs per (operator, strategy)
  3. Adds the decrease back to operator_share_snapshots during queue period

This mirrors the existing withdrawal queue adjustment for stakers.
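A minimal sketch of the shape this function could take, mirroring the AdjustStakerShareSnapshotsForWithdrawalQueue pattern quoted later in this thread. The db field, the queued_date column, and the insert-an-adjustment-row semantics are assumptions, not this PR's actual code:

package rewards

import (
	"database/sql"

	"gorm.io/gorm"
)

// Stand-in for the real struct in this repo; the db field name is an assumption.
type RewardsCalculator struct {
	db *gorm.DB
}

// AdjustOperatorShareSnapshotsForDeallocationQueue adds queued magnitude
// decreases back into operator_share_snapshots for days on which the
// deallocation is queued but not yet effective.
func (r *RewardsCalculator) AdjustOperatorShareSnapshotsForDeallocationQueue(snapshotDate string) error {
	adjustQuery := `
		insert into operator_share_snapshots (operator, strategy, shares, snapshot)
		select
			dqs.operator,
			dqs.strategy,
			sum(dqs.magnitude_decrease) as shares, -- aggregate across all AVSs
			@snapshotDate as snapshot
		from deallocation_queue_snapshots dqs
		where dqs.queued_date <= @snapshotDate   -- already queued at the snapshot (assumed column)
		  and dqs.effective_date > @snapshotDate -- not yet effective: still at risk
		group by dqs.operator, dqs.strategy
	`
	res := r.db.Exec(adjustQuery, sql.Named("snapshotDate", snapshotDate))
	return res.Error
}

Because the filter is purely on effective_date relative to the snapshot date, rows stop being added the moment the deallocation becomes effective, which is what makes manual cleanup unnecessary.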

Changes

  • Added adjustment function in deallocationQueueShareSnapshots.go
  • Integrated into the rewards calculation pipeline in rewards.go (lines 684-689)
  • Automatically filters by effective_date; no manual cleanup needed

Type of change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • I have checked my code and corrected any misspellings

@serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 53e12db to ef95a04 on November 20, 2025 14:18
@serichoi65 changed the base branch from master to seri.choi/withdrawal-deallocation-queue on November 20, 2025 14:19
@serichoi65 marked this pull request as ready for review on November 20, 2025 14:23
@serichoi65 requested a review from a team as a code owner on November 20, 2025 14:23
@serichoi65 requested review from 0xrajath and seanmcgary and removed request for a team on November 20, 2025 14:23
qsw.strategy,
qsw.shares_to_withdraw as shares,
b_queued.block_time as queued_timestamp,
date(b_queued.block_time) as queued_date,
Contributor:
Why is rounding being done here (casting block_time to a date)? My thinking is that the purpose of this table is to keep track of shares that were in the queue; then, once they are completable, we insert a diff back into the staker_shares table.

Contributor (author):
This is date rounding, not allocation rounding. It converts block timestamps to dates to determine what day a withdrawal was queued, compare that with snapshot_date, and calculate the +14-day withdrawable_date.
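Concretely, the derivation being described is roughly the following (a sketch assuming Postgres-style date arithmetic; note that a later comment in this thread points out the +14 days should ultimately be applied to the raw block_time rather than the truncated date):

// Sketch only: derive queued_date and a +14-day withdrawable_date from the
// queue block's timestamp. Table and alias names follow the quoted diff.
const queuedDatesQuery = `
	select
		date(b_queued.block_time) as queued_date,
		date(b_queued.block_time) + interval '14' day as withdrawable_date
	from queued_slashing_withdrawals qsw
	inner join blocks b_queued on qsw.block_number = b_queued.number
`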

Contributor:
I don't think we need to do date rounding either. Ideally we just insert the share difference, once the withdrawal is completable, into the staker_shares table. That way, we let the logic we've already written handle the date rounding (staker_share_snapshots)

Contributor:
A design principle that makes sense is that we should do as little rounding as possible. For this functionality, I'd like us to ideally just insert into the staker_shares table. With that in mind, maybe we should have a parallel staker shares table just for rewards?

@seanmcgary (Member) commented Dec 4, 2025:
> A design principle that makes sense is that we should do as little rounding as possible

+1 to this. Don't round; treat this table as a time-series table like the staker shares table.

> I'd like us to ideally just insert into the staker_shares table.

Don't do this. The resulting values for the snapshots table should just query the staker shares and whatever other tables are needed to derive the values. Since we need to account for some kind of balances based on an effective timestamp, there's no clean way to trigger an insert for that, so querying and deriving is much cleaner and doesn't pollute the staker_shares table, which is entirely sourced from on-chain events.
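A sketch of the derive-at-query-time approach suggested here; the join keys and the queued_timestamp column are assumptions:

// Sketch only: compute the rewards view of shares without mutating
// staker_shares, by adding back withdrawals that were queued but are not yet
// completable as of the snapshot date.
const rewardsSharesQuery = `
	select
		ss.staker,
		ss.strategy,
		ss.shares + coalesce(sum(qsw.shares_to_withdraw), 0) as shares
	from staker_shares ss
	left join queued_slashing_withdrawals qsw
		on qsw.staker = ss.staker
		and qsw.strategy = ss.strategy
		and qsw.queued_timestamp <= @snapshotDate                    -- already queued...
		and qsw.queued_timestamp + interval '14' day > @snapshotDate -- ...but not yet completable
	group by ss.staker, ss.strategy, ss.shares
`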

Contributor:
Good point. We should just join a parallel withdrawal_queue table with staker_shares and then have that processed into staker_share_snapshots.

operator,
avs,
strategy,
magnitude_decrease,
@ypatil12 (Contributor) commented Nov 28, 2025:
Why store the diff? It's fine to just store the magnitude itself and then get the latest magnitude for the given day when calculating the slashable stake.

Contributor (author):
It would also work to store prev_magnitude, but I think tracking the diff makes it easier to handle multiple pending deallocations and to aggregate.
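For illustration, the aggregation this enables (a sketch using the deallocation_queue_snapshots columns named in this PR): any number of pending deallocations across AVSs collapses into a single SUM, whereas storing absolute magnitudes would first require picking the latest row per (operator, avs, strategy).

const pendingDecreaseQuery = `
	select operator, strategy, sum(magnitude_decrease) as total_decrease
	from deallocation_queue_snapshots
	where effective_date > @snapshotDate
	group by operator, strategy
`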

Contributor:
One invariant of the core protocol is that there can be at most one pending allocation or deallocation at a time. Unsure if we can take advantage of this. It also seems like we're not using the diff in the operatorAllocationSnapshots file.

@ypatil12 (Contributor) commented Nov 29, 2025

The withdrawals are still processed immediately instead of 14 days later. The stakerShares file also has to be edited to reflect this. One thing to think about here is that we would (presumably) want the sidecar API to return the actual shares of the user in the core protocol, which should be immediately decremented on a queue. However, for rewards we do not want the shares to be immediately decremented.

@serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch 2 times, most recently from f51270b to c6ae65a on December 1, 2025 09:44
@serichoi65 force-pushed the seri.choi/new-queue-calculation branch 2 times, most recently from 11b9e55 to d06c153 on December 1, 2025 10:24
@serichoi65 (Contributor, author) commented Dec 1, 2025

@ypatil12 This is the current architecture:

Blockchain Event (SlashingWithdrawalQueued)
  ↓
eigenState: Create negative delta (protocol truth)
  ↓
staker_share_deltas table (single source of truth)
  ↓
Rewards: Read deltas + apply queue adjustment

Rationale:

  • eigenState should always reflect protocol state
    • When SlashingWithdrawalQueued fires, shares ARE immediately decremented on-chain
    • eigenState indexer mirrors this exactly - that's its job
    • Changing this breaks the "state mirror" principle
  • Single source of truth
    • All downstream consumers read from staker_share_deltas
    • Rewards calculator applies its domain logic on top
    • Clean data flow: raw state → business logic

That's why it would be better not to edit stakerShares, but instead to add a separate deltas table so rewards are calculated correctly.

@serichoi65 force-pushed the seri.choi/new-queue-calculation branch from d06c153 to 6d252e7 on December 1, 2025 10:38
@serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch 2 times, most recently from bfc21ac to 4d8299b on December 1, 2025 17:08
@serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 6d252e7 to ef0aba3 on December 1, 2025 22:11
@ypatil12 (Contributor) left a comment:
I think it would be helpful to have sequence diagrams of how events are processed, including the propagation of an event through each file, for allocation/deallocation and withdrawals.


func (r *RewardsCalculator) AdjustStakerShareSnapshotsForWithdrawalQueue(snapshotDate string) error {
	adjustQuery := `
		insert into staker_share_snapshots(staker, strategy, shares, snapshot)
Contributor:
I'm not sure if we can actually do this. My hunch is that we can run into issues with the snapshotting that's already been done and overwrite state. For example, say the staker shares table looked like:

Day 0: Alice queues a withdrawal for 30 shares.

  • Day 1: Alice has 35 shares
  • Day 2: Alice has 40 shares (deposits 5 extra)

Then on Day 2, we also apply a queued withdrawal decrementing her shares down to 10. How do we handle this diff appropriately?

Furthermore, staker_shares keeps track of a running sum of shares, and I'm not sure this handles the shares that were previously credited to the staker.

@serichoi65 force-pushed the seri.choi/new-queue-calculation branch from ef0aba3 to 9faf22a on December 2, 2025 15:31
@ypatil12 (Contributor) left a comment:
I think it might be worth writing down a diagram or flow of how the staker shares propagate on a withdrawal, and the tables at each step of the process.

@serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch from 41d0383 to 691afe7 on December 3, 2025 06:39
ROW_NUMBER() OVER (PARTITION BY operator, avs, strategy, operator_set_id, cast(block_time AS DATE) ORDER BY block_time DESC, log_index DESC) AS rn
FROM operator_allocations oa
INNER JOIN blocks b ON oa.block_number = b.number
INNER JOIN blocks b ON oa.effective_block = b.number
Member:
How are you dealing with backwards compatibility?

Contributor (author):
COALESCE(oa.effective_block, oa.block_number) handles old/new records now!

Contributor:
This file didn't exist before, so there shouldn't be a backwards compatibility issue?

-- First allocation: round down to current day (conservative default)
date_trunc('day', block_time)
-- First allocation: round up to next day
date_trunc('day', block_time) + INTERVAL '1' day
Member:
How are you dealing with backwards compatibility?

Contributor:
Is this an issue if operatorAllocationSnapshots didn't exist before?

@serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 4ddeab6 to 89ab4b5 on December 4, 2025 22:43
@ypatil12 (Contributor) commented Dec 6, 2025

My main point on the above design is that we should strive to insert share increments/decrements into base bronze tables and then have the snapshot table process those share changes. It's cleaner than inserting a snapshotted record into an already snapshotted table... unless I'm missing something:

So... you have withdrawal_queue_shares, which is joined with staker_shares, and that new table is eventually processed into staker_share_snapshots.

@serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch 2 times, most recently from 3174111 to 9f3db12 on December 7, 2025 13:24
@ypatil12 (Contributor) left a comment:
Withdrawal review

strategy,
shares,
block_time,
date_trunc('day', block_time) + INTERVAL '1' day AS snapshot_time
Contributor:
We should join the withdrawal_queue_adjustments here with staker_shares, and then have the snapshot calculation run

`alter table queued_slashing_withdrawals add constraint fk_completion_block foreign key (completion_block_number) references blocks(number) on delete set null`,
// Note: Withdrawal queue logic uses withdrawable_date to determine when
// shares should stop earning rewards. The withdrawable_date is calculated as
// queued_date + 14 days. No additional columns needed in queued_slashing_withdrawals.
Contributor:
Don't we also need to properly handle backwards compatibility with the queued_slashing_withdrawals table?

@serichoi65 force-pushed the seri.choi/new-queue-calculation branch 6 times, most recently from ca53477 to 472ee17 on December 10, 2025 15:31
@ypatil12 (Contributor) left a comment:
Accumulating the wads is smart! I brought up some minor cases, but generally we should be good to start really unit testing the stakerShareSnapshots.go file.

WHERE qsw.operator = @operator
AND qsw.strategy = @strategy
-- Withdrawal was queued before this slash
AND qsw.block_number < @slashBlockNumber
Contributor:
We should check both block number and log index, since a slash can happen right after a withdrawal was queued.

SELECT number FROM blocks WHERE block_time <= TIMESTAMP '{{.cutoffDate}}' ORDER BY number DESC LIMIT 1
)
ORDER BY adj.slash_block_number DESC
LIMIT 1),
@ypatil12 (Contributor) commented Dec 12, 2025:
One thing we also need to properly handle is beacon chain slashes. Native ETH (eigenpod) shares can be slashed by both the beacon chain and an AVS. We can handle this after we get more traction on ensuring the other parts are correct, but I want to call this out. In the shares.go eigenState we do:

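	// The lines below compute the slashed proportion in WAD (1e18 = 100%):
	// wadSlashed = 1e18 * (1 - NewBeaconChainSlashingFactor / PrevBeaconChainSlashingFactor)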
	wadSlashed := big.NewInt(1e18)
	wadSlashed = wadSlashed.Mul(wadSlashed, new(big.Int).SetUint64(outputData.NewBeaconChainSlashingFactor))
	wadSlashed = wadSlashed.Div(wadSlashed, new(big.Int).SetUint64(outputData.PrevBeaconChainSlashingFactor))
	wadSlashed = wadSlashed.Sub(big.NewInt(1e18), wadSlashed)

Contributor:
Will write down an equation for this tomorrow.

Contributor:
I believe this case is already handled, since beacon chain slashes propagate through decrements in stakerShares. What we need to ensure is that these types of slashes are not "double-decremented".

We handle AVS slashes through the pathway of the OperatorSlashed event, which should not affect the queued_slashing_withdrawals table. Is there anywhere where we blindly assume a decrease in staker shares is a queued withdrawal?

Here is an integration test we should eventually have:

T1: Alice has 200 shares
T2: Alice queues a withdrawal for 50 shares
- EigenState: Alice has 150 shares
- RewardsState: Alice has 200 shares
T3: Bob (Alice’s operator) is slashed for 25% for the BeaconChain Strategy.
- EigenState: -37.5 shares for Alice. Alice has 112.5 Shares
- RewardsState: -50 shares for Alice. Alice has 150 shares
T4: Alice is slashed for 50% for on the BeaconChain
- EigenState: -56.25 Shares for Alice. Alice has 56.25 shares
- RewardsState: -75 shares for Alice. Alice has 75 shares
T15: Alice’s withdrawal is completable
- EigenState: 56.25 shares
- RewardsState: 56.25 shares
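A sketch of how that scenario could eventually be encoded (testify is already used in this PR's tests; sharesAt is a hypothetical helper, and the expected values are copied from the walkthrough above):

package rewards_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// sharesAt is a hypothetical helper: it would replay the fixtures up to the
// given day and return Alice's (eigenState, rewardsState) share balances.
func sharesAt(t *testing.T, staker string, day int) (string, string) {
	t.Helper()
	panic("not implemented: sketch only")
}

func Test_QueuedWithdrawalWithBeaconChainSlashes(t *testing.T) {
	steps := []struct {
		day        int
		eigenState string
		rewards    string
	}{
		{2, "150", "200"},      // T2: 50 shares queued for withdrawal
		{3, "112.5", "150"},    // T3: operator slashed 25%
		{4, "56.25", "75"},     // T4: beacon chain slashed 50%
		{15, "56.25", "56.25"}, // T15: withdrawal completable
	}
	for _, s := range steps {
		eigen, rewards := sharesAt(t, "alice", s.day)
		assert.Equal(t, s.eigenState, eigen, "EigenState shares on day %d", s.day)
		assert.Equal(t, s.rewards, rewards, "RewardsState shares on day %d", s.day)
	}
}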

@serichoi65 force-pushed the seri.choi/new-queue-calculation branch from a022ccb to 992f33b on December 12, 2025 16:05
@serichoi65 force-pushed the seri.choi/withdrawal-deallocation-queue branch from 9f3db12 to 8364d9f on December 15, 2025 00:10
Base automatically changed from seri.choi/withdrawal-deallocation-queue to master on December 15, 2025 00:15
@serichoi65 force-pushed the seri.choi/new-queue-calculation branch from 992f33b to e4874e1 on December 15, 2025 00:16
@ypatil12 (Contributor) left a comment:
Some comments. If we need to close out this PR, wdyt about resolving the non-testing comments and then merging this into a release branch (not master)? Then we can start a big testing push across all files.

Otherwise, it's probably best to keep this PR open and just start adding tests separately. I can start testing tomorrow.

OR (qsw.block_number = @slashBlockNumber AND qsw.log_index < @logIndex)
)
-- Still within 14-day window (not yet completable)
AND DATE(b_queued.block_time) + INTERVAL '14 days' > (
Contributor:
The withdrawal queue window is 10 minutes on preprod/testnet but 14 days on mainnet. We'll have to think about how we should handle this on preprod/testnet environments

Contributor:
For now, it should be good to merge this PR in, but I wrote it down as an open question to handle prior to deploying on preprod. My thinking here is to use the proper interval for each environment. However, tests (local, integration) should assume we're on mainnet.
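One possible shape for that, as a sketch (the environment naming is an assumption; the real plumbing would live wherever the rewards queries are templated):

// Sketch only: choose the withdrawal-delay SQL interval per environment so
// preprod/testnet use the 10-minute window and mainnet uses 14 days.
func withdrawalDelayInterval(environment string) string {
	switch environment {
	case "preprod", "testnet":
		return "interval '10' minute"
	default: // mainnet, and what local/integration tests should assume
		return "interval '14' day"
	}
}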

where rn = 1
),
-- Get the range for each operator, strategy pairing
-- Join with withdrawal queue at bronze level before creating windows
Contributor:
We're still inserting these records after the snapshotting is done. Ideally, we should insert them before the snapshotting is done. Does doing so break the tests in any way?

Ideally, we only join with diffs and then we snapshot. Right now we're taking the staker share diffs, snapshotting, and then inserting these withdrawal queue adjustments.

Contributor (author):
Made a change to: Bronze diffs → Join → Snapshot windows → Expand

// T2: 50 shares (35 base + 15 queued after 50% slash)
// T3: 35 shares (queued withdrawal no longer counts)

assert.GreaterOrEqual(t, len(snapshots), 3, "Should have at least 3 unique snapshots")
Contributor:
We should assert the actual values, not just the number of snapshots

@serichoi65 (Contributor, author) commented Dec 17, 2025
Updated and applied to the whole test
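For illustration, value-level assertions in the style of the T2/T3 comments above (the snapshot dates and struct fields here are hypothetical):

// Sketch only: assert exact share values per snapshot day instead of counts.
expected := map[string]string{
	"2025-01-02": "50", // T2: 35 base + 15 still queued after the 50% slash
	"2025-01-03": "35", // T3: queued withdrawal no longer counts
}
for _, s := range snapshots {
	if want, ok := expected[s.Snapshot]; ok {
		assert.Equal(t, want, s.Shares, "shares on %s", s.Snapshot)
	}
}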

where rn = 1
),
-- Get the range for each operator, strategy pairing
-- Join with withdrawal queue at bronze level before creating windows
Contributor:
For backwards compatibility, we really should be if-casing on the sabine fork block, like we do across the rewards calc for other hard forks.

Contributor (author):
Applied the same block-based fork-checking pattern as eigenState.
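Roughly the following, as a sketch (the fork-block plumbing is an assumption, mirroring how other hard forks are gated in the rewards calc):

// Sketch only: gate the new withdrawal-queue adjustment on the sabine fork
// block, falling back to legacy behavior for earlier blocks.
func useWithdrawalQueueAdjustment(blockNumber, sabineForkBlock uint64) bool {
	return blockNumber >= sabineForkBlock
}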

OR (qsw.block_number = @slashBlockNumber AND qsw.log_index < @logIndex)
)
-- Still within 14-day window (not yet completable)
AND DATE(b_queued.block_time) + INTERVAL '14 days' > (
Contributor:
Note: these conversions don't work as intended because they convert queued.block_time into a date and then add 14 days. We need to add 14 days in seconds to the block_time

Contributor (author):
Removed DATE and now using TIMESTAMP so block_time is handled in seconds.
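The corrected predicate, sketched (block_time stays a timestamp, so the 14 days are real elapsed time rather than a date-truncated approximation):

// Sketch only: compare the raw queue timestamp plus 14 days against the cutoff.
const stillInQueuePredicate = `
	b_queued.block_time + interval '14' day > timestamp '{{.cutoffDate}}'
`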
