feat: optionally reclaim expired account rent from operator #172
Conversation
    Ok((tda_accounts, pfda_accounts))
}

async fn fetch_expired_claim_statuses(
This averages 30-60 seconds of latency, versus 700+ seconds for the existing approach of fetching claims by epoch (within the claims process) using batched account fetches, not counting the cost of sourcing the pubkeys from the very large merkle tree collection.
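For context on the numbers above, here is a minimal sketch of the filtered getProgramAccounts approach, assuming the ClaimStatus `expires_at` field sits at a fixed byte offset (the offset and names below are placeholders, not the program's actual layout):

```rust
use solana_account_decoder::UiAccountEncoding;
use solana_client::{
    client_error::Result as ClientResult,
    nonblocking::rpc_client::RpcClient,
    rpc_config::{RpcAccountInfoConfig, RpcProgramAccountsConfig},
    rpc_filter::{Memcmp, RpcFilterType},
};
use solana_sdk::{account::Account, pubkey::Pubkey};

// Assumed offset of the `expires_at: u64` field inside a ClaimStatus account;
// the real offset must be taken from the program's account definition.
const EXPIRES_AT_OFFSET: usize = 8;

/// Fetch all ClaimStatus accounts whose `expires_at` equals `epoch` with a
/// single filtered getProgramAccounts call, instead of batching
/// getMultipleAccounts over pubkeys derived from the merkle tree collection.
async fn fetch_expired_claim_statuses_sketch(
    rpc_client: &RpcClient,
    program_id: &Pubkey,
    epoch: u64,
) -> ClientResult<Vec<(Pubkey, Account)>> {
    let filters = vec![RpcFilterType::Memcmp(Memcmp::new_raw_bytes(
        EXPIRES_AT_OFFSET,
        epoch.to_le_bytes().to_vec(),
    ))];

    let config = RpcProgramAccountsConfig {
        filters: Some(filters),
        account_config: RpcAccountInfoConfig {
            encoding: Some(UiAccountEncoding::Base64),
            ..RpcAccountInfoConfig::default()
        },
        ..RpcProgramAccountsConfig::default()
    };

    rpc_client
        .get_program_accounts_with_config(program_id, config)
        .await
}
```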
    #[allow(clippy::integer_division)]
    #[allow(clippy::arithmetic_side_effects)]
    #[allow(clippy::manual_div_ceil)]
    pub fn pack_transactions(
This helps us achieve 200+ closed accounts per second on a single instance. We should re-use this where applicable. The claims process would be a great candidate.
Claims can't be packed as tightly because the merkle proof is very long, but we might be able to get some improvements!
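A minimal sketch of the packing idea discussed above: greedily accumulate instructions into a transaction until the serialized size would exceed the packet limit, then start a new one. The real `pack_transactions` in this PR may handle signers, compute budget, and lookup tables differently; this only illustrates the technique.

```rust
use solana_sdk::{
    instruction::Instruction, message::Message, packet::PACKET_DATA_SIZE, pubkey::Pubkey,
    transaction::Transaction,
};

/// Greedily pack instructions into as few transactions as possible while
/// keeping each serialized transaction within a single packet.
fn pack_transactions_sketch(payer: &Pubkey, instructions: &[Instruction]) -> Vec<Transaction> {
    let mut packed: Vec<Transaction> = Vec::new();
    let mut current: Vec<Instruction> = Vec::new();

    for ix in instructions {
        // Tentatively add the next instruction and measure the resulting size.
        let mut candidate = current.clone();
        candidate.push(ix.clone());
        let tx = Transaction::new_unsigned(Message::new(&candidate, Some(payer)));
        let size = bincode::serialized_size(&tx).unwrap_or(u64::MAX) as usize;

        if size > PACKET_DATA_SIZE && !current.is_empty() {
            // The current batch is full: flush it and start a new batch with this instruction.
            let msg = Message::new(&current, Some(payer));
            packed.push(Transaction::new_unsigned(msg));
            current = vec![ix.clone()];
        } else {
            current = candidate;
        }
    }
    if !current.is_empty() {
        let msg = Message::new(&current, Some(payer));
        packed.push(Transaction::new_unsigned(msg));
    }
    packed
}
```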
    // Use default timeout and commitment config for fetching the current epoch
    let rpc_client = rpc_utils::new_rpc_client(rpc_url);
    let current_epoch = rpc_client.get_epoch_info().await?.epoch;
    (current_epoch - num_monitored_epochs)..current_epoch
Shouldn't this be offset by the number of epochs the claim statuses are active for? What are you using for num_monitored_epochs?
We're filtering by the expires_at epoch in our getProgramAccounts calls, so this should be correct. num_monitored_epochs will be the same as the CLI param.
The operator will claim and reclaim rent for claims that should be claimed in a monitored epoch or expire in a monitored epoch, respectively.
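To make the reasoning concrete, a tiny sketch of the eligibility rule being described, using hypothetical names that mirror the discussion (`expires_at`, `num_monitored_epochs`):

```rust
/// A claim status is reclaimable when its `expires_at` epoch falls inside the
/// monitored window. The window excludes the current epoch, so every epoch it
/// contains is strictly in the past; no additional offset by the claim-status
/// lifetime is needed because the filter is applied to `expires_at` directly.
fn is_reclaimable(expires_at: u64, current_epoch: u64, num_monitored_epochs: u64) -> bool {
    let monitored = current_epoch.saturating_sub(num_monitored_epochs)..current_epoch;
    monitored.contains(&expires_at)
}
```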
    // Use default timeout and commitment config for fetching the current epoch
    let rpc_client = rpc_utils::new_rpc_client(rpc_url);
    let current_epoch = rpc_client.get_epoch_info().await?.epoch;
    (current_epoch - num_monitored_epochs)..current_epoch
Same comment as above.
Ditto
Noticed the tip claim status filter was off by one byte. Addressed in latest commit.
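On the off-by-one-byte note: a memcmp offset has to account for every byte preceding the matched field, which is where single-byte errors typically creep in. A purely illustrative layout (not the actual ClaimStatus definition):

```rust
// Hypothetical field sizes; the real ClaimStatus layout must be taken from
// the tip-distribution program's account definition.
const DISCRIMINATOR_LEN: usize = 8; // account discriminator
const IS_CLAIMED_LEN: usize = 1; // a lone bool is easy to miss, shifting the offset by one byte
const CLAIMANT_LEN: usize = 32; // Pubkey

// Offset of a hypothetical `expires_at: u64` field that follows the fields above.
const EXPIRES_AT_OFFSET: usize = DISCRIMINATOR_LEN + IS_CLAIMED_LEN + CLAIMANT_LEN;
```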
Problem
Our rent reclamation is currently an ad-hoc script. The tip router operator should handle operations like these without the need for manual action.
Solution