Fix for #7122 Avoid attempting to serve BlobsByRange RPC requests on Fulu slots #7328
base: unstable
Conversation
A question is left open about penalizing peers: #7122 (comment)
Tests in progress, as commented in the test file.
@@ -1,4 +1,4 @@
-#![cfg(not(debug_assertions))] // Tests are too slow in debug.
+//#![cfg(not(debug_assertions))] // Tests are too slow in debug.
Unwanted diff
RpcErrorResponse::InvalidRequest,
"Req including Fulu slots",
))
}
Hmm, rethinking this, maybe it is too punishing? Why not allow the request to creep into Fulu and just check that the start slot is in Deneb? It's fine to return empty for slots in Fulu.
Lighthouse only does by_range requests for a single epoch, but other clients may have different logic.
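For illustration, a minimal sketch of that lenient check, with a hypothetical helper name and a made-up fork slot (not the PR's actual code):

```rust
// Sketch of the leniency suggested above: serve any request whose start
// slot is pre-Fulu, even if its range creeps into Fulu; the Fulu tail of
// the range simply yields no blobs.
fn starts_pre_fulu(start_slot: u64, fulu_start_slot: u64) -> bool {
    start_slot < fulu_start_slot
}

fn main() {
    let fulu_start_slot = 64; // made-up fork slot for illustration
    assert!(starts_pre_fulu(32, fulu_start_slot)); // starts in Deneb: serve it
    assert!(!starts_pre_fulu(64, fulu_start_slot)); // starts in Fulu
}
```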
Yes, I agree that it could be a bit too punishing. The issue was vague in that it didn't specify how severe the penalty should be. Do you think it is okay to accept the request as long as it contains pre-Fulu slots?
Also, it seems there's no check in Lighthouse to avoid sending Fulu slots, as noted in #7122 (comment). So, we could potentially be penalized by ourselves.
Feels like something that should be clarified in the spec to align behaviour across all clients. Do you want to raise an issue on the specs repo?
Sure! Just raised a PR to the specs ethereum/consensus-specs#4286
Changed as per the spec PR to not punish and return empty for Fulu slots
@@ -7146,6 +7146,11 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
let end_slot = start_slot.saturating_add(count);
let mut roots = vec![];

// Let's explicitly check count == 0, since it's a public function, for readability purposes.
The original code handles edge cases.
@@ -880,8 +880,16 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
"Received BlobsByRange Request"
);

if req.count == 0 {
Again, let's handle corner cases explicitly at some point
Will apply the same to other similar functions in this file if agreed
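For illustration, a simplified sketch of the explicit zero-count handling discussed here (hypothetical function, not the PR's actual code):

```rust
// Sketch of the explicit corner-case handling: return early on an empty
// request instead of relying on the downstream range arithmetic.
fn slots_for_range(start_slot: u64, count: u64) -> Vec<u64> {
    // Explicit corner case: a zero-count request yields an empty response.
    if count == 0 {
        return vec![];
    }
    // Mirrors the saturating arithmetic in the diff above.
    let end_slot = start_slot.saturating_add(count);
    (start_slot..end_slot).collect()
}

fn main() {
    assert!(slots_for_range(100, 0).is_empty());
    assert_eq!(slots_for_range(100, 3), vec![100, 101, 102]);
}
```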
let fulu_epoch = self.chain.spec.fulu_fork_epoch.unwrap();
let fulu_start_slot = fulu_epoch.start_slot(T::EthSpec::slots_per_epoch());
// See the justification for the formula in PR https://github.com/sigp/lighthouse/pull/7328
req.count = fulu_start_slot.as_u64().saturating_sub(req.start_slot);
Justification:
Let's consider every case one by one.
- Range requested lies entirely pre-Fulu
Case: req.start_slot + req.count <= fulu_start_slot
then req.count = req.count
- Range requested lies entirely in Fulu
Case: req.start_slot >= fulu_start_slot
then req.count = 0
- Range requested spans both pre-Fulu and Fulu
Case: req.start_slot < fulu_start_slot < req.start_slot + req.count
then
req.count = req.count - ((req.start_slot + req.count) - fulu_start_slot)
          = fulu_start_slot - req.start_slot
The union of all cases results in req.count = fulu_start_slot.saturating_sub(req.start_slot).
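For reference, a small standalone self-check of this case analysis (a sketch, not the PR's code; the `min` form used here is equivalent to the `saturating_sub` assignment above whenever the range actually reaches into Fulu, and it keeps the entirely pre-Fulu case a no-op):

```rust
// Clamp the requested count so the served range never extends into Fulu.
fn clamped_count(start_slot: u64, count: u64, fulu_start_slot: u64) -> u64 {
    count.min(fulu_start_slot.saturating_sub(start_slot))
}

fn main() {
    let fulu_start_slot = 64; // made-up fork slot for illustration
    assert_eq!(clamped_count(32, 16, fulu_start_slot), 16); // entirely pre-Fulu
    assert_eq!(clamped_count(64, 16, fulu_start_slot), 0); // entirely in Fulu
    assert_eq!(clamped_count(56, 16, fulu_start_slot), 8); // spans the fork
}
```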
Current status: issue-specific tests in progress.
Having some questions at the moment.
Sync-related questions
If sync is to be implemented, I'd be happy to work on it, but it seems better to do that after #7352.
Other
Yes. When making a range request, we determine the batch type here:
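For illustration, a heavily hedged sketch of such a batch-type decision; the enum, function, and fork epochs below are made up for this example and are not Lighthouse's actual sync types:

```rust
// Pick what a by_range batch should fetch, based on the epoch's fork window.
enum ByRangeBatchType {
    Blocks,         // no blobs expected for epochs outside Deneb..Fulu
    BlocksAndBlobs, // Deneb..Fulu: blobs are fetched via BlobsByRange
}

fn batch_type(epoch: u64, deneb_epoch: u64, fulu_epoch: u64) -> ByRangeBatchType {
    if epoch >= deneb_epoch && epoch < fulu_epoch {
        ByRangeBatchType::BlocksAndBlobs
    } else {
        ByRangeBatchType::Blocks
    }
}

fn main() {
    // Made-up fork epochs purely for illustration.
    let (deneb, fulu) = (10, 20);
    assert!(matches!(batch_type(15, deneb, fulu), ByRangeBatchType::BlocksAndBlobs));
    assert!(matches!(batch_type(25, deneb, fulu), ByRangeBatchType::Blocks));
}
```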
Oh yeah, we should not try to serve BlobsByRoot for Fulu slots.
I don't see it being used in
Yeah it doesn't error out. It just doesn't return blocks for the slots that the node doesn't have.
It doesn't return blobs, because there are no blobs in Fulu slots. (See lighthouse/beacon_node/network/src/network_beacon_processor/rpc_methods.rs, lines 920 to 921 at 39eb814.)
Issue Addressed
#7122 Avoid attempting to serve BlobsByRange RPC requests on Fulu slots
Proposed Changes
Added a check to the BlobsByRange RPC handler for requests that include Fulu slots.
Additional Info
Please see the comments below.