
Fix for #7122 Avoid attempting to serve BlobsByRange RPC requests on Fulu slots #7328


Open · wants to merge 3 commits into base: unstable

Conversation

SunnysidedJ

Issue Addressed

#7122 Avoid attempting to serve BlobsByRange RPC requests on Fulu slots

Proposed Changes

Added a check to the BlobsByRange RPC handler for requests that include Fulu slots.

Additional Info

Please see the comments below.

@SunnysidedJ SunnysidedJ requested a review from jxs as a code owner April 17, 2025 01:22
@SunnysidedJ
Author

A question remains about penalizing peers: #7122 (comment)

@SunnysidedJ
Author

Tests in progress, as noted in the test file.

@eserilev eserilev added the das (Data Availability Sampling) and fulu (Required for the upcoming Fulu hard fork) labels Apr 17, 2025
@@ -1,4 +1,4 @@
-#![cfg(not(debug_assertions))] // Tests are too slow in debug.
+//#![cfg(not(debug_assertions))] // Tests are too slow in debug.
Collaborator

Unwanted diff

RpcErrorResponse::InvalidRequest,
"Req including Fulu slots",
))
}
Collaborator
@dapplion dapplion Apr 23, 2025

Hmm, rethinking this, maybe it's too punishing? Why not allow the request to creep into Fulu and just check that the start slot is in Deneb? It's fine to return empty for slots in Fulu.

Lighthouse only does by_range requests for a single epoch, but other clients may have different logic.
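For illustration, a minimal sketch of that lenient behaviour over plain u64 slots (servable_blob_count is a hypothetical helper, not Lighthouse's API; it errors only when the request starts at or after the Fulu boundary and clamps everything else):

// Hypothetical helper: reject only requests that *start* in Fulu, and clamp
// the count so the served range ends at the Fulu boundary. Slots that creep
// into Fulu are simply answered with nothing.
fn servable_blob_count(
    start_slot: u64,
    count: u64,
    fulu_start_slot: u64,
) -> Result<u64, &'static str> {
    if start_slot >= fulu_start_slot {
        return Err("request starts at or after the Fulu fork");
    }
    Ok(count.min(fulu_start_slot - start_slot))
}

fn main() {
    let fulu_start_slot = 1_000;
    assert_eq!(servable_blob_count(990, 5, fulu_start_slot), Ok(5)); // entirely pre-Fulu
    assert_eq!(servable_blob_count(990, 20, fulu_start_slot), Ok(10)); // creeps into Fulu
    assert!(servable_blob_count(1_005, 5, fulu_start_slot).is_err()); // starts in Fulu
}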

Author

Yes, I agree that it could be a bit too punishing. The issue was vague in that it didn't specify how severe the penalty should be. Do you think it's okay to accept the request as long as it contains pre-Fulu slots?

Also, it seems there's no check in Lighthouse to avoid sending requests for Fulu slots, as noted in #7122 (comment). So we could potentially be penalized by our own client.

Collaborator

Feels like something that should be clarified in the spec to align behaviour across all clients. Do you want to raise an issue on the specs repo?

Author
@SunnysidedJ SunnysidedJ Apr 25, 2025

Sure! Just raised a PR to the specs ethereum/consensus-specs#4286

Author

Changed as per the spec PR: no penalty, and return empty for Fulu slots.

@@ -7146,6 +7146,11 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
let end_slot = start_slot.saturating_add(count);
let mut roots = vec![];

// Let's explicitly check count == 0 since it's a public function, for readability purposes.
Author

The original code handles edge cases.

@@ -880,8 +880,16 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
"Received BlobsByRange Request"
);

if req.count == 0 {
Author

Again, let's handle corner cases explicitly at some point
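As a standalone sketch of what that explicit handling might look like (plain u64 slots; BlobSidecar is a placeholder type, not the real sidecar):

/// Placeholder for a blob sidecar.
struct BlobSidecar;

/// Sketch: surface the count == 0 corner case at the top of the handler
/// instead of letting it fall through the range arithmetic below.
fn handle_blobs_by_range(start_slot: u64, count: u64) -> Vec<BlobSidecar> {
    if count == 0 {
        // An empty request gets an empty response; returning early keeps
        // the rest of the handler free of zero-length edge cases.
        return Vec::new();
    }
    let _end_slot = start_slot.saturating_add(count);
    // ... look up block roots and fetch blobs for the range ...
    Vec::new()
}

fn main() {
    assert!(handle_blobs_by_range(42, 0).is_empty());
}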

Author

Will apply the same to other similar functions in this file if agreed

let fulu_epoch = self.chain.spec.fulu_fork_epoch.unwrap();
let fulu_start_slot = fulu_epoch.start_slot(T::EthSpec::slots_per_epoch());
// See the justification for the formula in PR https://github.com/sigp/lighthouse/pull/7328
req.count = fulu_start_slot.as_u64().saturating_sub(req.start_slot);
Author

Justification:
Let's consider each case one by one.

  1. Requested range lies entirely in pre-Fulu
    Case req.start_slot + req.count <= fulu_start_slot, then
    req.count = req.count
  2. Requested range lies entirely in Fulu
    Case req.start_slot >= fulu_start_slot, then
    req.count = 0
  3. Requested range spans both pre-Fulu and Fulu
    Case req.start_slot < fulu_start_slot < req.start_slot + req.count, then
    req.count = req.count - ((req.start_slot + req.count) - fulu_start_slot)
              = fulu_start_slot - req.start_slot

Cases 2 and 3 combine to req.count = fulu_start_slot.saturating_sub(req.start_slot). Note that case 1 leaves req.count unchanged, so the bare subtraction covers all three cases only if case 1 is excluded before this branch; otherwise a min with the original count is needed.
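A quick standalone check of this analysis over plain u64 slots (clamped_count is illustrative, not the PR's code; it includes the min so that case 1 is covered too):

/// Combined clamp: min(count, fulu_start_slot - start_slot), saturating at 0.
fn clamped_count(start_slot: u64, count: u64, fulu_start_slot: u64) -> u64 {
    count.min(fulu_start_slot.saturating_sub(start_slot))
}

fn main() {
    let fulu = 100;
    assert_eq!(clamped_count(80, 10, fulu), 10); // case 1: entirely pre-Fulu
    assert_eq!(clamped_count(100, 5, fulu), 0);  // case 2: entirely in Fulu
    assert_eq!(clamped_count(95, 10, fulu), 5);  // case 3: spans the boundary
}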

@SunnysidedJ
Author

SunnysidedJ commented Apr 30, 2025

Current status: issue-specific tests underway.
Implemented as per the latest spec PR, but this can easily be changed to the other options mentioned in that PR.

@SunnysidedJ
Author

Current status: issue-specific tests underway.

I have some questions at the moment.

Sync-related questions

  1. Lighthouse nodes should not request blobs for Fulu slots when syncing. Where do we handle this, if we do?
  2. Do we automatically request data columns instead?

If sync changes are to be implemented I'd be happy to work on them, but it seems better to do that after #7352.

Other

  1. Do we need to implement for BlobsByRoot as well?
  2. The call self.get_block_roots_for_slot_range() within the function handle_blobs_by_root_request_inner() contributes to the metric BEACON_PROCESSOR_GET_BLOCK_ROOTS_TIME. Is this intended?
  3. The function get_block_roots_for_slot_range() doesn't error out when the requested range is beyond the slots the client has (I might be wrong). Is this also expected behaviour?
  4. In the line match self.chain.get_blobs(&root) { in handle_blobs_by_root_request_inner(), does it actually return any blobs for Fulu slots? If it's guaranteed not to, this change might not be needed(?)

@jimmygchen
Member

jimmygchen commented May 5, 2025

  2. Do we automatically request data columns instead?

Yes. When making a range request, we determine the batch type here:

let batch_type = network.batch_type(epoch);
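Roughly, the per-epoch selection could look like the sketch below (the enum variants and fork-epoch handling are assumptions for illustration, not the exact Lighthouse definitions): epochs at or past the Fulu fork fetch data columns, post-Deneb epochs fetch blobs, and anything earlier fetches blocks only.

// Illustrative sketch of per-epoch batch-type selection.
#[derive(Debug, PartialEq)]
enum ByRangeRequestType {
    BlocksAndColumns,
    BlocksAndBlobs,
    Blocks,
}

fn batch_type(
    epoch: u64,
    deneb_fork_epoch: Option<u64>,
    fulu_fork_epoch: Option<u64>,
) -> ByRangeRequestType {
    match (deneb_fork_epoch, fulu_fork_epoch) {
        // At or past Fulu: blobs are replaced by data columns.
        (_, Some(fulu)) if epoch >= fulu => ByRangeRequestType::BlocksAndColumns,
        // Post-Deneb but pre-Fulu: blocks plus blob sidecars.
        (Some(deneb), _) if epoch >= deneb => ByRangeRequestType::BlocksAndBlobs,
        // Pre-Deneb: blocks only.
        _ => ByRangeRequestType::Blocks,
    }
}

fn main() {
    assert_eq!(batch_type(5, Some(2), Some(4)), ByRangeRequestType::BlocksAndColumns);
    assert_eq!(batch_type(3, Some(2), Some(4)), ByRangeRequestType::BlocksAndBlobs);
    assert_eq!(batch_type(1, Some(2), Some(4)), ByRangeRequestType::Blocks);
}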

  1. Do we need to implement for BlobsByRoot as well?

Oh yeah, we should not try to serve BlobsByRoot for Fulu slots.

  2. The call self.get_block_roots_for_slot_range() within the function handle_blobs_by_root_request_inner() contributes to the metric BEACON_PROCESSOR_GET_BLOCK_ROOTS_TIME. Is this intended?

I don't see it being used in handle_blobs_by_root_request_inner; it's only used in handling by-range requests.

  3. The function get_block_roots_for_slot_range() doesn't error out when the requested range is beyond the slots the client has (I might be wrong). Is this also expected behaviour?

Yeah, it doesn't error out; it just doesn't return blocks for the slots the node doesn't have.

  4. In the line match self.chain.get_blobs(&root) { in handle_blobs_by_root_request_inner(), does it actually return any blobs for Fulu slots? If it's guaranteed not to, this change might not be needed(?)

It doesn't return blobs, because there are no blobs in Fulu slots.
However, for the blobs_by_range method, we do waste resources looking up block roots before calling get_blobs. We shouldn't attempt this if the requested slot is in Fulu.

let block_roots =
    self.get_block_roots_for_slot_range(req.start_slot, req.count, "BlobsByRange")?;
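A sketch of that short-circuit over plain u64 slots (range_is_entirely_fulu is a hypothetical guard applied before the block-root lookup, not Lighthouse's API):

/// Hypothetical guard: a BlobsByRange request whose start slot is at or past
/// the Fulu fork can be answered empty without touching the store, since
/// those slots carry data columns rather than blobs.
fn range_is_entirely_fulu(start_slot: u64, fulu_start_slot: u64) -> bool {
    start_slot >= fulu_start_slot
}

fn main() {
    let fulu_start_slot = 1_000;
    if range_is_entirely_fulu(1_024, fulu_start_slot) {
        // Skip get_block_roots_for_slot_range entirely and terminate the
        // stream with zero blob sidecars.
        println!("empty BlobsByRange response");
    }
}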
