Use proposer index cache for block proposal #4313

Open
@michaelsproul

Description


During block proposal we compute the proposer index from scratch here:

let proposer_index = state.get_beacon_proposer_index(state.slot(), &self.spec)? as u64;

Steps to resolve

We should use the beacon proposer cache, like this:

let cached_proposer = self
    .beacon_proposer_cache
    .lock()
    .get_slot::<T::EthSpec>(shuffling_decision_root, proposal_slot);
let proposer_index = if let Some(proposer) = cached_proposer {
    proposer.index as u64
} else {
    // Cache miss: fall back to computing the index from the state, as we do now.
    state.get_beacon_proposer_index(state.slot(), &self.spec)? as u64
};

On a cache miss we could either fall back to computing the index as we do now, or we could prime the cache using the available beacon state. The disadvantage of priming the cache is that it delays the getPayload request to the builder/execution layer. However, we might end up needing to prime the cache anyway. If we fix #4264 then gossip verification will try to prime the cache here:

debug!(
    chain.log,
    "Proposer shuffling cache miss";
    "parent_root" => ?parent.beacon_block_root,
    "parent_slot" => parent.beacon_block.slot(),
    "block_root" => ?block_root,
    "block_slot" => block.slot(),
);

Therefore I think we may as well try priming the cache when we miss; it's more future-proof.
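The get-or-prime flow described above can be sketched in isolation. The types below (`ProposerCache`, `proposer_index_with_priming`, and the `compute` closure standing in for `state.get_beacon_proposer_index`) are hypothetical stand-ins, not Lighthouse's actual API; this only illustrates the lock/lookup/fallback-and-insert pattern:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Illustrative stand-ins for Lighthouse's types.
type Hash256 = [u8; 32];
type Slot = u64;

/// A toy proposer cache keyed by (shuffling decision root, slot).
#[derive(Default)]
struct ProposerCache {
    entries: HashMap<(Hash256, Slot), u64>,
}

impl ProposerCache {
    fn get_slot(&self, root: Hash256, slot: Slot) -> Option<u64> {
        self.entries.get(&(root, slot)).copied()
    }

    fn insert(&mut self, root: Hash256, slot: Slot, index: u64) {
        self.entries.insert((root, slot), index);
    }
}

/// Look up the proposer index; on a miss, compute it (standing in for the
/// expensive `state.get_beacon_proposer_index` call) and prime the cache so
/// later lookups for the same (root, slot) hit.
fn proposer_index_with_priming<F>(
    cache: &Mutex<ProposerCache>,
    root: Hash256,
    slot: Slot,
    compute: F,
) -> u64
where
    F: FnOnce() -> u64,
{
    let mut cache = cache.lock().unwrap();
    if let Some(index) = cache.get_slot(root, slot) {
        index
    } else {
        let index = compute();
        cache.insert(root, slot, index);
        index
    }
}
```

With priming, the expensive computation runs at most once per (decision root, slot); a plain fall-back-on-miss would rerun it on every miss until some other code path fills the cache.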

Metadata

Labels

consensus: An issue/PR that touches consensus code, such as state_processing or block verification.
optimization: Something to make Lighthouse run more efficiently.
