Description
When I ran Lighthouse under a profiler a while ago, one of the main consumers of CPU time was attestation signature decoding, i.e. converting SSZ bytes off the wire into the `AggregateSignature` type that exists inside `Attestation` (see `lighthouse/consensus/types/src/attestation.rs`, lines 41 to 45 at `c547a11`).
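For context, the struct in question looks roughly like this (paraphrased from the linked lines; check the source for the exact definition):

```rust
// Paraphrased from consensus/types/src/attestation.rs.
pub struct Attestation<T: EthSpec> {
    pub aggregation_bits: BitList<T::MaxValidatorsPerCommittee>,
    pub data: AttestationData,
    // Decoding this field off the wire decompresses the signature.
    pub signature: AggregateSignature,
}
```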
One of the reasons this is expensive is that encoded signatures are compressed, and decompression is slow.
With changes like #3493 set to land soon, many of the aggregate attestations we receive on gossip could be ignored before their signatures are even checked. This provides an opportunity to save time that would otherwise be spent unnecessarily decompressing signatures.
Steps to resolve
We need to add a new type, or modify the existing types, so that aggregate attestations are initially decoded using a `signature: SignatureBytes` field instead of an `AggregateSignature`. One way to do this would be:

- Add a new `LazyAttestation` type which uses `SignatureBytes` for the signature instead of `AggregateSignature`. Add a method on this type that converts it to a regular `Attestation` by calling `.decompress()` on the bytes and then converting to an aggregate signature using `From` (a sketch of this follows the list).
- Add a new `LazySignedAggregateAndProof` type which wraps `LazyAttestation`. Make sure that the gossip decoding uses this type instead of the full `SignedAggregateAndProof`.
- Work out how to skip decompression for unviable attestations before they reach the batch signature verification step here. This will be the hardest part. We might want to add a step which filters subset/duplicate aggregates out of the batch before signature verification (see the second sketch below). This might still waste a bit of time doing decompression if aggregates in the same batch are redundant, but that's probably an acceptable trade-off.
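A minimal sketch of what the lazy types could look like, assuming `SignatureBytes::decompress()` returns a `Signature` and that `AggregateSignature` implements `From<Signature>` (both worth confirming against the `bls` crate); the names, fields, and derives here are illustrative only:

```rust
use crate::{Attestation, AttestationData, EthSpec};
use bls::{AggregateSignature, SignatureBytes};
use ssz_derive::{Decode, Encode};
use ssz_types::BitList;

/// Sketch: identical to `Attestation`, except the signature remains as
/// undecompressed bytes until we decide the attestation is worth verifying.
#[derive(Debug, Clone, Encode, Decode)]
pub struct LazyAttestation<T: EthSpec> {
    pub aggregation_bits: BitList<T::MaxValidatorsPerCommittee>,
    pub data: AttestationData,
    pub signature: SignatureBytes,
}

impl<T: EthSpec> LazyAttestation<T> {
    /// Pay the decompression cost only once the attestation is known viable.
    pub fn into_attestation(self) -> Result<Attestation<T>, bls::Error> {
        // Assumption: `SignatureBytes::decompress` exists on the `bls` types.
        let signature = self.signature.decompress()?;
        Ok(Attestation {
            aggregation_bits: self.aggregation_bits,
            data: self.data,
            // Assumption: `AggregateSignature: From<Signature>`.
            signature: AggregateSignature::from(signature),
        })
    }
}

/// Sketch: mirrors `AggregateAndProof`/`SignedAggregateAndProof`, but carries
/// a `LazyAttestation`. Whether the selection proof and outer signature should
/// also stay as bytes is a separate design choice.
#[derive(Debug, Clone, Encode, Decode)]
pub struct LazyAggregateAndProof<T: EthSpec> {
    pub aggregator_index: u64,
    pub aggregate: LazyAttestation<T>,
    pub selection_proof: SignatureBytes,
}

#[derive(Debug, Clone, Encode, Decode)]
pub struct LazySignedAggregateAndProof<T: EthSpec> {
    pub message: LazyAggregateAndProof<T>,
    pub signature: SignatureBytes,
}
```

For the filtering step, a hypothetical pre-filter over a batch, assuming `ssz_types::BitList` exposes a subset check (if not, the same test can be written with `intersection`):

```rust
/// Hypothetical: drop aggregates whose attesters are a subset of an aggregate
/// already kept for the same `AttestationData`, so their signatures are never
/// decompressed. Quadratic in the batch size, which should be fine for the
/// small batches seen on gossip.
pub fn filter_redundant_aggregates<T: EthSpec>(
    batch: Vec<LazyAttestation<T>>,
) -> Vec<LazyAttestation<T>> {
    let mut kept: Vec<LazyAttestation<T>> = Vec::with_capacity(batch.len());
    for att in batch {
        let redundant = kept.iter().any(|existing| {
            existing.data == att.data
                // Assumption: `BitList::is_subset` (or equivalent) exists.
                && att.aggregation_bits.is_subset(&existing.aggregation_bits)
        });
        if !redundant {
            kept.push(att);
        }
    }
    kept
}
```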
Hopefully these changes will result in lower CPU usage on low-end hardware. Machines with low validator counts that are only subscribed to a few subnets will likely see the biggest improvement, as the majority of attestations they verify are aggregates.