Lazy attestation signature decompression #4398

Open
@michaelsproul

Description

When I ran Lighthouse under a profiler a while ago, one of the main consumers of CPU time was attestation signature decoding, i.e. converting SSZ bytes off the wire into the AggregateSignature type that exists inside Attestation:

pub struct Attestation<T: EthSpec> {
    pub aggregation_bits: BitList<T::MaxValidatorsPerCommittee>,
    pub data: AttestationData,
    pub signature: AggregateSignature,
}

One of the reasons this is expensive is that encoded signatures are compressed, and decompression is slow.

With changes like #3493 set to land soon, many of the aggregate attestations we receive on gossip could be ignored before their signatures are even checked. This provides an opportunity to save time that would otherwise be spent unnecessarily decompressing signatures.

Steps to resolve

We need to add a new type, or modify the existing types, so that aggregate attestations are initially decoded using a signature: SignatureBytes field instead of an AggregateSignature. One way to do this would be:

  • Add a new LazyAttestation type which uses SignatureBytes for the signature instead of AggregateSignature. Add a method on this type that converts it to a regular Attestation by calling .decompress() on the bytes, and then converts to an aggregate signature using From.
  • Add a new LazySignedAggregateAndProof type which wraps LazyAttestation. Make sure that the gossip decoding uses this type instead of the full SignedAggregateAndProof.
  • Work out how to skip decompression for unviable attestations before they reach the batch signature verification step here. This will be the hardest part. We might want to add a step which filters subsets/duplicate aggregates out of the batch before signature verification. This might still waste a bit of time doing decompression if aggregates in the same batch are redundant, but that's probably an acceptable trade-off.
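The first two steps above could be sketched roughly as follows. This is a hypothetical illustration, not Lighthouse's actual code: BitList, AttestationData, and AggregateSignature are stubbed out here, SIGNATURE_BYTES_LEN is assumed to be 96 (a compressed BLS G2 point), and the decompress implementation is a placeholder for the real BLS call.

```rust
// Hypothetical sketch of a LazyAttestation; the real Lighthouse types differ.
const SIGNATURE_BYTES_LEN: usize = 96;

/// Raw compressed signature bytes, exactly as received off the wire.
pub struct SignatureBytes([u8; SIGNATURE_BYTES_LEN]);

/// Stand-in for the decompressed aggregate signature type.
pub struct AggregateSignature(Vec<u8>);

/// Stand-in for the real AttestationData container.
pub struct AttestationData;

#[derive(Debug)]
pub struct DecompressError;

impl SignatureBytes {
    /// Placeholder for the expensive BLS decompression step.
    fn decompress(&self) -> Result<AggregateSignature, DecompressError> {
        Ok(AggregateSignature(self.0.to_vec()))
    }
}

/// Lazy variant: the signature stays compressed until we know it's needed.
pub struct LazyAttestation {
    pub aggregation_bits: Vec<u8>, // stand-in for BitList
    pub data: AttestationData,
    pub signature: SignatureBytes,
}

pub struct Attestation {
    pub aggregation_bits: Vec<u8>,
    pub data: AttestationData,
    pub signature: AggregateSignature,
}

impl LazyAttestation {
    /// Convert to a full Attestation, paying the decompression cost only now.
    pub fn into_attestation(self) -> Result<Attestation, DecompressError> {
        let signature = self.signature.decompress()?;
        Ok(Attestation {
            aggregation_bits: self.aggregation_bits,
            data: self.data,
            signature,
        })
    }
}
```

Gossip decoding would then deserialize into the lazy type, and into_attestation would only be called for aggregates that survive the early viability checks.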

Hopefully these changes will result in lower CPU usage on low-end hardware. Machines with low validator counts that are only subscribed to a few subnets will likely see the most improvement, as the majority of attestations they verify are aggregates.
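The subset/duplicate filtering step suggested in the last bullet above could be sketched as follows. This is a hedged illustration, not Lighthouse's implementation: aggregation bits are modelled as plain byte slices, and the names is_subset and filter_redundant are hypothetical.

```rust
// Hypothetical pre-verification filter: drop any aggregate whose
// aggregation bits are a subset of an aggregate already kept, so its
// signature is never decompressed.

/// True if every set bit in `a` is also set in `b` (same length assumed).
fn is_subset(a: &[u8], b: &[u8]) -> bool {
    a.len() == b.len() && a.iter().zip(b).all(|(x, y)| x & !y == 0)
}

/// Return the indices of aggregates that contribute at least one new bit
/// relative to the aggregates kept before them in the batch.
fn filter_redundant(batch: &[Vec<u8>]) -> Vec<usize> {
    let mut kept: Vec<usize> = Vec::new();
    for (i, bits) in batch.iter().enumerate() {
        let redundant = kept.iter().any(|&j| is_subset(bits, &batch[j]));
        if !redundant {
            kept.push(i);
        }
    }
    kept
}
```

For example, in a batch whose aggregation bits are [0b0110], [0b0010], [0b1001], the second aggregate is a subset of the first and would be dropped before decompression, while the third adds new bits and is kept. As the issue notes, aggregates that are mutually redundant but arrive in separate batches would still be decompressed, which is probably an acceptable trade-off.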

Metadata

Assignees

No one assigned

    Labels

    consensus: An issue/PR that touches consensus code, such as state_processing or block verification.
    optimization: Something to make Lighthouse run more efficiently.
