
Add note about aggregate key in SyncCommittee #4185


Status: Open · wants to merge 5 commits into base: dev

Conversation

@b-wagn (Contributor) commented Mar 21, 2025

I observed that the aggregate_pubkey field of SyncCommittee is never used.
It is set here, but in verification (e.g., here) an aggregate pubkey representing a subset of the sync committee is assembled.

After discussion with @jtraglia and after looking at client implementations, it seems that only one client actually uses aggregate_pubkey (see here). To document these findings, I propose adding a note as in this PR, sketching the optimization.

@jtraglia (Member) left a comment

Nice, LGTM! Thanks for bringing this up.

Before merging, I'd appreciate @etan-status's opinion on this. This is accurate, right?

@mkalinin (Contributor)

Do we want to provide an optimised method that utilises aggregate_pubkey? I think it is optional, but it would be nice to have in the spec.

@etan-status (Contributor) left a comment

The optimization is correct, and is indeed used by Nimbus (both natively, as well as in the browser via WASM) to accelerate LC data verification.

As the same sync committee members typically participate across sequential slots, one can optimize further by keeping the aggregate pubkey from the previous slot(s) and using it as the starting point for the next slot (assuming only a handful of signers are added to or removed from the participation per slot). Nimbus does not currently implement this second optimization.

The savings are actually quite significant on microcontrollers (e.g., Bluetooth chips), where aggregating keys locally can take seconds.
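To make the optimization concrete, here is a minimal sketch of the idea. This is not client code: real implementations operate in the BLS12-381 G1 group, where aggregation is elliptic-curve point addition and "subtracting" a key means adding its negation. Here integer addition modulo a prime stands in for the group operation, and all names (`aggregate`, `subset_aggregate_naive`, `subset_aggregate_fast`, `P`) are hypothetical.

```python
# Stand-in sketch: integer addition modulo a prime plays the role of
# BLS12-381 G1 point addition, so the set algebra carries over directly.
P = 2**61 - 1  # hypothetical stand-in for the group order

def aggregate(keys):
    """Stand-in for BLS public-key aggregation: sum the stand-in pubkeys."""
    return sum(keys) % P

def subset_aggregate_naive(committee_keys, participation_bits):
    # Spec-style path: aggregate every participant's key from scratch,
    # O(number of participants) group additions per verification.
    return aggregate(k for k, bit in zip(committee_keys, participation_bits) if bit)

def subset_aggregate_fast(full_aggregate, committee_keys, participation_bits):
    # Optimized path: start from the precomputed full-committee aggregate
    # (the role SyncCommittee.aggregate_pubkey plays) and subtract the
    # non-participants; cheap when participation is high, as it usually is.
    missing = aggregate(k for k, bit in zip(committee_keys, participation_bits) if not bit)
    return (full_aggregate - missing) % P
```

With near-full participation, the fast path touches only the handful of missing members rather than the whole committee, which is where the savings on constrained hardware come from.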

@jtraglia (Member)

Do we want to provide an optimised method that utilises aggregate_pubkey?

@mkalinin That would be nice to have, as long as it's relatively simple. I think we can proceed with notes (this PR) for now. I've made this new issue to track this request though:

The optimization is correct, and is indeed used by Nimbus

@etan-status Nice, thanks for confirming!

As the same sync committee members typically participate across sequential slots, one can optimize further by keeping the aggregate pubkey from the previous slot(s) and using it as the starting point for the next slot (assuming only a handful of signers are added to or removed from the participation per slot).

And oh, that's a neat optimization idea. I would be in favor of mentioning this, as I wouldn't expect this to be an obvious optimization for the average core developer.
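The cross-slot variant can be sketched in the same toy model, with integer addition modulo a prime standing in for BLS12-381 point addition; the function and variable names are illustrative, not from the spec or any client.

```python
# Stand-in sketch of the cross-slot idea: carry the previous slot's
# participant aggregate forward and adjust it by the participation diff,
# instead of rebuilding the subset aggregate from scratch each slot.
P = 2**61 - 1  # hypothetical stand-in for the group order

def update_participant_aggregate(prev_aggregate, committee_keys, prev_bits, bits):
    agg = prev_aggregate
    for key, was_in, now_in in zip(committee_keys, prev_bits, bits):
        if now_in and not was_in:
            agg = (agg + key) % P  # signer joined this slot
        elif was_in and not now_in:
            agg = (agg - key) % P  # signer dropped out this slot
    return agg
```

If only a handful of bits flip between slots, this costs a handful of group operations per slot regardless of committee size.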

@etan-status (Contributor)

as I wouldn't expect this to be an obvious optimization for the average core developer.

Not sure if that's necessary. For other aspects of the spec such as fork choice, practical implementations rely on proto array and there is no mention of any of it in the specs either. The additional optimization (beyond what this PR adds) is not even implemented in any of the clients (including nimbus). It shouldn't generally be the spec's job to explain how BLS can be optimized. But also, I'm not personally opposed to adding the extra ideas as well. It just appears slightly off for a normative spec document to add informal implementation ideas to it.

@jtraglia (Member)

It shouldn't generally be the spec's job to explain how BLS can be optimized. But also, I'm not personally opposed to adding the extra ideas as well. It just appears slightly off for a normative spec document to add informal implementation ideas to it.

Yeah, you're right. We don't normally mention optimizations in the specifications. I suppose the reason I want to call these out here (at least the first optimization) is that I believe this is the intended purpose of the aggregate_pubkey field. It's clear that most clients do not use it this way, so there should be a call-out.

@dapplion (Member)

We usually don't mention optimizations in the spec, to keep it minimal. I would leave these optimizations as notes separate from the spec.

@b-wagn (Contributor, Author) commented Mar 25, 2025

From my side, we can remove the second note about the two optimizations; I am fine with that. But I still think the first note is relevant because it explains why we have the aggregate_pubkey field. Without that note, it is very confusing that we have a field that is never used.

@mkalinin (Contributor)

We usually don't mention optimizations in the spec, to keep it minimal. I would leave these optimizations as notes separate from the spec.

In this particular case we already have a bit of optimization in the spec and in the protocol, namely the aggregate_pubkey field, and IMHO it is reasonable to make the spec utilise this field.

@arnetheduck (Contributor)

make the spec utilise this field

This looks like the best option to me, i.e., rewrite the spec to actually perform said optimization. This has plagued the spec for the longest time: it is not representative of the underlying computational complexity of the algorithms used, especially all the quadratic algorithms that can trivially be reduced to linear ones with a bit of code reorganization and caching of some partially computed values.

Having accurate computational complexity in the spec is key for understanding the characteristics of performing a computation, much like gas pricing in eth1.

@jtraglia (Member) commented Apr 2, 2025

This looks like the best option to me, i.e., rewrite the spec to actually perform said optimization.

@etan-status @arnetheduck would one of you be interested in making a new PR with this?
