CPU overhead of wrapping BLS points in tuples or custom types #1238
dkaidalov started this conversation in Core language features
Replies: 1 comment, 3 replies
That's correct and expected. Points cannot be stored uncompressed in tuples (or in any data structure, for that matter), so there is an implicit compression and decompression happening every time tuples are involved. If you need to pass around some G1/G2 elements or Miller-loop (ML) results, this is typically where you'd use continuations and backpassing to avoid creating intermediate structures.
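A minimal sketch of the continuation style suggested above. The helper `with_points` is hypothetical (not part of the Aiken standard library), and the sketch assumes the `bls12_381_g1_add` builtin is exposed via `aiken/builtin`:

```aiken
use aiken/builtin

// Hypothetical helper: rather than returning (p, q) in a tuple, which
// would compress both points on construction and decompress them again
// on destructuring, hand them directly to a continuation so they stay
// as in-memory, uncompressed points.
fn with_points(
  p: G1Element,
  q: G1Element,
  k: fn(G1Element, G1Element) -> result,
) -> result {
  k(p, q)
}

fn example(x: G1Element, y: G1Element) -> G1Element {
  with_points(x, y, fn(p, q) { builtin.bls12_381_g1_add(p, q) })
}
```

With Aiken's backpassing sugar, the callback can be written to read like an ordinary binding while still avoiding the intermediate tuple.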
While implementing a SNARK verifier in Aiken, we encountered a large CPU overhead when carrying BLS points (G1Element and G2Element) in tuples or custom types.
It seems that putting a point in a tuple and then destructuring the tuple incurs an additional decompression, which is very expensive.
Here are the tests:
From the benchmarks it looks like a point is compressed/decompressed under the hood when put into a tuple.
Such behavior is quite unexpected; is it documented anywhere?
I don't see this overhead in Plinth.
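For concreteness, the kind of code that triggers the hidden round-trip looks roughly like this. This is an illustrative sketch, not the actual benchmark; `pack` and `use_points` are made-up names, and the sketch assumes `bls12_381_g1_add` is exposed via `aiken/builtin`:

```aiken
use aiken/builtin

// Building the tuple compresses both points (a compressed G1 point is
// 48 bytes), and destructuring it below decompresses them again before
// they can be used, which is where the extra CPU cost comes from.
fn pack(p: G1Element, q: G1Element) -> (G1Element, G1Element) {
  (p, q)
}

fn use_points(x: G1Element, y: G1Element) -> G1Element {
  let (p, q) = pack(x, y)
  builtin.bls12_381_g1_add(p, q)
}
```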