Synthetic PoRep - no buffer between seal and commit #245
Replies: 12 comments
-
Thanks for the explanation. I have a few basic questions.
-
That is roughly it; this is what the proofs code calls "vanilla proofs". They are nodes in the DAG structure of the DRG graph, proving that you know the path from the root to the given leaves.
It should be around 26GiB, which is ~7% of what needs to be stored today (11 layers ~ 350GiB). Like today's 12x layers data, it can be removed after the sector is committed on chain.
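A quick sanity check of the figures in this comment. The 32 GiB sector size is an assumption (the comment only states "11 layers ~ 350GiB"):

```python
# Sanity check: layers data kept today vs. the ~26GiB of vanilla proofs.
# ASSUMPTION: a 32 GiB sector with 11 SDR layers of layer data.
SECTOR_GIB = 32
LAYERS = 11

layer_data_gib = SECTOR_GIB * LAYERS       # 352 GiB of layers data kept today
synthetic_proofs_gib = 26                  # vanilla proofs kept instead

ratio = synthetic_proofs_gib / layer_data_gib
print(f"{layer_data_gib} GiB -> {synthetic_proofs_gib} GiB ({ratio:.1%})")
# prints: 352 GiB -> 26 GiB (7.4%)
```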
The SynthChallengesNumber is chosen such that grinding CommR is infeasible for an attacker (2^80 grinding attempts to reduce the security of PoRep by 1 bit). Grinding CommR is not in itself an expensive process.
Yes. Currently, we verify challenges in sets of 22 per SNARK proof, and miners need to produce 8 "partition" proofs, each proving a different set of challenges. We could reduce the number of Synthetic Challenges while maintaining the same protocol security by increasing the number of verified challenges (in increments of 22), but this incurs an additional cost of producing and verifying the Commit proof.
Increasing the number of verified challenges would require miners to produce more proofs following the same scheme, which does not require a trusted setup. The Synthetic PoRep protocol itself changes the manner in which challenges are generated, which also does not require circuit changes (chosen challenges are fed to the SNARK), and thus doesn't require another phase 2 trusted setup.
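The arithmetic behind the "sets of 22" point can be sketched as follows; `partitions_needed` is a hypothetical helper for illustration, not a function in the proofs code:

```python
# On-chain security comes from verified challenges, which are added in
# whole partitions of 22 challenges per SNARK proof.
import math

CHALLENGES_PER_PARTITION = 22   # challenges proven per partition SNARK
PARTITIONS = 8                  # partition proofs per Commit today

verified = CHALLENGES_PER_PARTITION * PARTITIONS
print(verified)  # 176 challenges verified on-chain today

def partitions_needed(target_challenges: int) -> int:
    """Partitions required to verify at least `target_challenges` challenges."""
    return math.ceil(target_challenges / CHALLENGES_PER_PARTITION)

print(partitions_needed(200))  # 10 partitions, i.e. 220 verified challenges
```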
-
Back of the envelope calculation: assuming 275k challenges (the number could still go down), and naively storing everything you need (there is a lot of repeated data in the Merkle trees), each challenge requires storing 104,128 bytes,
for a total of ~26 GiB instead of 448 GiB.
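Reproducing the back-of-envelope numbers above (naive storage, ignoring the repeated Merkle-tree data shared between challenges):

```python
# Naive storage estimate for the precomputed vanilla proofs.
SYNTH_CHALLENGES = 275_000       # assumed challenge count from the comment
BYTES_PER_CHALLENGE = 104_128    # bytes stored per challenge

total_gib = SYNTH_CHALLENGES * BYTES_PER_CHALLENGE / 2**30
print(f"{total_gib:.1f} GiB")    # prints: 26.7 GiB (vs 448 GiB of layer data)
```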
-
I think you mean batching.
-
I believe it is referring to proof aggregation for ProveCommit. Right now miners need to keep huge files around for C1, occupying resources, which prevents them from aggregating the maximum/larger number of proofs that they could.
-
Yeah - reading it again, it seems like it will not help with PreCommit batching.
-
It will help with PreCommit batching as well, as you don't need to keep the 448GB around until Commit.
-
If you miss something we are all doomed 🥇 Will the 450GB even be created and stored, or does this eliminate it completely "on the fly"? If it does get created, when exactly can it be deleted?
-
The 450GB is created as part of the sealing process. After sealing is completed, the Synthetic Challenges can be generated and vanilla proofs for these challenges can be computed (the 26GB Nicola is talking about); the layers data can then be removed, leaving you with whatever data remains after the sector is Committed on-chain plus the 26GB of vanilla proofs. The vanilla proofs can then be removed once the sector is Committed.
-
This looks great. With this research and implementation in progress, there is one thing I would like to bring up for discussion: I would suggest we discuss it in our next co-dev meeting. @arajasek
-
I assume generating the extra 26GB of vanilla proofs will require extra seal time, so it's basically a tradeoff between seal time and disk space. It would be great if we could choose between retaining the layer files or removing them (with the extra vanilla-proof generation step), since different hardware setups may have enough disk for the layer files.
-
From my estimates, the
-
Problem
The PoRep protocol today requires miners to store 12x the sector size of layers data, from the time it is created during the replication step until the sector is proven in the on-chain Commit step.
This leads to major inefficiencies: the storage space needed to perform replication is occupied by the layers data for a prolonged period of time. This also prevents some miners from efficiently using aggregation.
@Kubuxu @nicola @lucaniz @rosariogennaro
Solution
Synthetic PoRep reduces the space used up by shrinking the set of challenges that might be chosen during the interactive Commit step from all possible challenges down to a predetermined number that is feasible to precompute.
We propose the Synthetic PoRep protocol described below.
Protocol Flow
The following section describes the high-level flow of the protocol. Differences between the currently deployed PoRep and Synthetic PoRep are limited to challenge generation and additional capabilities for the miner.
1. The miner generates SynthChallengesNumber (currently ~270k) challenges Chall.
2. The miner computes SynthChallengesNumber vanilla proofs and saves them for future use.
3. VerifiedChallengesNumber (176) challenges to be verified on-chain are selected from Chall.
4. The miner takes the VerifiedChallengesNumber vanilla proofs generated earlier corresponding to the selected challenges and computes SNARK proofs of these challenges.
5. The chain verifies VerifiedChallengesNumber challenges by selecting VerifiedChallengesNumber indices out of SynthChallengesNumber and computing them.
Parameter Selection
Based on the current analysis, the following parameters are under consideration:
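As context for these parameters, the protocol flow can be sketched as below. This is not the real proofs code: the SHA-256 derivation scheme and the function names are assumptions made purely for illustration.

```python
# Illustrative sketch of the Synthetic PoRep challenge flow.
# ASSUMPTIONS: challenge derivation via SHA-256 and all names here are
# hypothetical; the actual scheme lives in the proofs code.
import hashlib

SYNTH_CHALLENGES_NUMBER = 270_000    # precomputed synthetic challenge set size
VERIFIED_CHALLENGES_NUMBER = 176     # challenges actually proven on-chain (8 x 22)

def synth_challenges(comm_r: bytes, sector_id: int) -> list[int]:
    """Step 1: deterministically derive all synthetic challenges from CommR."""
    chall = []
    for i in range(SYNTH_CHALLENGES_NUMBER):
        h = hashlib.sha256(comm_r + sector_id.to_bytes(8, "big") + i.to_bytes(4, "big"))
        chall.append(int.from_bytes(h.digest()[:8], "big"))
    return chall

def select_verified(randomness: bytes) -> list[int]:
    """Step 3: chain randomness selects which precomputed indices get proven."""
    idxs = []
    for i in range(VERIFIED_CHALLENGES_NUMBER):
        h = hashlib.sha256(randomness + i.to_bytes(4, "big"))
        idxs.append(int.from_bytes(h.digest()[:8], "big") % SYNTH_CHALLENGES_NUMBER)
    return idxs

# Steps 2, 4, 5: the miner precomputes vanilla proofs for every challenge in
# synth_challenges(...), deletes the layers data, and at Commit produces SNARK
# proofs only for the select_verified(...) subset, which the chain re-derives
# and verifies.
```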