This is a braindump and conversation starter on the topic of: What is the likely critical path for Mikan, as a DA layer, to be successful? Put differently, this is a braindump on Mikan system requirements.
I will take the perspective of system architecture, with some bias toward the underlying infra (consensus, p2p). I will explicitly not think about product distribution (which is arguably more important).
These are the system's properties on the critical path to success, in rough descending order by importance:
1. Throughput
the network's primary job is to marshal big blocks and make them available
this means (a) the consensus engine may need to support features such as block streaming, optimistic block building, or compact blocks; and (b) the p2p layer will need to be optimized for efficiency (bandwidth conscious)
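To make the bandwidth-consciousness concrete, here is a back-of-envelope sketch of dispersal cost. The coding scheme and all numbers are my assumptions for illustration, not Mikan's actual design: a block is erasure-coded k-of-n (any k shares reconstruct it) and the proposer uploads one share to each of the n validators.

```rust
// Back-of-envelope dispersal cost for one block, assuming (hypothetically)
// k-of-n Reed-Solomon coding: any k shares reconstruct the block, and the
// proposer uploads one share to each of the n validators.
fn dispersal_cost(block_bytes: u64, n: u64, k: u64) -> (u64, u64) {
    let share_bytes = block_bytes / k;     // size of one coded share
    let proposer_upload = share_bytes * n; // total bytes the proposer sends
    (share_bytes, proposer_upload)
}

fn main() {
    // Illustrative: 32 MiB block, 64 validators, rate-1/2 coding (k = 32):
    // each share is 1 MiB and the proposer uploads 64 MiB per block.
    let (share, upload) = dispersal_cost(32 * 1024 * 1024, 64, 32);
    println!("share = {} B, proposer upload = {} B", share, upload);
}
```

The takeaway is that proposer upload scales with n/k, which is why both compact-block techniques and a bandwidth-efficient p2p layer matter at high throughput.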
2. Verifiability of availability
besides blocks being big, it should be very convenient for clients to verify that blocks (i) have been dispersed with integrity and (ii) are available for download
zk-friendliness and a Rust-based stack help here; this angle seems covered by the project's original principles and the roadmap published to date
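For intuition on why verifying availability can be cheap for clients: in a 2x erasure-coded sampling scheme (an assumption here, not a confirmed Mikan design), an adversary must withhold at least half the coded shares to make a block unrecoverable, so each uniform random sample catches withholding with probability at least 1/2:

```rust
// Lower bound on a light client's chance of catching withheld data after
// `samples` uniform random share queries, assuming at least half of the
// shares must be withheld for the block to be unrecoverable (2x coding).
fn detection_probability(samples: u32) -> f64 {
    1.0 - 0.5_f64.powi(samples as i32)
}

fn main() {
    // ~15 samples already give > 99.99% detection probability
    println!("{:.6}", detection_probability(15));
}
```

This exponential decay is what lets verification cost stay roughly constant even as blocks grow.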
3. Latency
less important than throughput; still, ideally block latency should be sub-second
there are two big contributors to latency at a high level
(b) the number of phases in the consensus algorithm: here you're covered, as Malachite implements Tendermint, which has 3 one-way message delays (compared to, e.g., HotStuff, which involves 5-7 message delays depending on the variant)
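As a rough model (the delay and dissemination numbers below are mine, purely illustrative), commit latency is roughly the number of one-way protocol message delays times the one-way network delay, plus the time to disseminate the block itself:

```rust
// Rough commit-latency model: protocol message delays x one-way network
// delay, plus time to disseminate the block. All numbers are illustrative.
fn commit_latency_ms(one_way_delay_ms: f64, message_delays: u32, dissemination_ms: f64) -> f64 {
    one_way_delay_ms * message_delays as f64 + dissemination_ms
}

fn main() {
    // With an 80 ms one-way delay and 200 ms to stream a large block:
    // Tendermint's 3 delays vs a 7-delay protocol.
    println!("3 delays: {} ms", commit_latency_ms(80.0, 3, 200.0)); // 440 ms
    println!("7 delays: {} ms", commit_latency_ms(80.0, 7, 200.0)); // 760 ms
}
```

Note that once blocks are big, the dissemination term can dominate, which ties latency back to the throughput optimizations above (block streaming, optimistic building).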
4. Scale
I can't think of a good reason why the network would require or benefit from a very large validator set (say over 100 validators or in the thousands)
there are a couple of reasons for this: (i) network size correlates inversely with throughput (a bigger validator set means lower throughput), and (ii) validator actions will be verifiable for integrity, so a large set is not needed for trust
I can see the need for a lower bound on validator set size: this defines the degree of redundancy for data blobs and may also affect data retrievability latency; I am not sure yet what a good number is here, but potentially in the O(10) range
not sure either about the incentivization model: a proof-of-authority model would be most straightforward, would get you to mainnet fastest, and is also a good fit for a small validator set; alternatively, proof-of-stake, though it's unclear where the stake would be managed (a sovereign L1, or restaked from an existing L1?)
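One way to ground the O(10) intuition: the classical BFT bound requires n = 3f + 1 validators to tolerate f Byzantine faults, and a k-of-n coded blob multiplies storage by n/k. These formulas are standard; the concrete numbers plugged in are my own illustration:

```rust
// Minimum validator-set size to tolerate f Byzantine validators (classical
// BFT bound n = 3f + 1), and the storage blow-up of coding a blob k-of-n
// across that set.
fn min_validators(f: u32) -> u32 {
    3 * f + 1
}

fn storage_overhead(n: u32, k: u32) -> f64 {
    n as f64 / k as f64
}

fn main() {
    // Tolerating f = 3 faults already requires n = 10 validators (O(10)),
    // and a 5-of-10 coding gives 2x storage overhead for each blob.
    println!("n = {}", min_validators(3));
    println!("overhead = {}x", storage_overhead(10, 5));
}
```

So even a modest fault-tolerance target lands the lower bound in the O(10) range suggested above, without needing hundreds of validators.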