Introduce data structures for consensus-derived randomness using a commit-reveal scheme:
- Add ExtendedPosition struct with consensus targets (txSetHash, commitSetHash, entropySetHash) and pipelined leaves (myCommitment, myReveal)
- operator== excludes leaves to allow convergence with unique leaves
- add() includes ALL fields to prevent signature stripping attacks
- Add EstablishState enum for sub-phases: ConvergingTx, ConvergingCommit, ConvergingReveal
- Update Consensus template to use Adaptor::Position_t
- Add Position_t typedef to RCLConsensus::Adaptor and test CSF Peer

This is the foundational data structure work for the RNG implementation. The gating logic and entropy computation will follow.
- Serialize full ExtendedPosition in share() and propose()
- Deserialize ExtendedPosition in PeerImp using fromSerialIter()
- Add harvestRngData() to collect commits/reveals from peer proposals
- Conditionally call harvest via if constexpr for test compatibility
- Add clearRngState() call in startRoundInternal
- Reset estState_ in closeLedger when entering establish phase
- Implement three-phase RNG checkpoint gating:
  - ConvergingTx: wait for quorum commits, build commitSet
  - ConvergingCommit: reveal entropy, transition immediately
  - ConvergingReveal: wait for reveals or timeout, build entropySet
- Use if constexpr for test framework compatibility
…layer Add protocol definitions for consensus-derived entropy pseudo-transaction:
- ttCONSENSUS_ENTROPY = 105 transaction type
- ltCONSENSUS_ENTROPY = 0x0058 ledger entry type
- keylet::consensusEntropy() singleton keylet (namespace 'X')
- applyConsensusEntropy() handler in Change.cpp
- Added to isPseudoTx() in STTx.cpp

The entropy value is stored in the sfDigest field of the singleton ledger object. This provides the protocol foundation for same-ledger entropy injection.
…istration
- Implement injectEntropyPseudoTx() to combine reveals into the final entropy hash and inject it as a pseudo-tx into CanonicalTXSet in doAccept()
- Modify BuildLedger applyTransactions() to apply the entropy tx FIRST, before all other transactions, to prevent front-running
- Remove redundant explicit threading in applyConsensusEntropy(), as sfPreviousTxnID/sfPreviousTxnLgrSeq are set automatically by ApplyStateTable::threadItem()
- Register ttCONSENSUS_ENTROPY in applySteps.cpp dispatch tables (preflight, preclaim, calculateBaseFee, apply)
- Add ltCONSENSUS_ENTROPY to the InvariantCheck.cpp valid type whitelist
- Register ConsensusEntropy amendment (Supported::yes, DefaultNo)
- Gate entropy pseudo-tx injection behind amendment in doAccept()
- Gate preflight with temDISABLED when amendment not enabled
- Bump numFeatures 90 -> 91
- Exclude featureConsensusEntropy from default test environment to avoid breaking existing test transaction count assumptions
… entropy Three critical fixes that unblock the RNG commit-reveal pipeline:
- Remove entropy secret regeneration in the ConvergingTx -> ConvergingCommit transition that was overwriting the onClose() secret, breaking reveal verification against the original commitment
- Change ExtendedPosition operator== to compare txSetHash only, preventing a deadlock where nodes transitioning sub-states at different times would break haveConsensus() for all peers
- Self-seed own commitment and reveal into pending collections so the node counts toward its own quorum checks

Also adds ExtendedPosition_test with signing, suppression, serialization round-trip and equality tests, an iterator safety fix in BuildLedger, a wire compatibility early-return, and RNG debug logging throughout the pipeline.
During ConvergingCommit and ConvergingReveal sub-states, poll at 250ms instead of the default 1s ledgerGRANULARITY. This reduces total RNG pipeline overhead from ~3s to ~500ms while keeping the normal heartbeat interval unchanged for all other consensus phases.
…t/entropySet Build real ephemeral (unbacked) SHAMaps for commitSet and entropySet using ttCONSENSUS_ENTROPY entries with tfEntropyCommit/tfEntropyReveal flags. Reuse InboundTransactions pipeline for peer fetch/diff/merge with no new classes. Encode NodeID in sfAccount to avoid master-vs-signing key mismatch. Add isPseudoTx guard in ConsensusTransSetSF to prevent pseudo-tx submission. Route acquired RNG sets via isRngSet/gotRngSet in NetworkOPs mapComplete.
Cache active UNL NodeIDs per round from UNL Report (in-ledger), falling back to getTrustedMasterKeys() on fresh testnets. Reject non-UNL validators at all entry points: harvestRngData, buildCommitSet, buildEntropySet, and handleAcquiredRngSet.
Prevents spoofed SHAMap entries by embedding verifiable proof blobs (proposal signature + metadata) in each commit/reveal entry via sfBlob.
- Store ProposalProof in harvestRngData (peers) and propose() (self)
- serializeProof: pack proposeSeq/closeTime/prevLedger/position/sig
- verifyProof: reconstruct signingHash, verify against public key
- Embed proofs in buildCommitSet/buildEntropySet via sfBlob field
- Verify proofs in handleAcquiredRngSet (both diff and visitLeaves paths)
- Add stall fix: gate ConvergingTx on timeout when commits unavailable
- Clear proposalProofs_ in clearRngState
During ConvergingTx all RNG data arrives via proposal leaves, so fetching a peer's commitSet before we've built our own just generates unnecessary traffic. Only fetch commitSetHash once in ConvergingCommit+, and entropySetHash once in ConvergingReveal.
…servers Prevent grinding attacks by verifying sha512Half(reveal, pubKey, seq) matches the stored commitment before accepting a reveal. Also move cacheActiveUNL() into startRound so non-proposing nodes (exchanges, block explorers) correctly accept RNG data instead of diverging with zero entropy.
Add rngPIPELINE_TIMEOUT (3s) to replace ledgerMAX_CONSENSUS (10s) in the commit/reveal quorum gates. Late-joining nodes enter as proposing=false and cannot contribute commitments until promoted, so waiting beyond a few seconds just delays the ZERO-entropy fallback and penalizes recovery. Add inline comments documenting the late-joiner constraint and SHAMap sync's role as a dropped-proposal safety net.
…shold on active UNL When fewer participants are present than the quorum threshold, skip the RNG commit wait immediately instead of waiting the full pipeline timeout. Also base the quorum on activeUNLNodeIds_ (UNL Report with fallback) instead of the full trusted key set, so the denominator reflects who is actually active on the network.
…t resource charging
- Change hasMinimumReveals() to wait for reveals from ALL committers (pendingCommits_.size()) instead of 80% quorum. The commit set is deterministic, so we know exactly which reveals to expect. rngPIPELINE_TIMEOUT remains the safety valve for crash/partition. Fixes reveal-set non-determinism causing entropy divergence on 15-node testnets.
- Resource manager: preserve the port for loopback addresses so local testnet nodes each get their own resource bucket instead of all sharing one on 127.0.0.1 (causing rate-limit disconnections).
- Make the RNG fast-poll interval configurable via the XAHAU_RNG_POLL_MS env var (default 250ms) for testnet tuning.
…se entry Previously rngPIPELINE_TIMEOUT (3s) was measured from round start, meaning txSet convergence could eat into the reveal budget. Now reveals get their own rngREVEAL_TIMEOUT (1.5s) measured from the moment we enter ConvergingReveal, ensuring consistent time for reveal collection regardless of how long txSet convergence took.
…seq=0 proofs Wait for commits from last round's proposers (falling back to activeUNL on cold boot) instead of 80% of UNL. This ensures all nodes build the commitSet at the same moment with the same entries. Split proof storage: commitProofs_ (seq=0 only, deterministic) and proposalProofs_ (latest with reveal, for entropySet). Previously the proof blob contained whichever proposeSeq was last seen, causing identical commits to produce different SHAMap hashes across nodes. 20-node testnet: all nodes now produce identical commitSet hashes.
…zero entropy When expected proposers don't all arrive before rngPIPELINE_TIMEOUT, check if we still have quorum (80% of UNL). If so, build the commitSet with available commits and continue to reveals. Only fall back to zero entropy when truly below quorum. Previously any missing expected proposer caused a full timeout with zero entropy for that round. Now: kill 3 of 20 nodes → one 3s timeout round per kill but entropy preserved (17/16 quorum met).
Update inline comment to reflect that hasQuorumOfCommits() checks expected proposers first, with 80% of active UNL as fallback.
…ptive quorum setExpectedProposers() now filters incoming proposers against the on-chain UNL Report, preventing non-UNL nodes from inflating the expected set and causing unnecessary timeouts. quorumThreshold() uses expectedProposers_.size() (recent proposers ∩ UNL) when available, falling back to full UNL Report count on cold boot. This adapts to actual network conditions rather than relying on a potentially stale UNL Report that over-counts offline validators. Renamed activeUNLNodeIds_/cacheActiveUNL/isActiveUNLMember to unlReportNodeIds_/cacheUNLReport/isUNLReportMember to make the on-chain data source explicit.
Port the Hook API surface from the tt-rng branch, adapted to use our commit-reveal consensus entropy (ltCONSENSUS_ENTROPY / sfDigest).

Hook APIs:
- dice(sides): returns a random int [0, sides) from consensus entropy
- random(write_ptr, write_len): fills buffer with 1-512 random bytes

Internal fairRng() derives per-execution entropy by hashing: ledger seq + tx ID + hook hash + account + chain position + execution phase + consensus entropy + an incrementing call counter. This ensures each call within a single hook execution returns different values.

Quality gate: fairRng returns empty (TOO_LITTLE_ENTROPY) if fewer than 5 validators contributed, preventing weak entropy from being consumed by hooks.

Also adds sfEntropyCount and sfLedgerSequence to the consensus entropy SLE and pseudo-tx, enabling the freshness and quality checks needed by the Hook API.
Generate deterministic entropy in standalone mode so Hook APIs (dice/random) work for testing. Add test suite verifying SLE creation on ledger close.
- ADD_HOOK_FUNCTION for dice/random (was defined+declared but not registered)
- Relax fairRng() seq check to allow previous ledger entropy (open ledger)
- Add hook tests: dice range, random fill, consecutive calls differ
- TODO: open-ledger entropy semantics need further thought
Add dice/random externs, TOO_LITTLE_ENTROPY error code, sfEntropyCount field code, and ttCONSENSUS_ENTROPY transaction type to hook SDK headers.
Standalone synthetic entropy produces identical dice(6) results for consecutive calls due to hash collision mod 6. Switch to dice(1000000) and add diagnostic output for return code debugging.
clang-14 (CI) does not implement P2036R3 — structured bindings cannot be captured by lambdas. Use explicit .first/.second instead.
Address findings from code review:
- dice(0): add early return with INVALID_ARGUMENT before the modulo operation to prevent undefined behavior
- fromSerialIter: return std::optional to safely reject malformed payloads (truncated, unknown flag bits, trailing bytes) instead of throwing
- Update all callers (PeerImp, RCLConsensus, tests) for the optional
- Add unit tests for the dice(0) error code and 7 malformed wire cases
Add @@start/@@EnD comment markers to key RNG pipeline sections for automated documentation extraction. No logic changes.
Add @@start/@@EnD comment markers to pseudo-tx submission filtering, fast-polling, local testnet resource bucketing, and test environment gating. No logic changes.
# Conflicts:
#	src/ripple/app/hook/Guard.h
#	src/ripple/app/hook/applyHook.h
#	src/ripple/app/tx/impl/SetHook.cpp
# Conflicts:
#	hook/extern.h
#	src/ripple/app/hook/hook_api.macro
#	src/ripple/protocol/Feature.h
#	src/ripple/protocol/impl/Feature.cpp
featureConsensusEntropy: Decentralized Secure Randomness
Adding randomness to deterministic consensus sounds simple until you try to do it without breaking safety. This PR implements Same-Ledger Usable Randomness: finalizing entropy after user intent is locked, but before normal execution in that same ledger.
🔎 Review Scope
`featureConsensusEntropy` is `DefaultNo`; behavior is inert until enabled by amendment vote. Key tests: `ConsensusEntropy_test` (Hook `dice()`/`random()`, fallback semantics) and `ExtendedPosition_test` (serialization compatibility and malformed wire cases).

🛠 How It Works (The Final Solution)
The architecture centers on converging on signed input sets rather than voting on a derived output hash. This ensures that every node can independently verify and reconstruct the final result.
1. Transport: Piggybacked Proposals
The `ConsensusProposal` wire format is extended via `ExtendedPosition`. Most entropy data (commitments and reveals) flows through existing proposal gossip with low incremental payload overhead on the fast path; the consensus latency cost comes from the added sub-state progression and timeouts. `ExtendedPosition::operator==` only compares the `txSetHash`, so RNG sub-state differences never stall the core consensus on user transactions.

2. Pipelined Sub-states
RNG progression runs inside internal `establish` sub-states. These are checkpoints within the existing consensus cadence:

- `ConvergingTx`: normal transaction convergence while harvesting entropy commitments.
- `ConvergingCommit`: locking the `commitSet` once an 80% quorum of trusted proposers is reached.
- `ConvergingReveal`: targets reveals from 100% of known committers, bounded by timeout/fallback paths (including the 1.5s reveal timeout) to preserve liveness.

3. SHAMap Union Convergence
Harvested commitments and reveals are stored in ephemeral, unbacked SHAMaps. Nodes reuse the `InboundTransactions` pipeline to fetch only the missing leaves from peers.

4. Synthetic Injection & Same-Ledger Execution
Once reveals are collected, the final entropy is computed deterministically (`sha512Half(sorted_reveals)`). During `buildLCL` (Ledger Construction), the node locally synthesizes a `ttCONSENSUS_ENTROPY` pseudo-transaction.

⚓ Hook API Integration
Provides two new deterministic WebAssembly APIs for Hook developers:
- `dice(sides)`: returns a fair integer from `0` to `sides-1`.
- `random(write_ptr, write_len)`: fills a buffer with cryptographically secure consensus-derived randomness.

🛡 Safety & Liveness
🛠 Infrastructure & Support Logic
Several non-obvious plumbing changes were required to make the RNG pipeline robust and testable:
1. Fast Polling during RNG Transitions
To reduce the latency impact of the extra sub-states, the heartbeat timer accelerates to 250ms (tunable via `XAHAU_RNG_POLL_MS`) while in the RNG pipeline.

📍 src/ripple/app/misc/NetworkOPs.cpp:992-1005

2. Local Testnet Resource Charging
Connections from `127.0.0.1` normally share a single IP resource bucket. This change preserves the port for loopback addresses so that local multi-node testnets don't hit peer resource limits due to the increased RNG set traffic.

📍 src/ripple/resource/impl/Logic.h:113-117

3. Test Environment Gating
`featureConsensusEntropy` is excluded from default `jtx::Env` tests to prevent its automatic pseudo-tx injection from breaking existing test suites that rely on specific transaction counts.

📍 src/test/jtx/Env.h:86-89

4. Pseudo-transaction Filtering
Internal metadata (commits/reveals) is stored as pseudo-transactions in ephemeral SHAMaps for transport. This logic ensures they are filtered out and never submitted to the actual transaction processing engine.
📍 src/ripple/app/ledger/ConsensusTransSetSF.cpp:67-71

Guided Code Review (Projected Source)
This section follows runtime order so the code reads as a story, not a file dump.
1) Proposal payload: `ExtendedPosition` carries RNG sidecar fields

`ExtendedPosition` adds commit/reveal set identities and per-validator leaves while keeping tx-set identity explicit.

Non-obvious: `operator==` compares only `txSetHash` on purpose. That decouples core tx-set convergence from RNG sub-state drift, so RNG disagreements cannot deadlock transaction consensus.

`operator==` (equality firewall):
📍 src/ripple/app/consensus/RCLCxPeerPos.h:109-144

`add()` (signed serialization of all sidecar fields):
📍 src/ripple/app/consensus/RCLCxPeerPos.h:149-178

`fromSerialIter()` (legacy + extended wire decode):
📍 src/ripple/app/consensus/RCLCxPeerPos.h:199-242

2) Harvest stage: trust boundary + reveal verification
Incoming RNG data is rejected for non-UNL senders and reveals are accepted only if they match prior commitments.
Non-obvious: this is where "no-commit, no-reveal" is enforced. A reveal without a recorded commitment is dropped to block late reveal grinding.
📍 src/ripple/app/consensus/RCLConsensus.cpp:1791-1866

3) Quorum basis: expected proposers first, UNL fallback
This quorum helper is the bridge between ideal participation and real network conditions.
Non-obvious: quorum is not a blind static UNL count in steady-state; expected proposers drive the fast path and UNL membership is the safety fallback.
📍 src/ripple/app/consensus/RCLConsensus.cpp:1154-1169

4) State-machine checkpoints: `ConvergingTx -> ConvergingCommit -> ConvergingReveal`

This is the core sub-state progression inside establish; it gates commit quorum, reveal publication, timeout fallback, and entropy-set readiness.
Non-obvious: this is where liveness bounds are enforced (impossible quorum, pipeline timeout, reveal timeout) without stalling tx-set consensus, including the "timeout but still quorum => partial commit-set proceed" path.
📍 src/ripple/consensus/Consensus.h:1413-1593

5) SHAMap construction: commit/reveal sets with proof blobs
These build the ephemeral unbacked SHAMaps and embed proposal proof blobs (`sfBlob`) used for verification on fetch/merge.

Non-obvious: commit-set and entropy-set proof sources are intentionally different (`commitProofs_` for commit leaves, `proposalProofs_` for reveal leaves) to keep set construction deterministic across nodes.

📍 src/ripple/app/consensus/RCLConsensus.cpp:1272-1323
📍 src/ripple/app/consensus/RCLConsensus.cpp:1331-1379

6) Injection stage (A): final entropy selection with deterministic fallback
Entropy is selected from verified reveals, with explicit standalone/zero fallback behavior for liveness.
Non-obvious: standalone mode deliberately uses synthetic deterministic entropy for local/dev usability; production safety semantics come from the non-standalone commit/reveal path. Zero entropy is the deterministic liveness fallback when entropy fails.
📍 src/ripple/app/consensus/RCLConsensus.cpp:1692-1745

7) Injection stage (B): build and enqueue `ttCONSENSUS_ENTROPY`

The pseudo-tx is synthesized with deterministic fields and inserted into the canonical set.
Non-obvious: `sfEntropyCount` is part of the contract. It lets consumers distinguish healthy entropy contribution depth from fallback/low-participation rounds. Also, insertion into `retriableTxs` is legacy naming kept for compatibility with the existing build/apply pipeline.

📍 src/ripple/app/consensus/RCLConsensus.cpp:1749-1769

8) Build stage: entropy pseudo-tx executes before normal transactions
`BuildLedger::applyTransactions` applies `ttCONSENSUS_ENTROPY` first so in-ledger consumers can read it.
dice()/random()semantics shift immediately.📍
src/ripple/app/ledger/impl/BuildLedger.cpp:108-1459) Apply stage: write consensus entropy into the singleton ledger object
The transactor updates
keylet::consensusEntropy()with digest/count/ledger-sequence deterministically.Non-obvious: singleton-key write means there is exactly one consensus-entropy state target per ledger; deterministic write target is as important as deterministic value.
📍
src/ripple/app/tx/impl/Change.cpp:242-25810) Wire anchor: proposal message carrying extended payload bytes
`TMProposeSet` is the network envelope used for proposal payload transport.

Non-obvious: despite the legacy field name/comment, `currenttxhash` carries serialized proposal-position bytes (`ExtendedPosition`) in this path; backward compatibility is preserved because the legacy 32-byte txSet-only form remains valid.

📍 src/ripple/proto/ripple.proto:218-234

[Architectural Retrospective]
The Road to Consensus-Native Randomness: A Retrospective
A narrative history of how the RNG architecture evolved from early `featureRNG` experiments into the final `featureConsensusEntropy` design.

Adding randomness to deterministic consensus sounds simple until you try to do it without breaking safety.
Consensus requires determinism: every honest node must compute the same state transition.
Randomness requires unpredictability: nobody should know the final value early enough to game it.
The requirement that made this hard was not just "randomness," but same-ledger usable randomness: finalize entropy after user intent is locked, but before normal execution in that same ledger.
That path was not linear.
Part I: What the First Branch Taught Us (`featRNG`)

The initial branch was aggressively practical: reuse existing transaction paths, avoid deep consensus surgery, and move fast.
Experiment 1: `ttRNG` looked straightforward, then failed quickly

The earliest model used a single transaction path (`ttRNG`) with validator-generated entropy.

It failed for a concrete reason: entropy bytes entered open-ledger transaction flow too early. That made the randomness path mempool-observable and timing-sensitive, so sophisticated actors could condition behavior around visible entropy before the round was fully sealed.
Very quickly, the branch moved toward a dual-model design (`ttENTROPY` + `ttSHUFFLE`) to try to close that timing gap.

Experiment 2: dual-model defense (`ttENTROPY` + `ttSHUFFLE`)

The next design split responsibilities:

- `ttENTROPY`: a UNL Validator Transaction (UVTx) — zero fee, seq=0, signed by the validator's ephemeral key, validated by UNLReport membership — used to submit blinded entropy hashes and later reveal them.
- `ttSHUFFLE`: a pseudo-transaction that derived extra entropy from proposal signatures, timed to land after the transaction set was frozen.

Conceptually, this was smart defense-in-depth. Operationally, it hit three structural problems:
Experiment 3: mitigation hacks and why they still were not enough
Deterministic self-shuffle and piggyback variants improved specific failure modes. They did not remove the deeper issue: the model remained timing-sensitive and complex under real asynchronous behavior.
This was the "env var city" period (`XAHAU_SELF_SHUFFLE`, `XAHAU_PIGGYBACK_SHUFFLE`, and briefly `XAHAU_AUTO_ACCEPT_SHUFFLES`): useful for exposing failure boundaries, but also a clear signal that the architecture was being patched against the grain of consensus.

Experiment 4: dedicated shuffle phase (`Open -> Establish -> Shuffle -> Validate`)

The branch then tried full structural separation: a top-level shuffle phase, custom RNG message flow (`TMRNGProposal`), an `RNGService` managing commits/reveals in simple `std::map`s, and a `forceRevealPhase()` sync point to keep nodes aligned.
But the phase itself was abandoned:
The conclusion from Part I was precise:
commit/reveal was the right cryptographic primitive, but the transport/convergence model was wrong.
The final commit on the initial branch landed one more practical insight: entropy participation should track actual establish-round participation (
establishProposers), not just static UNL membership. That expected-participant logic survived into the final architecture even as the dedicated shuffle phase did not.Part II: The Trap We Nearly Chose (Scalar Opinion Convergence)
The seductive simplification was to treat entropy like any disputed scalar:
"let nodes publish their computed entropy value and avalanche-converge on the majority."
A lightweight discrete-event simulator (
sim/rng_sim.cpp) was built to pressure-test this assumption under realistic latency and packet asymmetry. (This was a quick prototype model, not a faithful rippled consensus simulator — but it was sufficient to expose the core pathology.)This fails for a reason that became impossible to ignore:
When node A computes from set
S_Aand node B fromS_B, andS_A != S_B, their scalars are unrelated.At that point, you face a bad fork in design philosophy:
The final architecture deliberately chose option 2.
Part III: The Clean-Slate Branch (`featureConsensusEntropy`)

The new branch started as consensus documentation. That documentation work clarified failure boundaries so sharply that it became a from-scratch implementation effort.
This was not a rename exercise. It was selective reconstruction:
In other words: the primitive survived, the convergence model changed.
Hooks-facing RNG APIs such as `dice()` and `random()` were among the pieces carried forward and finalized in this architecture.

Breakthrough 1: converge on inputs, not output opinions
The core shift was simple and profound:
do not vote on final entropy values; converge on signed input sets.
Breakthrough 2: proposal-carried leaves + set identities
`ExtendedPosition` carries:

- `myCommitment`
- `myReveal`
- `commitSetHash`
- `entropySetHash`

Fast path: normal proposal traffic carries most of what nodes need.
Safety net: SHAMap-backed set identity enables deterministic reconciliation when packets drop or nodes lag.
Breakthrough 3: equality firewall
`ExtendedPosition::operator==` compares `txSetHash` only. That keeps core tx-set convergence from being held hostage by RNG sub-state timing differences while still allowing entropy sub-state convergence to proceed and reconcile.
Breakthrough 4: sub-states, not a top-level RNG phase
Instead of adding another global phase boundary, the design runs RNG progression inside establish sub-states:
- `ConvergingTx`
- `ConvergingCommit`
- `ConvergingReveal`

This preserved the existing consensus cadence while integrating entropy convergence where it belongs.
Breakthrough 5: SHAMap union convergence
Union merge is monotonic: sets grow as verified leaves arrive.
Scalar opinions can oscillate; verified set growth does not.
And SHAMap mechanics keep reconciliation practical:
So overhead is low on the golden path, with bounded recovery cost when reconciliation is needed.
Part IV: Hardening Moves That Made It Viable
The architecture became robust only after concrete hardening steps, each forced by a specific failure mode observed during testnet runs:
- Proof blobs (`sfBlob` in SHAMap entries): without these, any peer could inject spoofed commit/reveal entries during a Cold Path fetch. Embedding the proposal signature makes every contribution independently verifiable.
- Reveal verification (`sha512Half(reveal, pubKey, seq) == commitment`): without this, a validator could commit to one value and reveal another (grinding attack).
Concrete progression: before the reveal-convergence fixes, a 15-node testnet produced 7 distinct entropy values in the same round (nodes collected different 80% subsets of reveals). After these hardening steps, 20-node testnets reported identical commit-set hashes and ~2.2s convergence with bounded recovery under node loss. (That ~2.2s came from aggressively tuned low-ms settings, including
XAHAU_RNG_POLL_MSand tight timeout windows; broader production topologies may need larger windows.)Part V: The Masterstroke (
ttCONSENSUS_ENTROPY)Once nodes converge on the relevant verified input set, final entropy is computed deterministically (
sha512Half(sorted_reveals)) and injected as a synthetic pseudo-transaction:ttCONSENSUS_ENTROPY.This injection happens locally in
doAccept(), right before ledger construction. The pseudo-transaction is sorted to execute first inBuildLedger.cpp, so all user transactions and Hooks executing in that same ledger block can consume the entropy via thedice()andrandom()WebAssembly APIs.Why no final gossip round on the derived scalar?
Because gossip resolves disagreements.
At this point, the system has already converged on verified inputs; the output function is deterministic. Forcing an extra opinion round adds delay and bandwidth cost without cryptographic benefit.
If a node suffers a local fault and synthesizes the wrong pseudo-tx, its resulting ledger hash will mismatch the network supermajority. Its validations will fail, and it will safely fork off and fetch the correct ledger from peers. Ledger safety is preserved by the validation phase, not the deliberation phase.
Safety and Liveness Framing
A useful framing that survived all iterations:
This matches the formal XRPL LCP framing in Chase & MacBrough (2018): Example 5 captures the key intuition that deliberation outcomes can vary, while fork safety itself is anchored by validation-phase overlap conditions formalized in Theorem 8.
This safety claim is specifically about ledger agreement, not about maximum entropy strength under adversarial withholding.
That distinction prevented a lot of category errors in design discussions.
Closing
The final `featureConsensusEntropy` architecture is "least-bad" in the engineering sense.

From `ttRNG` to dual-tx entropy, to dedicated shuffle phase, to scalar-convergence rejection, the trajectory kept pointing to the same destination: commit/reveal inputs, SHAMap set identity, union reconciliation, deterministic synthetic injection, and bounded fallback behavior.