feat(cac): add chunk-based message serialization for network transmission #63
Merged
Conversation
Introduces chunked message types and `HeapArray` to handle large cryptographic data structures while staying within the 4 MiB network frame limit and avoiding LLVM optimization issues with large fixed-size arrays.

Key changes:
- Add `mosaic-heap-array` crate: heap-allocated fixed-size arrays that avoid LLVM optimization hangs with 250+ element arrays of complex types
- Replace monolithic messages with chunk-based types in `mosaic-cac-types`:
  - `CommitMsgChunk`: 172 chunks (1 per wire), ~1.4 MB each
  - `ChallengeResponseMsgChunk`: 174 chunks (1 per circuit), ~1.68 MB each
  - `AdaptorMsgChunk`: 4 chunks (1 per deposit wire), ~1.6 MB each
  - `ChallengeMsg`: unchanged (small enough for a single frame)
- Update protocol type aliases to use `HeapArray` for large arrays:
  - `WideLabelWirePolynomialCommitments`, `WideLabelWireShares`
  - `CircuitInputShares`, `ChallengeIndices`, `WideLabelWireAdaptors`
- Add comprehensive serialization tests and benchmarks
- Implement `CanonicalSerialize`/`CanonicalDeserialize` for all message types
- Reorganize net crates under the `crates/net/` directory
- Add architecture and network documentation

Note: The protocol state machine (`cac/protocol`) requires further updates to handle the chunk-based message flow.
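The `HeapArray` idea described above can be sketched as follows. This is a minimal illustration of the design the description outlines, not the crate's actual API; all names and signatures here are assumptions.

```rust
// Hypothetical sketch of a HeapArray: a fixed-size array whose storage lives
// on the heap, so large element counts neither blow the stack nor force LLVM
// to optimize a huge by-value array type. Illustrative only.
#[derive(Debug, Clone, PartialEq)]
struct HeapArray<T, const N: usize> {
    data: Box<[T]>, // invariant: always exactly N elements
}

impl<T, const N: usize> HeapArray<T, N> {
    /// Build from a Vec, rejecting any length other than N.
    fn from_vec(v: Vec<T>) -> Result<Self, usize> {
        if v.len() == N {
            Ok(Self { data: v.into_boxed_slice() })
        } else {
            Err(v.len())
        }
    }

    /// The length is a compile-time constant, like a plain [T; N].
    fn len(&self) -> usize {
        N
    }
}

impl<T, const N: usize> std::ops::Index<usize> for HeapArray<T, N> {
    type Output = T;
    fn index(&self, i: usize) -> &T {
        &self.data[i]
    }
}
```

The key design point is that the length `N` stays in the type (so serialization code can rely on it, as with `[T; N]`) while the elements are a single heap allocation.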
Collaborator
Does this supersede #62?
Collaborator
Please open an issue for this. |
Collaborator
AaronFeickert
left a comment
I would be very cautious about support for unvalidated deserialization unless we're absolutely sure we need to support it for efficiency reasons. It's a huge footgun.
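The footgun can be made concrete with a toy sketch. Everything below (the modulus `Q`, the `validate` flag, the type names) is illustrative and not ark-serialize's actual code; it only shows why an unvalidated path is dangerous.

```rust
// Toy scalar in a field of order Q, deserialized with or without validation.
const Q: u64 = 2_305_843_009_213_693_951; // 2^61 - 1, a toy field order

#[derive(Debug, PartialEq)]
struct Scalar(u64);

#[derive(Debug, PartialEq)]
enum DeserError {
    InvalidData,
    TruncatedData,
}

fn deserialize_scalar(bytes: &[u8], validate: bool) -> Result<Scalar, DeserError> {
    let raw: [u8; 8] = bytes.try_into().map_err(|_| DeserError::TruncatedData)?;
    let v = u64::from_le_bytes(raw);
    if validate && v >= Q {
        // Validated path: out-of-range encodings are rejected outright.
        return Err(DeserError::InvalidData);
    }
    // Unvalidated path: a non-canonical value slips through unchecked --
    // this is the footgun the comment above warns about.
    Ok(Scalar(v))
}
```

With `validate = false`, a byte string encoding a value at or above `Q` deserializes "successfully" into a scalar that no honest serializer could have produced, and every downstream invariant that assumed canonical inputs silently breaks.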
sapinb
reviewed
Feb 2, 2026
Collaborator
@Zk2u In the future, please prefer smaller atomic commits rather than a single giant one.
Add tests to verify that invalid curve points and scalars are properly rejected during deserialization with validation enabled. This covers: - Corrupted, truncated, and empty data for points - Out-of-range values and malformed data for scalars Addresses PR review feedback requesting coverage for invalid input handling.
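The negative tests described above can be sketched against a toy curve. Everything here (the tiny field F_101, the 16-byte encoding, the type names) is illustrative only, not the PR's actual test code.

```rust
// Toy short-Weierstrass curve y^2 = x^3 + 7 over F_101, with a naive
// (x, y) little-endian encoding, used to sketch the rejection tests.
const P: u64 = 101;

#[derive(Debug, PartialEq)]
struct Point {
    x: u64,
    y: u64,
}

#[derive(Debug, PartialEq)]
enum DeserError {
    Empty,
    Truncated,   // wrong length (truncated or oversized)
    InvalidData, // out-of-range coordinate or point not on the curve
}

fn is_on_curve(x: u64, y: u64) -> bool {
    (y * y) % P == (x * x % P * x + 7) % P
}

fn serialize_point(p: &Point) -> Vec<u8> {
    let mut out = Vec::with_capacity(16);
    out.extend_from_slice(&p.x.to_le_bytes());
    out.extend_from_slice(&p.y.to_le_bytes());
    out
}

fn deserialize_point(bytes: &[u8]) -> Result<Point, DeserError> {
    if bytes.is_empty() {
        return Err(DeserError::Empty);
    }
    if bytes.len() != 16 {
        return Err(DeserError::Truncated);
    }
    let x = u64::from_le_bytes(bytes[..8].try_into().unwrap());
    let y = u64::from_le_bytes(bytes[8..].try_into().unwrap());
    // Reject out-of-range coordinates rather than reducing them mod P,
    // then reject well-formed coordinates that are not on the curve.
    if x >= P || y >= P || !is_on_curve(x, y) {
        return Err(DeserError::InvalidData);
    }
    Ok(Point { x, y })
}
```

The tests then assert a round-trip for a valid point and an error for each malformed class: empty input, wrong length, and corrupted (out-of-range or off-curve) coordinates.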
Replace the ad-hoc example with a proper Criterion benchmark that provides: - Statistical analysis with multiple iterations - Throughput measurements (bytes/sec) - HTML reports in target/criterion/ - Comparison between compressed/uncompressed modes Addresses PR feedback about unnecessary print statements in benchmarks.
AaronFeickert
requested changes
Feb 3, 2026
Collaborator
AaronFeickert
left a comment
I still think supporting unvalidated deserialization is a very bad idea, and we should remove it.
During deserialization, `ark-serialize` rejects values >= the field order with an `InvalidData` error; it does not reduce them mod the field order as the comment incorrectly stated.
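The distinction matters because the two behaviors disagree on non-canonical encodings. A toy illustration, using the 64-bit Goldilocks prime as a stand-in modulus (this is not ark-serialize's implementation):

```rust
// Strict rejection vs. silent reduction of out-of-range scalar encodings.
const ORDER: u64 = 0xFFFF_FFFF_0000_0001; // 2^64 - 2^32 + 1 (Goldilocks prime)

/// Strict behavior: encodings of values >= ORDER are rejected as invalid data.
fn deserialize_strict(bytes: [u8; 8]) -> Result<u64, &'static str> {
    let v = u64::from_le_bytes(bytes);
    if v >= ORDER {
        Err("InvalidData")
    } else {
        Ok(v)
    }
}

/// The behavior the stale comment described: silently reduce mod ORDER.
/// This makes distinct byte strings decode to the same scalar.
fn deserialize_reducing(bytes: [u8; 8]) -> u64 {
    u64::from_le_bytes(bytes) % ORDER
}
```

Under reduction, the encoding of `ORDER` collides with the encoding of `0`, breaking canonical-encoding uniqueness; strict rejection preserves it.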
AaronFeickert
approved these changes
Feb 5, 2026
Description
This PR introduces chunk-based message serialization for the CaC protocol to handle large cryptographic data structures efficiently. The changes address two critical constraints: the 4 MiB network frame limit and LLVM optimization issues with large fixed-size arrays.
Key Changes
New `mosaic-heap-array` crate
- Heap-allocated fixed-size arrays backed by `Vec` to avoid stack allocation of large arrays (e.g. `[PolynomialCommitment; 256]`)
- Implements the `ark-serialize` traits for seamless integration

Chunk-based message types
- `CommitMsgChunk`: 172 chunks (1 per input wire), ~2.76 MB uncompressed each
- `ChallengeResponseMsgChunk`: 174 chunks (1 per opened circuit), ~1.68 MB each
- `AdaptorMsgChunk`: 4 chunks (1 per deposit wire), ~1.6 MB uncompressed each
- `ChallengeMsg`: unchanged (fits in a single frame at ~1.4 KB)

Updated type aliases to use `HeapArray` for large arrays: `WideLabelWirePolynomialCommitments`, `WideLabelWireShares`, `CircuitInputShares`, `ChallengeIndices`, `WideLabelWireAdaptors`, `AdaptorMsgChunkWithdrawals`

Testing & benchmarks
Project organization
- Net crates moved under the `crates/net/` directory

Type of Change
Notes to Reviewers

The protocol state machine (`crates/cac/protocol`) still references the old monolithic message types and needs to be updated in a separate PR to use the new chunk-based message flow. This is intentionally left for @sapinb to handle, as it involves state machine logic changes.

Why chunks?
The monolithic messages (e.g. `CommitMsg` with 172 wires × 256 polynomial commitments × 174 curve points) exceed the network frame limit.

Serialization strategy: We use uncompressed serialization (`Compress::No`) to reduce computation cost. While compressed is smaller on the wire, decompressing curve points requires solving `y² = x³ + 7`, which is computationally expensive. Uncompressed serialization avoids this overhead at the cost of ~2x larger payloads (still within frame limits with chunking). Run `cargo run --example bench_serde -r -p mosaic-cac-types` to see the performance comparison.
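The chunking approach itself can be sketched in a few lines: split one oversized serialized message into frames that each respect the 4 MiB limit, then reassemble on the receiving side. This is an illustration of the idea, not the PR's actual chunk types, which carry structured per-wire and per-circuit data rather than raw byte slices.

```rust
// Minimal sketch of byte-level chunking against a fixed frame limit.
const FRAME_LIMIT: usize = 4 * 1024 * 1024; // 4 MiB network frame limit

/// Split a serialized message into slices of at most FRAME_LIMIT bytes.
fn chunk_message(bytes: &[u8]) -> Vec<&[u8]> {
    bytes.chunks(FRAME_LIMIT).collect()
}

/// Reassemble in-order chunks back into the original message.
fn reassemble(chunks: &[&[u8]]) -> Vec<u8> {
    chunks.concat()
}
```

Structured chunks (one per wire or per circuit, as in this PR) have the added benefit that each chunk is independently deserializable and verifiable, rather than being an opaque byte range of a larger message.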
Compress::No) to reduce computation cost. While compressed is smaller on the wire, decompressing curve points requires solvingy² = x³ + 7which is computationally expensive. Uncompressed avoids this overhead at the cost of ~2x larger payloads (still within frame limits with chunking). Runcargo run --example bench_serde -r -p mosaic-cac-typesto see the performance comparison.