docs: Add status and limitations section to README
- Add explicit experimental status
- Document known limitations (proof composition, security assumptions)
- Reference ZKTorch as alternative approach
- Clarify when to use this project vs alternatives
- Update implementation section with proof composition note
`README.md` (+46, −4)
Extension of [zkml](https://github.com/uiuc-kang-lab/zkml) for distributed proving using Ray, layer-wise partitioning, and Merkle trees.

> **⚠️ Status Note:** This is an experimental research project. For production zkML, consider [ZKTorch](https://github.com/uiuc-kang-lab/zktorch) (from the same research group), which uses proof accumulation/folding for parallelization. See [Status and Limitations](#status-and-limitations) for details.

## Completed Milestones

1. ~~**Make Merkle root public**: Add the root to public values so the next chunk can verify it~~ Done
2. ~~**Complete proof generation**: Connect chunk execution to actual proof generation ([#8](https://github.com/ray-project/distributed-zkml/issues/8))~~ Done
3. ~~**Ray-Rust integration**: Connect Python Ray workers to Rust proof generation ([#9](https://github.com/ray-project/distributed-zkml/issues/9))~~ Done
4. ~~**GPU acceleration**: ICICLE GPU backend for MSM operations ([#10](https://github.com/ray-project/distributed-zkml/issues/10))~~ Done; see [GPU Acceleration](#gpu-acceleration)

---

## Table of Contents

- [Status and Limitations](#status-and-limitations)
- [Overview](#overview)
- [Implementation](#implementation)
- [Requirements](#requirements)
---

## Status and Limitations

### Project Status

This project implements a **Ray-based distributed proving approach** for zkML. It is experimental research code and should be considered:

- **Research/Educational**: Useful for studying alternative approaches to zkML parallelization
- **Not Production-Ready**: Missing a formal security analysis and proof composition
- **Superseded**: The same research group (UIUC Kang Lab) has released [ZKTorch](https://github.com/uiuc-kang-lab/zktorch), which uses proof accumulation/folding for parallelization

### Known Limitations

1. **Proof Composition**: This implementation generates a separate proof per chunk; it does not implement recursive proof composition or aggregation. Verifiers must check O(n) proofs rather than O(1), limiting succinctness.

2. **Security Assumptions**: The distributed trust model (Ray workers) has not been formally analyzed. In particular, this project does not address:
   - Malicious worker resistance
   - Collusion resistance
   - Byzantine fault tolerance
3. **Scalability**: No published benchmarks compare distributed vs. single-node performance. The approach also inherits the base zkml limitations (a ceiling of roughly 30-80M parameters for halo2-based circuits).
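The O(n) verification cost in limitation 1 can be made concrete with a small sketch. This is not the project's actual verifier; all names are hypothetical and the halo2 verifier call is stubbed out. The point it illustrates: each chunk proof is checked individually, and consecutive chunks are linked by matching Merkle roots over the intermediate activations.

```python
from dataclasses import dataclass


@dataclass
class ChunkProof:
    proof: bytes    # halo2 proof bytes for this chunk (placeholder here)
    in_root: bytes  # Merkle root of the chunk's input activations
    out_root: bytes # Merkle root of the chunk's output activations


def verify_chunk(cp: ChunkProof) -> bool:
    """Stub for the real per-chunk halo2 verifier call."""
    return len(cp.proof) > 0


def verify_all(chunks: list[ChunkProof]) -> bool:
    # O(n): every chunk proof is checked individually...
    for i, cp in enumerate(chunks):
        if not verify_chunk(cp):
            return False
        # ...and consecutive chunks must agree on the intermediate root,
        # otherwise a worker could splice in an unrelated computation.
        if i > 0 and chunks[i - 1].out_root != cp.in_root:
            return False
    return True
```

An aggregation or folding scheme (as in ZKTorch) would replace this loop with a single succinct check.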
### When to Use This

**Consider this project if you:**

- Are researching alternative zkML parallelization approaches
- Need examples of Ray integration for cryptographic workloads
- Are studying Merkle-based privacy for intermediate computations
- Are building distributed halo2 proving (not zkML-specific)

**Use alternatives instead if you:**

- Need production-ready zkML → use [ZKTorch](https://github.com/uiuc-kang-lab/zktorch) or [EZKL](https://github.com/zkonduit/ezkl)
- Require formal security guarantees → use frameworks with proven proof composition
- Need state-of-the-art performance → ZKTorch reports ~10-minute GPT-2 proofs vs. roughly an hour for base zkml

---

## Overview

This repository extends zkml (see the [ZKML paper](https://ddkang.github.io/papers/2024/zkml-eurosys.pdf)) with distributed proving capabilities. zkml provides an optimizing compiler from TensorFlow to halo2 ZK-SNARK circuits.
distributed-zkml adds:

3. **Merkle Commitments**: Hash intermediate outputs with Poseidon; only the root is public
4. **On-Chain**: Publish only the Merkle root (O(1) public values vs. O(n) without)

**Note**: Each chunk produces a separate proof. This implementation does not aggregate proofs into a single succinct proof, so verifiers must check all chunk proofs individually (O(n) verification time). For single-proof aggregation, see [ZKTorch](https://github.com/uiuc-kang-lab/zktorch)'s accumulation-based approach.

```
Model: 9 layers -> 3 chunks
Chunk 1: Layers 0-2 -> GPU 1 -> Hash A
```
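The partition-and-commit flow above can be sketched in plain Python. This is a minimal illustration, not the project's actual API: the helper names are hypothetical, and SHA-256 stands in for the in-circuit Poseidon hash so the example stays dependency-free.

```python
import hashlib


def h(data: bytes) -> bytes:
    # Stand-in for the Poseidon hash used inside the circuit.
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a binary Merkle tree over hashed leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def partition(layers: list[str], n_chunks: int) -> list[list[str]]:
    """Split layers into contiguous groups of ceil(len/n_chunks) layers."""
    size = -(-len(layers) // n_chunks)  # ceiling division
    return [layers[i:i + size] for i in range(0, len(layers), size)]


layers = [f"layer_{i}" for i in range(9)]
chunks = partition(layers, 3)  # three contiguous groups of three layers

# Each worker would prove its own chunk and commit to that chunk's output
# activations; only the Merkle root becomes a public value (O(1) on-chain),
# while the activations themselves stay private.
fake_outputs = [name.encode() for name in chunks[0]]
root = merkle_root(fake_outputs)
```

In the real system each chunk's proof would be generated on a separate Ray worker, with the roots chaining consecutive chunks together as described above.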
Runs on PRs to `main`/`dev`: builds zkml, runs tests (~3-4 min). GPU tests e…