This repository contains benchmarks of zero-knowledge proving systems and zkVMs. The goal is to continuously map the landscape of proving systems that are suitable for client-side environments.
All benchmark results are published on ethproofs.org quarterly, where they are rendered into a user-friendly comparison interface.
Client-side proving (CSP) is emerging as a requirement for privacy-preserving applications like anonymous credentials, private transfers, and more. Many systems claim to be “CSP-friendly,” but their real performance varies dramatically across four important dimensions:
- proving time
- peak memory
- proof size
- preprocessing size
This repository provides a unified benchmarking harness for comparing proving systems under identical conditions.
We aim to benchmark the canonical circuits that correspond to typical use cases or represent common bottlenecks in client-side ZK proving, for example SHA-256, ECDSA, and Keccak. The planned benchmarking scope can be found in this spreadsheet.
As of Q3 2025, we run benchmarks quarterly on a dedicated AWS mac2.metal host which has the following specifications:
- Apple M1 CPU with 8 cores (arm64 architecture)
- 16 GB RAM
This hardware is among the cloud options closest to a client-side device (it is comparable to an iPhone chip and has neither CUDA nor AVX).
Our future plans include running the benchmarks in a cloud device farm (Android and iOS mobile devices).
- Install the prerequisites
  - Install Rust via `rustup` together with the nightly toolchains used in CI: `nightly-2025-08-18-aarch64-apple-darwin` (default) plus `nightly-2025-04-06` for crates such as `nexus` and `cairo-m`. Add the `llvm-tools`, `rustc-dev`, `rustfmt`, and `clippy` components so `cargo bench` matches the workflow in `.github/workflows/rust_benchmarks_parallel.yml`.
  - Ensure `cargo`, `cmake`, and a recent `clang`/`lld` are available (the helper actions under `.github/actions/install-llvm` show the expected setup). Install the Homebrew packages `bash`, `jq`, and `hyperfine`, plus `/usr/bin/time` for RAM measurements.
  - Install per-system toolchains as needed: OpenMPI for `polyhedra-expander`, the Ligero prover stack for `ligetron`, Noir >= 1.0.0-beta.13 for `barretenberg`, and vendor SDKs such as RISC Zero, SP1, or OpenVM. Each folder documents its own bootstrap script, and the matching GitHub Actions (`install-risc0`, `install-sp1`, `install-openvm`, etc.) can be used as a reference.
- Run the benchmarks
  - Build the workspace once: `cargo build --release --workspace`.
  - For Rust crates, run `BENCH_INPUT_PROFILE=full cargo bench`.
  - Alternatively, to run individual benchmarks, `cd` into the crate directory and run `BENCH_INPUT_PROFILE=full cargo bench`.
  - Use `BENCH_INPUT_PROFILE=reduced` to run with a reduced set of input sizes.
  - For non-Rust systems, build the utilities crate (`cargo build --release -p utils`) and invoke `benchmark.sh`, e.g. `BENCH_INPUT_PROFILE=full bash ./benchmark.sh --system-dir ./barretenberg --logging --quick` (see `sh_benchmarks_parallel.yml`).
- `utils/` – shared Rust crate that defines the benchmark harness, metadata about input sizes, common zkVM traits, and helper binaries (`utils`, `collect_benchmarks`, `format_hyperfine`).
- `mobile/` – mobile benchmarks for Android and iOS.
- `benchmark.sh` / `measure_mem_avg.sh` – orchestration scripts for non-Rust systems and RAM measurement.
- `results/` – storage for published benchmark results.
- Rust proving system and zkVM crates such as `binius64/`, `plonky2/`, `polyhedra-expander/`, `provekit/`, etc., each exposing a Criterion bench target registered through the shared harness.
- Non-Rust proving system folders (`barretenberg/`, `ligetron/`, etc.) that contain the shell scripts required by `benchmark.sh`.
- Every benchmark run produces `{target}_{input}_{system}_[optional_feature]_metrics.json`, following the schema implemented in `utils::bench::Metrics`: name, feature tag, target, input size, prove/verify wall-clock durations, optional execution cycles (for zkVMs), proof and preprocessing sizes, constraint counts, peak memory, and the descriptive `BenchProperties` block (classification, security level, audit status, ISA, etc.).
- Peak memory is captured separately via `{target}_{input}_{system}_[optional_feature]_mem_report.json`, which stores the average of 10 `/usr/bin/time` samples gathered by `measure_mem_avg.sh`. Non-Rust systems also emit `{target}_{input}_sizes.json` for proof/preprocessing sizes and update a shared `circuit_sizes.json` keyed by target and input size.
- For non-Rust systems, raw `hyperfine_{target}_{input}_*.json` files are post-processed by the `format_hyperfine` binary so their timing data can be merged with the size, RAM, and constraint metadata.
- When running in GitHub Actions, aggregated outputs are checked into `results/` and uploaded to ethproofs.org.
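To illustrate how these artifacts fit together, the sketch below merges a metrics record with its memory report. The field names and values here are hypothetical examples shaped after the schema described above; the real JSON layout is defined by `utils::bench::Metrics`.

```python
import json

# Hypothetical contents of a {target}_{input}_{system}_metrics.json file
# (field names are illustrative; the real schema lives in utils::bench::Metrics).
metrics = {
    "name": "sha256",
    "target": "sha256",
    "input_size": 2048,
    "prove_time_ms": 1234.5,
    "verify_time_ms": 8.2,
    "proof_size_bytes": 45000,
}

# Hypothetical {target}_{input}_{system}_mem_report.json: ten /usr/bin/time
# samples, of which the harness reports the average.
samples_mb = [512, 520, 498, 505, 515, 510, 507, 499, 513, 511]
mem_report = {"samples_mb": samples_mb, "avg_mb": sum(samples_mb) / len(samples_mb)}

# Fold the averaged peak-memory figure into the metrics record, as happens
# before results are collected into results/.
metrics["peak_memory_mb"] = mem_report["avg_mb"]
print(json.dumps(metrics, indent=2))
```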
- Rust benchmarks register with the `utils::define_benchmark_harness!` macro (see `CONTRIBUTING.md`). The harness iterates over the canonical input sizes defined in `utils::metadata`, executes Criterion benches for prove and verify, records metrics, and invokes the dedicated memory binary.
- Non-Rust systems achieve the same by orchestrating the `{target}_prepare.sh`, `{target}_prove.sh`, `{target}_verify.sh`, and `{target}_measure.sh` scripts in each system folder via `benchmark.sh`.
- Bench runs are parameterized by the `BENCH_INPUT_PROFILE` environment variable (`full` for the full range of input sizes, `reduced` for PR/local smoke tests).
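The profile switch can be pictured as follows. This is a minimal sketch: the concrete input sizes and the fallback default are hypothetical; the canonical sizes live in `utils::metadata`.

```python
import os

# Hypothetical per-profile input sizes; the canonical values are defined
# in utils::metadata in the repository.
INPUT_SIZES = {
    "full": [64, 256, 1024, 4096, 16384],
    "reduced": [64, 256],  # smoke-test subset for PRs / local runs
}

def select_input_sizes() -> list:
    """Pick input sizes from BENCH_INPUT_PROFILE (default assumed: reduced)."""
    profile = os.environ.get("BENCH_INPUT_PROFILE", "reduced")
    return INPUT_SIZES[profile]

os.environ["BENCH_INPUT_PROFILE"] = "full"
print(select_input_sizes())
```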
See CONTRIBUTING.md for detailed instructions on adding new Rust or non-Rust systems, registering benchmarks in the workspace, and supplying the metadata required by the shared harness and orchestrator.
https://pse.dev/projects/client-side-proving
We are thankful to
- zkID team for kickstarting the project
- EthProofs team for building the webpage and hosting the results
- EF Applied Cryptography Team for the unified zkVM interface library, `ere`
- Everyone who helped us refine the benchmarking methodology and the results