Refactor for zkvm (phase 1) #1651

Open · wants to merge 26 commits into `develop`
Conversation


@noel2004 noel2004 commented May 7, 2025

Design rationale

Following the plan for the proving-cluster refactor, we adopt the zkVM paradigm and refactor for a clear separation between "chain verification program code" and "proving code", making the project more modern, clean, and easy to maintain.

In this first phase, we mainly aim to achieve the following:

  • Put closely related code together: most crates in the zkvm-prover repo have been moved into the mono-repo. The moved code is based on commit 7247b42 of the master branch.

  • Refactor scroll-proving-sdk, a critical point for our next plan (the prover no longer relying on an l2geth node). In this first phase, we move l2geth access from scroll-proving-sdk into the prover, keeping most other things unchanged.
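The SDK refactor described above can be sketched as a pre-handling hook that the prover binary runs before proving, enriching chunk requests with chain data itself so the SDK stays free of any l2geth dependency. All names below (the `ProveRequest` fields, `RequestPreHandler`, `L2gethPreHandler`) are illustrative assumptions, not the actual crate API:

```rust
// Hypothetical sketch of the pre-handling hook; real type and field names
// in scroll-proving-sdk and the prover binary may differ.
struct ProveRequest {
    block_hashes: Vec<String>, // chunk tasks arrive with block hashes only
    witness: Option<String>,   // filled in by the pre-handler via l2geth RPC
}

trait RequestPreHandler {
    // Enrich a chunk request with chain data fetched from l2geth,
    // so the SDK itself no longer needs an l2geth connection.
    fn on_request(&self, req: ProveRequest) -> ProveRequest;
}

struct L2gethPreHandler; // would wrap an RPC client in the real prover

impl RequestPreHandler for L2gethPreHandler {
    fn on_request(&self, mut req: ProveRequest) -> ProveRequest {
        if req.witness.is_none() {
            // placeholder for: fetch block traces over RPC
            req.witness = Some(format!("witness for {} blocks", req.block_hashes.len()));
        }
        req
    }
}
```

With this split, only the prover binary needs an l2geth endpoint; the SDK sees fully populated requests.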

Progress:

  • zkvm-prover/circuits/types → scroll/common/types-rs
  • purge scroll-proving-sdk
  • scroll/zkvm-prover → scroll/zkvm-prover/bin
  • zkvm-prover/circuits → zkvm-circuits
  • zkvm-prover/prover → scroll/zkvm-prover
  • regression e2e tests
  • update dependencies of libzkp
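The last item above (updating libzkp's dependencies) amounts to repointing Cargo from the old external Git sources to the crates now living in the mono-repo. The snippet below is an illustrative sketch, not the actual manifest; crate names and relative paths are assumptions:

```toml
# common/libzkp/impl/Cargo.toml (illustrative sketch; real names/paths may differ)
[dependencies]
# previously fetched from the external zkvm-prover Git repository:
# scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", tag = "..." }
# now resolved against the crate moved into the mono-repo:
scroll-zkvm-prover = { path = "../../../zkvm-prover/prover" }
```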

Summary by CodeRabbit

  • New Features

    • Introduced a modular, workspace-based Rust project structure for Scroll zkVM prover, including new crates for chunk, batch, bundle circuits, and integration tests.
    • Added comprehensive configuration, build scripts, Dockerfile, and Makefiles for streamlined building, testing, and deployment.
    • Implemented new public APIs, traits, and data structures for circuit input types, cryptographic proofs, witness handling, and prover types.
    • Provided extensive integration tests, test data, and utilities for end-to-end proof generation and verification, including EVM-verifiable SNARK proofs.
    • Added detailed documentation and usage examples for developers.
  • Bug Fixes

    • Improved configuration management, including corrected placement of network settings in config files.
  • Documentation

    • Added thorough README files and in-code documentation across new modules and test data directories.
  • Style

    • Applied consistent code formatting and import organization, with new rustfmt configuration.
  • Chores

    • Added .gitignore files to exclude build artifacts and test outputs from version control.
    • Migrated and cleaned up dependency management in Cargo.toml files.
    • Reorganized imports and module declarations for clarity and maintainability.
  • Revert

    • Removed obsolete manifest files and redundant code from previous project structure.


Signed-off-by: noelwei <fan@scroll.io>

coderabbitai bot commented May 7, 2025

Walkthrough

This update introduces a comprehensive Rust-based workspace for Scroll zkVM prover circuits, including new workspace and package manifests, modularized circuit input types, batch/chunk/bundle circuit logic, and integration tests. It adds Docker and build scripts, configuration files, and extensive test data. The codebase is restructured for modularity, versioning, and feature flag support, with new error handling, commitment constants, and proof abstractions.

Changes

| File(s) / Path(s) | Change Summary |
| --- | --- |
| `.dockerignore`, `.gitignore`, `rust-toolchain`, `rustfmt.toml` | Added Docker, Rust toolchain, and formatting configuration files; set up ignore rules for build artifacts and toolchain. |
| `Cargo.toml`, `build/.cargo/config.toml` | Introduced a new Rust workspace manifest with explicit package members, dependencies, patches, and performance profile; added local dependency patching for Scroll-related crates. |
| `common/types-rs/**` | Added modular Rust crates for circuit input types (base, aggregation, batch, bundle, chunk), with detailed serialization, hashing, and validation logic for public inputs; included README and manifest files. |
| `common/types/message/bundle-proof-sample.json` | Added a sample JSON file representing a cryptographic bundle proof for testing/validation. |
| `zkvm-prover/Cargo.toml` | Deleted the previous monolithic prover package manifest. |
| `zkvm-prover/Dockerfile`, `zkvm-prover/build-guest-actions-entrypoint.sh`, `zkvm-prover/build-guest.sh` | Added Dockerfile and build scripts for containerized guest program builds and artifact extraction; automated build pipeline. |
| `zkvm-prover/build/**` | Added auto-generated Rust commitment constants, circuit configuration TOML files, and `.gitignore` rules for build outputs. |
| `zkvm-prover/bin/**` | Introduced a new binary crate for the prover, with Makefile, manifest, and main logic; added request preprocessing for chunk proofs and modularized prover logic. |
| `zkvm-prover/conf/config.json` | Moved the "l2geth" network configuration to the top level of the JSON config. |
| `zkvm-prover/integration/**` | Added integration test harness, utilities, and test data for chunk, batch, and bundle circuits; included test scripts, test witnesses, proof/result caching, and a Solidity verifier contract for EVM tests. |
| `zkvm-prover/prover/**` | Added a new prover crate with modular error handling, proof abstractions, commitment constants, batch prover type, and public API surface for proof generation and verification. |
| `common/libzkp/impl/Cargo.toml`, `common/libzkp/impl/src/verifier/euclid.rs`, `euclidv2.rs` | Updated dependency sources to use new workspace crates for prover/verifier logic; removed old patching and Git-based dependencies. |
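The `zkvm-prover/conf/config.json` row above refers to hoisting the "l2geth" section to the top level of the file. A minimal sketch of the corrected shape; the endpoint value and sibling keys are placeholders, not the real config contents:

```json
{
  "l2geth": {
    "endpoint": "http://localhost:8545"
  },
  "prover": {}
}
```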

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ProverBin
    participant LocalProver
    participant RequestPreHandler
    participant RPCProvider
    participant ProverCore
    participant ProofGen
    participant Verifier

    User->>ProverBin: Run prover binary with config
    ProverBin->>LocalProver: Instantiate with config
    LocalProver->>RequestPreHandler: (If needed) Create from rpc_config
    LocalProver->>RequestPreHandler: on_request(ProveRequest)
    RequestPreHandler->>RPCProvider: Fetch chain data (for chunk proofs)
    RequestPreHandler->>LocalProver: Return processed ProveRequest
    LocalProver->>ProverCore: Prove(processed ProveRequest)
    ProverCore->>ProofGen: Generate proof
    ProofGen-->>ProverCore: Proof
    ProverCore-->>LocalProver: Proof
    LocalProver-->>ProverBin: Proof
    ProverBin->>Verifier: (Optional) Verify proof
    Verifier-->>ProverBin: Verification result

Poem

🐇
In the warren of code, new circuits arise,
With hashes and proofs, and tests that surprise.
Docker builds burrow deep,
Commitments we keep,
Integration hops on, with data to sweep.
Scroll’s zkVM garden,
Now blooms with delight—
Hooray for the circuits,
That verify right!
🥕


@noel2004 noel2004 changed the title Refactor for zkvm Refactor for zkvm (phase 1) May 7, 2025
@noel2004 noel2004 requested a review from roynalnaruto May 7, 2025 02:41
noel2004 added 15 commits May 8, 2025 10:05
Signed-off-by: noelwei <fan@scroll.io>
+ fmt
+ trivial updates
@noel2004 noel2004 requested a review from colinlyguo May 16, 2025 03:48
@noel2004 noel2004 marked this pull request as ready for review May 16, 2025 03:53

codecov-commenter commented May 16, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 40.87%. Comparing base (fedfa04) to head (1464d10).

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1651      +/-   ##
===========================================
- Coverage    40.91%   40.87%   -0.04%     
===========================================
  Files          225      225              
  Lines        18303    18303              
===========================================
- Hits          7488     7481       -7     
- Misses       10088    10100      +12     
+ Partials       727      722       -5     
| Flag | Coverage | Δ |
| --- | --- | --- |
| common | 29.86% <ø> | (ø) |
| coordinator | 34.26% <ø> | (-0.22%) ⬇️ |
| database | 42.05% <ø> | (ø) |
| rollup | 49.25% <ø> | (ø) |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 42

♻️ Duplicate comments (1)
zkvm-prover/integration/testdata/phase2/tasks/README.md (1)

1-7: Duplicate of phase1 README suggestions.
Phase 2’s README follows the same structure as phase 1. Please apply the identical grammar, heading-level, and terminology improvements recommended for phase1/tasks/README.md.


🧹 Nitpick comments (98)
rustfmt.toml (1)

1-1: Redundant comment_width when comments aren’t wrapped.
Since wrap_comments = false, the comment_width = 300 setting has no effect. Consider removing it to avoid confusion.

zkvm-prover/integration/testdata/flatten-trace.py (5)

1-2: Add script metadata and shebang

Include a shebang and module docstring at the top to make the script self-documenting and executable:

+#!/usr/bin/env python3
+"""Flatten signature and authorization fields in JSON test data for zkVM integration tests."""

4-8: Encapsulate file processing in main() and improve argument parsing

Rather than iterating over sys.argv directly and shadowing the built-in file, wrap the logic in a main() function, leverage argparse for clearer usage, and rename variables for clarity:

-import sys
-
-for f in sys.argv[1:]:
-    print("convert", f)
-    with open(f, 'r') as file:
-        data = json.load(file)
+import argparse
+
+def main():
+    parser = argparse.ArgumentParser(description="Flatten JSON test data for zkVM")
+    parser.add_argument("files", nargs="+", help="Paths to JSON files to process")
+    args = parser.parse_args()
+    for path in args.files:
+        print("convert", path)
+        with open(path, 'r') as fp:
+            data = json.load(fp)
+
+if __name__ == "__main__":
+    main()

This improves readability, error messages, and avoids shadowing built-ins.


9-14: Validate and handle missing signature fields

Calling pop() without checks can raise KeyError if any signature component is absent. Add validation or defaults to fail fast with a clear message:

required = ['r', 's', 'y_parity']
if not all(k in transaction for k in required):
    raise ValueError(f"Missing signature fields in transaction: {transaction}")
transaction['signature'] = {k: transaction.pop(k) for k in required}

15-21: Add error handling for authorization entries

Similarly, guard against malformed authorization objects to avoid unexpected crashes:

for auth in transaction.get("authorization_list", []):
    inner_keys = ['chain_id', 'address', 'nonce']
    if not all(k in auth for k in inner_keys):
        raise ValueError(f"Malformed authorization entry: {auth}")
    auth['inner'] = {k: auth.pop(k) for k in inner_keys}

24-26: Ensure consistent JSON output formatting

For predictable fixtures, include sorted keys and terminate the file with a newline:

-with open(f, 'w') as file:
-    json.dump(data, file, indent=2)
+with open(path, 'w') as fp:
+    json.dump(data, fp, indent=2, sort_keys=True)
+    fp.write('\n')
zkvm-prover/integration/testdata/phase2/proofs/README.md (3)

3-3: Fix markdown heading levels
The heading jumps from H1 (#) to H3 (###) at line 3; increment headings by only one level (H2).

- ### Some tests would try to generate the proofs it needed and cache them under this directory
+ ## Some tests would try to generate the proofs it needed and cache them under this directory


5-5: Fix markdown heading levels
Similarly, change the H3 heading at line 5 to H2 for proper hierarchy.

- ### Use `make clean-test-cache` to clean the cached proof files
+ ## Use `make clean-test-cache` to clean the cached proof files

1-5: Clarify cache location for first-time users
Consider adding a note about where the cache directory is relative to the repo root, so new contributors know exactly where proof files are stored and how they interact with the test suite.


zkvm-prover/integration/testdata/phase1/proofs/README.md (2)

3-3: Fix markdown heading levels
The heading jumps from H1 to H3 at line 3; use H2 for proper hierarchy.

- ### Some tests would try to generate the proofs it needed and cache them under this directory
+ ## Some tests would try to generate the proofs it needed and cache them under this directory

5-5: Fix markdown heading levels
Similarly, change the H3 heading at line 5 to H2.

- ### Use `make clean-test-cache` to clean the cached proof files
+ ## Use `make clean-test-cache` to clean the cached proof files
zkvm-prover/build-guest-actions-entrypoint.sh (1)

1-4: Enhance script robustness with strict mode
Add strict error handling flags and use a more portable shebang to fail fast on errors.

- #!/bin/bash
+ #!/usr/bin/env bash
+ set -euo pipefail

- # run crates/build-guest
- cargo run --release -p scroll-zkvm-build-guest
+ # run crates/build-guest
+ cargo run --release -p scroll-zkvm-build-guest
zkvm-prover/integration/testdata/phase1/tasks/README.md (3)

1-1: Improve header clarity and grammar.
The top-level instruction is missing an article and could use a stronger verb.
Apply this diff:

-# Put task file here to execute some tests
+# Place a task file here to execute the integration tests

3-3: Adjust heading level and wording.
Use a second-level heading and include the article “a” before “batch task”.

-### Following tests require batch task (naming as `batch-task.json`)
+## The following tests require a batch task (named `batch-task.json`)


7-7: Fix article usage and terminology.
Clarify “output dir” and correct the article before “e2e test”:

-A task file can be generated into the output dir while a e2e test has completed
+A task file can be generated in the output directory once an end-to-end (e2e) test completes

common/types-rs/README.md (1)

5-6: Fix grammar, style, and typo in the description
The second paragraph has subject-verb agreement issues, wordiness, and a typo ("tootlchain"). Consider merging into a single, clearer sentence:

-This crate help decoupling circuits with other crates and keep their dependencies neat and controllable. Avoiding to involve crates which is not compatible with the tootlchain of openvm from indirect dependency.
+This crate helps decouple circuits from other crates and keeps their dependencies neat and controllable, avoiding indirect dependencies on crates incompatible with the OpenVM toolchain.

zkvm-prover/build/bundle-circuit/openvm.toml (1)

7-18: Document empty VM module sections
Each [app_vm_config.<module>] table is currently empty. To improve maintainability, consider adding comments or placeholders (e.g. # defaults applied) to clarify why these modules have no parameters and whether custom configs will be added in future phases.

common/types-rs/aggregation/Cargo.toml (1)

1-14: Explicit features & workspace integration

  • Add an explicit [features] section (e.g., default = []) to make it clear that there are no optional features.
  • Verify that this new crate is included in the root workspace’s members and that any local-path overrides (in build/.cargo/config.toml) cover it.
zkvm-prover/bin/src/zk_circuits_handler/euclid.rs (1)

4-9: Verify dependency declarations and import consistency

  • Confirm that the newly referenced crates (anyhow and scroll_proving_sdk) are declared in this crate’s Cargo.toml and that the import paths (scroll_proving_sdk::prover::ProveRequest) match the published package.
  • Align import grouping with project conventions (std → external → local).
common/types-rs/base/Cargo.toml (1)

20-21: Document empty default features
The explicit default = [] is good practice. Consider adding a brief comment to explain that all features must be opted-in, preventing unintended dependencies.

zkvm-prover/bin/src/prover.rs (2)

153-156: Consider simplifying the pre_handler initialization.

The lazy initialization pattern is correct, but using replace on an Option known to be None is unnecessary.

-        if self.pre_handler.is_none() {
-            self.pre_handler
-                .replace(RequestPreHandler::create(&self.config.rpc_config)?);
-        }
+        if self.pre_handler.is_none() {
+            self.pre_handler = Some(RequestPreHandler::create(&self.config.rpc_config)?);
+        }

153-156: Consider handling pre_handler initialization failures more robustly.

Currently, if initialization fails, the error is returned from the current call, but future calls will retry initialization. This might lead to repeated connection attempts if there's a persistent issue with the RPC endpoint.

Consider implementing a more comprehensive error handling strategy, such as:

  1. Adding a retry limit
  2. Implementing a backoff mechanism
  3. Adding a way to reset the initialization state after certain errors
+        use std::sync::atomic::{AtomicU32, Ordering};
+
+        const MAX_INIT_ATTEMPTS: u32 = 3;
+        static INIT_ATTEMPTS: AtomicU32 = AtomicU32::new(0);
+
         if self.pre_handler.is_none() {
+            if INIT_ATTEMPTS.fetch_add(1, Ordering::Relaxed) >= MAX_INIT_ATTEMPTS {
+                return Err(anyhow!(
+                    "failed to initialize pre_handler after {} attempts",
+                    MAX_INIT_ATTEMPTS
+                ));
+            }
             self.pre_handler = Some(RequestPreHandler::create(&self.config.rpc_config)?);
+            INIT_ATTEMPTS.store(0, Ordering::Relaxed); // reset on success
         }

Note: The static atomic counter is just one approach. A more idiomatic alternative is to add an initialization-attempt field to the LocalProver struct.

zkvm-prover/download-release.sh (1)

9-24: Pre-create directories and add retries to downloads
Ensure destination directories exist before writing files and make wget more resilient to transient network issues:

+ mkdir -p crates/circuits/{chunk-circuit,batch-circuit,bundle-circuit}/openvm \
+          crates/verifier/testdata
  # chunk-circuit exe
- wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/chunk/app.vmexe \
-      -O crates/circuits/chunk-circuit/openvm/app.vmexe
+ wget --retry-connrefused --timeout=30 --tries=3 \
+      https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/chunk/app.vmexe \
+      -O crates/circuits/chunk-circuit/openvm/app.vmexe

Apply similar --retry-connrefused --timeout=30 --tries=3 flags to the other wget commands.

zkvm-prover/Dockerfile (1)

3-4: Combine Rustup commands to reduce image layers
Merge the two rustup invocations:

RUN rustup toolchain install nightly-2025-02-14-x86_64-unknown-linux-gnu && \
    rustup component add rust-src --toolchain nightly-2025-02-14-x86_64-unknown-linux-gnu
common/types-rs/chunk/src/public_inputs.rs (2)

14-15: Consider adding error handling for base_fee_per_gas

The .expect("base_fee_expected") call will panic if the base fee is not available. In production code, you might want to handle this more gracefully, perhaps by providing a default value or propagating the error.


15-24: Consider bounds checking for transaction counts

The conversions to u16 for num_txs and num_l1_msgs will panic if the counts exceed u16::MAX (65535). While this is unlikely in practice, you might want to add a safeguard to handle large transaction counts.

-            num_txs: u16::try_from(value.body().transactions.len()).expect("num txs u16"),
+            num_txs: u16::try_from(value.body().transactions.len()).unwrap_or(u16::MAX),
-            num_l1_msgs: u16::try_from(
-                value
-                    .body()
-                    .transactions
-                    .iter()
-                    .filter(|tx| tx.is_l1_message())
-                    .count(),
-            )
-            .expect("num l1 msgs u16"),
+            num_l1_msgs: u16::try_from(
+                value
+                    .body()
+                    .transactions
+                    .iter()
+                    .filter(|tx| tx.is_l1_message())
+                    .count(),
+            )
+            .unwrap_or(u16::MAX),
zkvm-prover/integration/testdata/phase2/witnesses/1.json (1)

31-32: Consider documenting the contract bytecode

The input field contains a large bytecode string. For better maintainability, consider adding a comment explaining what this contract does or referencing documentation that describes its purpose in the test.

zkvm-prover/bin/src/types.rs (1)

16-18: Consider adding documentation comments

While the field names are descriptive, adding documentation comments (`///`) would improve clarity about the purpose and format of each field, especially for new developers working with this code.

 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct ChunkTaskDetail {
+    /// List of block hashes to be included in the chunk
     pub block_hashes: Vec<CommonHash>,
+    /// Hash of the previous message queue
     pub prev_msg_queue_hash: CommonHash,
+    /// Name of the fork to use for processing (e.g., "EuclidFork", "EuclidV2Fork")
     pub fork_name: String,
 }
zkvm-prover/integration/testdata/phase2/witnesses/5.json (1)

74-76: Consider adding documentation for code content

The codes array contains a single entry that appears to be a bytecode with some prefix. Consider adding a comment explaining what this code represents and its purpose in the test.

common/types-rs/batch/src/payload/mod.rs (1)

10-16: Consider adding static assertions for limits
It may be helpful to include a compile-time check (e.g., via const_assert!) to ensure N_DATA_BYTES_PER_COEFFICIENT <= 32 and that N_BLOB_BYTES does not overflow usize. This guards against accidental future changes breaking the invariants.
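On stable Rust such a compile-time guard needs no external crate; `const _: () = assert!(...)` fails the build when the invariant is violated. The constant values below are placeholders for illustration, not the crate's actual values:

```rust
// Placeholder values for illustration; the real constants live in
// common/types-rs/batch/src/payload/mod.rs.
const N_DATA_BYTES_PER_COEFFICIENT: usize = 31;
const N_COEFFICIENTS: usize = 4096;
const N_BLOB_BYTES: usize = N_DATA_BYTES_PER_COEFFICIENT * N_COEFFICIENTS;

// Compile-time checks: the build fails if either invariant is violated.
const _: () = assert!(N_DATA_BYTES_PER_COEFFICIENT <= 32);
const _: () = assert!(N_BLOB_BYTES / N_COEFFICIENTS == N_DATA_BYTES_PER_COEFFICIENT);
```

This achieves the same effect as `const_assert!` from the `static_assertions` crate without adding a dependency.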

zkvm-prover/build/batch-circuit/openvm.toml (1)

17-21: Validate the large prime moduli used in cryptographic operations

The supported moduli are large prime numbers that appear to be for cryptographic operations. The first one appears to be for BLS12-381 curve and the second looks like a scalar field element.

Consider adding comments to document what each modulus is used for. This will improve maintainability when revisiting the code later.

[app_vm_config.modular]
supported_modulus = [
+    # BLS12-381 base field modulus
    "4002409555221667393417789825735904156556882819939007885332058136124031650490837864442687629129015664037894272559787",
+    # BLS12-381 scalar field modulus
    "52435875175126190479447740508185965837690552500527637822603658699938581184513",
]
common/types-rs/batch/src/header/mod.rs (2)

3-5: Consider adding version info to module docs

The code defines two version modules (v6 and v7) but doesn't document their differences.

Consider adding module-level documentation that briefly explains the differences between v6 and v7 implementations to improve maintainability.


21-30: Good use of enums and serialization attributes

The ReferenceHeader enum with rkyv serialization support is well-designed for versioning. The code properly uses derive macros and includes detailed comments.

Consider adding a method to convert from ReferenceHeader to an implementation of BatchHeader to make it easier to use the trait methods on the enum.

impl ReferenceHeader {
    pub fn as_batch_header(&self) -> &dyn BatchHeader {
        match self {
            Self::V6(header) => header,
            Self::V7(header) => header,
        }
    }
}
zkvm-prover/integration/testdata/phase2/witnesses/8.json (1)

63-65: Consider adding more test code examples for coverage

Only one code example is included. For comprehensive testing, consider adding more varied contract code examples.

More diverse code examples would help test edge cases in your code execution verification.

zkvm-prover/integration/testdata/phase1/witnesses/12508460.json (1)

22-41: Transaction data represents a simple value transfer

The test transaction is a standard value transfer (not a contract interaction). Consider adding test cases with contract interactions for more comprehensive coverage.

Adding test cases with contract creation, contract calls, and interactions with various opcodes would provide better coverage for your zkVM implementation.

zkvm-prover/integration/src/testers/batch.rs (5)

48-48: Remove debug print statement.

This debug println! should be removed as it's not necessary in production code and could clutter logs during test runs.

-    println!("proof_path: {:?}", proof_path);

13-30: Add documentation for the BatchProverTester struct.

The struct lacks documentation explaining its purpose and how it fits into the testing framework. Adding docstrings would improve code maintainability.

+/// A tester implementation for the batch prover that loads proving tasks from predefined test data files.
+/// This tester is used for integration tests that verify the batch circuit's proving and verification capabilities.
 pub struct BatchProverTester;

32-55: Add documentation for the BatchTaskBuildingTester struct.

Similar to the BatchProverTester, this struct would benefit from documentation explaining its purpose and relationship to other testers.

+/// A tester implementation for the batch prover that dynamically builds proving tasks
+/// from chunk tasks and proofs. This allows testing the batch circuit with freshly generated inputs.
 pub struct BatchTaskBuildingTester;

22-29: Consider enhanced error handling.

The current implementation simply propagates errors with ?. Consider adding more context to errors for easier debugging.

     fn gen_proving_task() -> eyre::Result<<Self::Prover as ProverType>::ProvingTask> {
-        Ok(read_json_deep(
-            Path::new(PATH_TESTDATA)
-                .join(phase_base_directory())
-                .join("tasks")
-                .join("batch-task.json"),
-        )?)
+        let task_path = Path::new(PATH_TESTDATA)
+            .join(phase_base_directory())
+            .join("tasks")
+            .join("batch-task.json");
+        
+        read_json_deep(&task_path)
+            .map_err(|e| eyre::eyre!("Failed to read batch task from {}: {}", task_path.display(), e))
     }

57-64: Enhance the test function.

The current test only verifies that parsing works, but doesn't assert anything about the parsed content. Consider adding more assertions to validate the task's properties.

 #[test]
 fn batch_task_parsing() {
     use scroll_zkvm_prover::task::ProvingTask;
 
     let task = BatchProverTester::gen_proving_task().unwrap();
 
-    let _ = task.build_guest_input().unwrap();
+    let guest_input = task.build_guest_input().unwrap();
+    
+    // Add assertions to verify the content of guest_input
+    assert!(!guest_input.is_empty(), "Guest input should not be empty");
+    
+    // Verify task properties
+    let task_id = task.identifier();
+    assert!(!task_id.is_empty(), "Task identifier should not be empty");
 }
zkvm-prover/integration/Cargo.toml (3)

31-34: Remove or uncomment commented-out dependencies.

The file contains commented-out dependencies that should either be removed if not needed or uncommented if they're required.

 rayon = "1"
 chrono = "0.4"
-#ff = "0.13"
 glob = "0.3"
 once_cell = "1.20"
-#sha2 = "0.10"
 tracing-subscriber = "0.3"

29-29: Specify a more precise version for rayon.

For better reproducibility and to reduce the risk of unexpected changes, specify the full semver for rayon instead of just the major version. Note that "1.8.0" in Cargo is still a caret requirement (equivalent to ^1.8.0); use "=1.8.0" if an exact pin is required.

-rayon = "1"
+rayon = "1.8.0"

40-43: Document feature flags.

The purpose of the feature flags (euclidv2 and limit-logs) isn't documented. Adding comments would help users understand when to use these features.

 [features]
+# Enables Euclid V2 implementation
 default = ["euclidv2"]
 euclidv2 = []
+# Limits logging verbosity for performance-critical scenarios
 limit-logs = []
zkvm-prover/prover/src/commitments/mod.rs (1)

1-50: Clean organization of commitment constants with consistent module structure

The module provides a well-organized access point to various commitment constants used by different circuit types. The consistent structure across all public modules enhances maintainability.

Consider adding documentation to explain what these commitments represent and how they're used in the proving system, which would improve the code's self-documentation.

 mod batch_leaf_commit;
 mod batch_exe_commit;
 mod chunk_exe_commit;
 mod chunk_exe_rv32_commit;
 mod chunk_leaf_commit;
 mod bundle_leaf_commit;
 mod bundle_exe_commit;
 mod bundle_exe_euclidv1_commit;

+/// Commitment constants for batch circuit operations
 pub mod batch {
     use super::batch_exe_commit;
     use super::batch_leaf_commit;

+    /// Executable commitment for batch circuit
     pub const EXE_COMMIT: [u32; 8] = batch_exe_commit::COMMIT;
+    /// Leaf commitment for batch circuit
     pub const LEAF_COMMIT: [u32; 8] = batch_leaf_commit::COMMIT;
 }
common/types-rs/base/src/utils/hash.rs (3)

19-26: Add documentation for keccak256_rv32 function

While keccak256 has detailed documentation, keccak256_rv32 lacks any comments explaining its purpose or how it differs from the standard keccak256 function. Adding documentation would improve code maintainability.

+/// Computes the Keccak-256 hash of input bytes using the sha3 crate implementation.
+///
+/// This implementation is compatible with RISC-V 32-bit architecture.
 pub fn keccak256_rv32<T: AsRef<[u8]>>(bytes: T) -> B256 {
     use sha3::{Digest, Keccak256};
     let mut output = [0u8; 32];
     let mut hasher = Keccak256::new();
     hasher.update(bytes.as_ref());
     output.copy_from_slice(hasher.finalize().as_ref());
     B256::from(output)
 }

28-35: Add documentation for sha256_rv32 function

This function also lacks documentation explaining its purpose and RISC-V compatibility. Adding documentation would help developers understand when to use this function versus other hash functions.

+/// Computes the SHA-256 hash of input bytes using the sha2 crate implementation.
+///
+/// This implementation is compatible with RISC-V 32-bit architecture.
 pub fn sha256_rv32<T: AsRef<[u8]>>(bytes: T) -> B256 {
     use sha2::{Digest, Sha256};
     let mut output = [0u8; 32];
     let mut hasher = Sha256::new();
     hasher.update(bytes.as_ref());
     output.copy_from_slice(hasher.finalize().as_ref());
     B256::from(output)
 }

1-35: Consider explaining why different hash implementations are needed

The file includes three different hash functions using different libraries (tiny_keccak, sha3, and sha2). It would be helpful to add a module-level comment explaining why these different implementations are necessary and when each should be used.

+//! Cryptographic hash functions for circuit input types.
+//!
+//! This module provides multiple hash function implementations:
+//! - `keccak256`: Standard Keccak-256 implementation using tiny_keccak
+//! - `keccak256_rv32`: Keccak-256 implementation compatible with RISC-V 32-bit using sha3
+//! - `sha256_rv32`: SHA-256 implementation compatible with RISC-V 32-bit using sha2
+//!
+//! The different implementations are needed to ensure consistent hashing behavior
+//! between the host environment and the zkVM execution environment.
+
 use alloy_primitives::B256;
 use tiny_keccak::{Hasher, Keccak};
zkvm-prover/build-guest.sh (2)

41-65: Consider combining similar directory creation commands

The script creates multiple directories for openvm in separate commands. Consider consolidating these for clarity and efficiency.

 if [ -n "$(echo ${BUILD_STAGES} | grep stage3)" ]; then
   # copy exe commitments from container to local
+  mkdir -p build/chunk-circuit build/batch-circuit build/bundle-circuit
   for f in chunk-circuit/chunk_exe_commit.rs \
     chunk-circuit/chunk_exe_rv32_commit.rs \
     batch-circuit/batch_exe_commit.rs \
     bundle-circuit/bundle_exe_commit.rs \
     bundle-circuit/bundle_exe_euclidv1_commit.rs; do
     docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/${f} build/${f}
   done

   # copy digests from container to local
+  mkdir -p build/bundle-circuit
   docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/bundle-circuit/digest_1 build/bundle-circuit/digest_1
   docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/bundle-circuit/digest_1_euclidv1 build/bundle-circuit/digest_1_euclidv1

   # copy app.vmexe from container to local
-  mkdir -p build/chunk-circuit/openvm
-  mkdir -p build/batch-circuit/openvm
-  mkdir -p build/bundle-circuit/openvm
+  mkdir -p build/{chunk,batch,bundle}-circuit/openvm
   docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/chunk-circuit/openvm/app.vmexe build/chunk-circuit/openvm/app.vmexe
   docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/chunk-circuit/openvm/app_rv32.vmexe build/chunk-circuit/openvm/app_rv32.vmexe
   docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/batch-circuit/openvm/app.vmexe build/batch-circuit/openvm/app.vmexe
   docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/bundle-circuit/openvm/app.vmexe build/bundle-circuit/openvm/app.vmexe
   docker cp ${container_id}:/app/zkvm-circuits/crates/circuits/bundle-circuit/openvm/app_euclidv1.vmexe build/bundle-circuit/openvm/app_euclidv1.vmexe
 fi

67-69: Add error handling for cleanup operations

The script should handle potential errors during cleanup to ensure it completes successfully even if the container removal fails.

 # remove docker container
-docker rm ${container_id}
-rm ./build-guest.cid
+docker rm ${container_id} || echo "Warning: Failed to remove container ${container_id}"
+rm ./build-guest.cid || echo "Warning: Failed to remove container ID file"
common/types-rs/bundle/src/witness.rs (1)

16-30: Add validation for consistency between batch_proofs and batch_infos

The implementation correctly maps archived proofs to AggregationInput objects, but there's no validation that ensures batch_proofs and batch_infos have the same length and correspond to each other.

Consider adding validation either in a constructor or in methods that use both fields together:

impl BundleWitness {
    /// Create a new BundleWitness, ensuring batch_proofs and batch_infos have the same length
    pub fn new(batch_proofs: Vec<AggregationInput>, batch_infos: Vec<BatchInfo>) -> Result<Self, Error> {
        if batch_proofs.len() != batch_infos.len() {
            return Err(Error::InvalidInput("batch_proofs and batch_infos must have the same length"));
        }
        Ok(Self { batch_proofs, batch_infos })
    }
}
zkvm-prover/bin/Makefile (2)

19-19: Consider adding error handling for the Plonky3 GPU version script

Unlike the ZKVM_VERSION and GO_TAG, there's no error handling if the script fails or is missing.

-PLONKY3_GPU_VERSION=$(shell ./print_plonky3gpu_version.sh | sed -n '2p')
+PLONKY3_GPU_VERSION=$(shell if [ -f ./print_plonky3gpu_version.sh ]; then ./print_plonky3gpu_version.sh | sed -n '2p'; else echo ""; fi)

42-43: Consider optimizing the test binary creation process

The cargo clean command forces a full rebuild, which can be time-consuming. Also, the xargs command could fail silently if no files match the pattern. Note that make runs each recipe line in its own shell, so the multi-line check must use line continuations and $$ for shell variables.

-	cargo clean && cargo test --release --no-run
-	ls target/release/deps/prover* | grep -v "\.d" | xargs -I{} ln -sf {} ./prover.test
+	cargo test --release --no-run
+	found=$$(ls target/release/deps/prover* 2>/dev/null | grep -v "\.d" || true); \
+	if [ -n "$$found" ]; then \
+		echo "$$found" | xargs -I{} ln -sf {} ./prover.test; \
+	else \
+		echo "No test binaries found!"; \
+		exit 1; \
+	fi
common/types-rs/chunk/src/utils.rs (1)

34-48: Add documentation explaining the purpose of the manually_drop_on_zkvm macro

The macro has different implementations based on architecture, but there's no explanation of why this is necessary or what problems it solves.

+/// This macro conditionally wraps values in ManuallyDrop on specific architectures.
+/// 
+/// On non-x86/x86_64/aarch64 architectures (typically the zkVM environment), 
+/// it prevents automatic dropping of values which may be necessary for memory
+/// management in the zkVM context. On standard architectures, it passes values through
+/// unchanged for normal memory management behavior.
 #[macro_export]
 #[cfg(not(any(target_arch = "x86", target_arch = "x86_64", target_arch = "aarch64")))]
 macro_rules! manually_drop_on_zkvm {
common/types-rs/chunk/src/witness.rs (2)

18-25: Enhance documentation for ChunkWitness struct fields

While the struct has documentation, the explanation of each field's purpose and constraints could be more detailed.

 /// The witness type accepted by the chunk-circuit.
+/// 
+/// This structure contains all the necessary data for a chunk circuit to process
+/// and verify a series of blocks, including their states, transactions, and metadata.
 #[derive(
     Clone,
     Debug,
     serde::Deserialize,
     serde::Serialize,
     rkyv::Archive,
     rkyv::Deserialize,
     rkyv::Serialize,
 )]
 #[rkyv(derive(Debug))]
 pub struct ChunkWitness {
     /// The block witness for each block in the chunk.
+    /// These blocks contain the transactions, state changes, and blockchain metadata
+    /// that will be processed and verified by the circuit.
     pub blocks: Vec<BlockWitness>,
     /// The on-chain rolling L1 message queue hash before enqueueing any L1 msg tx from the chunk.
+    /// This hash represents the previous state of the message queue and is used for continuity
+    /// verification between chunks.
     pub prev_msg_queue_hash: B256,
     /// The code version specify the chain spec
+    /// This determines which set of rules and parameters will be used for processing the chunk.
     pub fork_name: ForkName,
 }

28-62: Consider memory optimization in the block deduplication process

The current implementation creates entirely new BlockWitness instances, which could be memory-intensive for large datasets. Consider a more memory-efficient approach.

The current implementation works correctly by filtering out duplicate states and codes across blocks. However, it creates full copies of all blocks and their data, which could be inefficient.

A possible optimization would be to implement a two-pass approach:

  1. First pass: identify unique states and codes
  2. Second pass: create the new blocks with references to the unique items

This would be more complex but could significantly reduce memory usage for large witnesses.
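
The deduplication can be sketched with a running seen-set — a single pass that achieves the same result as the two-pass scheme. Plain strings stand in for the real state/code types, which are assumptions for illustration:

```rust
use std::collections::HashSet;

/// Keeps only the first occurrence of each item across all blocks,
/// preserving block boundaries and per-block order.
fn dedup_across_blocks(blocks: &[Vec<String>]) -> Vec<Vec<String>> {
    // Records every item already emitted by an earlier block (or earlier
    // position in the same block); `insert` returns false on duplicates.
    let mut seen: HashSet<&str> = HashSet::new();
    blocks
        .iter()
        .map(|block| {
            block
                .iter()
                .filter(|item| seen.insert(item.as_str()))
                .cloned()
                .collect()
        })
        .collect()
}
```

For the real witness types the set would hold hashes of states/codes rather than owned copies, keeping the extra memory proportional to the number of unique items.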

zkvm-prover/prover/src/lib.rs (1)

1-3: Consider explaining the purpose of the suppressed warnings

The #[allow(dead_code)] and #[rustfmt::skip] attributes are applied to the commitments module without explanation. The module likely contains constants generated by build scripts or symlinked files that are not all used and do not follow the usual formatting, but a brief comment would make this design decision explicit.

+// Commitments module contains build-generated constants that may not all be used
+// and follows a specialized formatting, hence the attributes
 #[allow(dead_code)]
 #[rustfmt::skip]
 mod commitments;
zkvm-prover/README.md (6)

23-23: Fix grammatical error: missing preposition

The sentence is missing the preposition "to" after "refer".

-For a deeper dive into our implementation, please refer the [interfaces](./docs/interfaces.md) doc.
+For a deeper dive into our implementation, please refer to the [interfaces](./docs/interfaces.md) doc.
🧰 Tools
🪛 LanguageTool

[uncategorized] ~23-~23: Possible missing preposition found.
Context: ...e into our implementation, please refer the interfaces doc....

(AI_HYDRA_LEO_MISSING_TO)


27-27: Fix grammatical error: missing preposition and comma

The sentence is missing the preposition "to" after "refer" and could benefit from a comma after "commands".

-For more commands please refer the [Makefile](./Makefile).
+For more commands, please refer to the [Makefile](./Makefile).
🧰 Tools
🪛 LanguageTool

[typographical] ~27-~27: Consider adding a comma here.
Context: ....md) doc. ## Testing For more commands please refer the Makefile. ### ...

(PLEASE_COMMA)


[uncategorized] ~27-~27: Possible missing preposition found.
Context: ...Testing For more commands please refer the Makefile. ### Build Gues...

(AI_HYDRA_LEO_MISSING_TO)


37-37: Clarify the child commitments overwrite process

The statement about child commitments being overwritten could be clearer about why this happens and what it means for developers.

-Upon building the guest programs, the child commitments in [batch-circuit](./crates/circuits/batch-circuit/src/child_commitments.rs) and [bundle-circuit](./crates/circuits/bundle-circuit/src/child_commitments.rs) will be overwritten by `build-guest`.
+Upon building the guest programs, the child commitments in [batch-circuit](./crates/circuits/batch-circuit/src/child_commitments.rs) and [bundle-circuit](./crates/circuits/bundle-circuit/src/child_commitments.rs) will be automatically updated by `build-guest` to match the latest executable commitments. This ensures consistency between the circuit and its dependencies.

42-42: Improve command examples by removing $ prefix

According to Markdown best practices, the $ prefix in command examples should be removed when not showing output.

-$ RUST_MIN_STACK=16777216 make test-single-chunk
+RUST_MIN_STACK=16777216 make test-single-chunk

Apply similar changes to the other command examples.

Also applies to: 48-48, 54-54

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

42-42: Dollar signs used before commands without showing output

(MD014, commands-show-output)


90-94: Provide more concrete examples for BlockWitness

The example code uses placeholder comments for BlockWitness, which isn't very helpful for users. Consider providing a more concrete example with key fields populated.

 let block_witnesses = vec![
-    sbv::primitives::types::BlockWitness { /* */ },
-    sbv::primitives::types::BlockWitness { /* */ },
-    sbv::primitives::types::BlockWitness { /* */ },
+    sbv::primitives::types::BlockWitness {
+        // Example fields with representative values
+        block_hash: [0u8; 32], 
+        // Add other required fields here
+    },
+    // Additional witnesses...
 ];

124-128: Provide more concrete examples for BatchProvingTask fields

Similar to the BlockWitness issue, the example uses field names without showing their structure. Consider providing a more concrete example.

 let task = BatchProvingTask {
-    chunk_proofs, // chunk proofs being aggregated in this batch
-    batch_header, // the header for the batch
-    blob_bytes,   // the EIP-4844 blob that makes this batch data available on L1
+    chunk_proofs: vec![
+        // Example of what a chunk proof might look like
+        ChunkProof { /* relevant fields */ },
+    ],
+    batch_header: BatchHeader {
+        // Example fields with representative values
+        // ...
+    },
+    blob_bytes: vec![/* Example blob bytes */],
 };
zkvm-prover/prover/src/prover/batch.rs (2)

8-9: Add documentation for the missing [BatchCircuit] reference

The doc comment for BatchProver references a BatchCircuit type that isn't imported or defined in the visible code. Consider either importing this type or providing a fully qualified path.

-/// Prover for [`BatchCircuit`].
+/// Prover for batch circuits.
+/// 
+/// See the `scroll-zkvm-batch-circuit` crate for the circuit implementation.
 pub type BatchProver = Prover<BatchProverType>;

18-18: Consider documenting the SEGMENT_SIZE magic number

The SEGMENT_SIZE constant uses a somewhat unusual value: (1 << 22) - 100. While this is likely optimized for the specific requirements of batch proving, a brief comment explaining the rationale would be helpful.

-    const SEGMENT_SIZE: usize = (1 << 22) - 100;
+    // 2^22 minus a small margin of 100 to leave headroom for segment overhead
+    const SEGMENT_SIZE: usize = (1 << 22) - 100;
zkvm-prover/integration/tests/batch_circuit.rs (1)

71-71: Fix typo in comment

There's a small typo in the comment explaining the hash verification.

-    // verify the two task has the same blob bytes
+    // verify the two tasks have the same blob bytes
common/types-rs/base/src/public_inputs/batch.rs (3)

61-84: Consider using a builder pattern for hash construction.

The implementation of pi_hash_euclidv1 creates an intermediate Vec for each hash calculation. For performance optimization, consider implementing a builder pattern or utility function that can construct the hash without allocating a new vector each time.

fn pi_hash_euclidv1(&self) -> B256 {
-    keccak256(
-        std::iter::empty()
-            .chain(self.parent_state_root.as_slice())
-            .chain(self.parent_batch_hash.as_slice())
-            .chain(self.state_root.as_slice())
-            .chain(self.batch_hash.as_slice())
-            .chain(self.chain_id.to_be_bytes().as_slice())
-            .chain(self.withdraw_root.as_slice())
-            .cloned()
-            .collect::<Vec<u8>>(),
-    )
+    let mut buffer = Vec::with_capacity(
+        32 * 5 + 8  // 5 B256 values (32 bytes each) + u64 chain_id (8 bytes)
+    );
+    buffer.extend_from_slice(self.parent_state_root.as_slice());
+    buffer.extend_from_slice(self.parent_batch_hash.as_slice());
+    buffer.extend_from_slice(self.state_root.as_slice());
+    buffer.extend_from_slice(self.batch_hash.as_slice());
+    buffer.extend_from_slice(&self.chain_id.to_be_bytes());
+    buffer.extend_from_slice(self.withdraw_root.as_slice());
+    keccak256(buffer)
}

86-112: Apply the same optimization to pi_hash_euclidv2.

Similarly to the previous comment, pi_hash_euclidv2 would benefit from the same optimization to avoid creating a temporary vector through chained iterators.

fn pi_hash_euclidv2(&self) -> B256 {
-    keccak256(
-        std::iter::empty()
-            .chain(self.parent_state_root.as_slice())
-            .chain(self.parent_batch_hash.as_slice())
-            .chain(self.state_root.as_slice())
-            .chain(self.batch_hash.as_slice())
-            .chain(self.chain_id.to_be_bytes().as_slice())
-            .chain(self.withdraw_root.as_slice())
-            .chain(self.prev_msg_queue_hash.as_slice())
-            .chain(self.post_msg_queue_hash.as_slice())
-            .cloned()
-            .collect::<Vec<u8>>(),
-    )
+    let mut buffer = Vec::with_capacity(
+        32 * 7 + 8  // 7 B256 values (32 bytes each) + u64 (8 bytes)
+    );
+    buffer.extend_from_slice(self.parent_state_root.as_slice());
+    buffer.extend_from_slice(self.parent_batch_hash.as_slice());
+    buffer.extend_from_slice(self.state_root.as_slice());
+    buffer.extend_from_slice(self.batch_hash.as_slice());
+    buffer.extend_from_slice(&self.chain_id.to_be_bytes());
+    buffer.extend_from_slice(self.withdraw_root.as_slice());
+    buffer.extend_from_slice(self.prev_msg_queue_hash.as_slice());
+    buffer.extend_from_slice(self.post_msg_queue_hash.as_slice());
+    keccak256(buffer)
}

125-144:

❓ Verification inconclusive

Consider returning Result instead of assertions in validate.

The validate method uses assertions which will panic if validation fails. For better error handling in production code, consider returning a Result type with meaningful error messages instead of using assertions.

- fn validate(&self, prev_pi: &Self, fork_name: ForkName) {
-     assert_eq!(self.chain_id, prev_pi.chain_id);
-     assert_eq!(self.parent_state_root, prev_pi.state_root);
-     assert_eq!(self.parent_batch_hash, prev_pi.batch_hash);
-     assert_eq!(self.prev_msg_queue_hash, prev_pi.post_msg_queue_hash);
-
-     if fork_name == ForkName::EuclidV1 {
-         assert_eq!(self.prev_msg_queue_hash, B256::ZERO);
-         assert_eq!(prev_pi.prev_msg_queue_hash, B256::ZERO);
-         assert_eq!(self.post_msg_queue_hash, B256::ZERO);
-         assert_eq!(prev_pi.post_msg_queue_hash, B256::ZERO);
-     }
+ fn validate(&self, prev_pi: &Self, fork_name: ForkName) -> Result<(), ValidationError> {
+     if self.chain_id != prev_pi.chain_id {
+         return Err(ValidationError::ChainIdMismatch);
+     }
+     if self.parent_state_root != prev_pi.state_root {
+         return Err(ValidationError::StateRootMismatch);
+     }
+     if self.parent_batch_hash != prev_pi.batch_hash {
+         return Err(ValidationError::BatchHashMismatch);
+     }
+     if self.prev_msg_queue_hash != prev_pi.post_msg_queue_hash {
+         return Err(ValidationError::MsgQueueHashMismatch);
+     }
+
+     if fork_name == ForkName::EuclidV1 {
+         if self.prev_msg_queue_hash != B256::ZERO {
+             return Err(ValidationError::NonZeroMsgQueueHash("current.prev_msg_queue_hash"));
+         }
+         if prev_pi.prev_msg_queue_hash != B256::ZERO {
+             return Err(ValidationError::NonZeroMsgQueueHash("prev.prev_msg_queue_hash"));
+         }
+         if self.post_msg_queue_hash != B256::ZERO {
+             return Err(ValidationError::NonZeroMsgQueueHash("current.post_msg_queue_hash"));
+         }
+         if prev_pi.post_msg_queue_hash != B256::ZERO {
+             return Err(ValidationError::NonZeroMsgQueueHash("prev.post_msg_queue_hash"));
+         }
+     }
+     
+     Ok(())
}

This would require defining a ValidationError enum in the crate. I'm not suggesting this change if the current contract is to panic on validation failures, but it might be worth verifying if there are any callers that would benefit from more graceful error handling:


🏁 Script executed:

#!/bin/bash
# Search for calls to BatchInfo::validate to understand usage patterns
rg -A 2 "\.validate\s*\("

Length of output: 664


Review validate behavior and impact on callers

The validate method is currently used as an internal invariant check (panicking on failure) in multiple places. Converting it to return Result<…, ValidationError> would be a breaking change requiring updates to every callsite:

  • zkvm-prover/integration/src/utils/mod.rs
    • Payload::from(&enveloped).validate(header, &chunk_infos);
    • PayloadV7::from(&enveloped).validate(task.batch_header.must_v7_header(), &chunk_infos);
  • common/types-rs/base/src/public_inputs.rs
    • self.0.validate(&prev_pi.0, self.1)

If you need recoverable validation errors, you’ll have to:

  1. Define a ValidationError enum in this crate.
  2. Change fn validate(&…) to return Result<(), ValidationError>.
  3. Update all callers to propagate or handle the Result.

Otherwise, preserving panics via assert_eq! may be acceptable for guarding internal invariants. Please review and decide on the desired error-handling strategy.
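
If the Result-based route is chosen, a minimal sketch of the assumed ValidationError enum could look as follows (variant names are illustrative, not part of the existing crate):

```rust
use std::fmt;

/// Illustrative error type for recoverable public-input validation failures.
#[derive(Debug, PartialEq)]
pub enum ValidationError {
    ChainIdMismatch,
    StateRootMismatch,
    BatchHashMismatch,
    MsgQueueHashMismatch,
    /// Names the field that was expected to be zero under EuclidV1.
    NonZeroMsgQueueHash(&'static str),
}

impl fmt::Display for ValidationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Reuse the Debug representation for a concise message.
        write!(f, "{:?}", self)
    }
}

impl std::error::Error for ValidationError {}
```

Implementing Display and Error lets callers propagate the failure with ? or wrap it via eyre/anyhow instead of unwinding on an assertion.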

zkvm-prover/bin/Cargo.toml (1)

8-20: Consider specific version constraints for url dependency.

The url dependency uses >=2.5.3 which could allow any future version that might introduce breaking changes. Consider using a more specific version constraint like 2.5.3, or a compatible range like ^2.5.3.

-url = ">=2.5.3"
+url = "^2.5.3"
common/types-rs/batch/src/witness.rs (2)

5-6: Clarify the comment about unnecessary dependency.

The comment "Simply rewrap byte48 to avoid unnecessary dep" could be more specific about which dependency is being avoided and why this approach was chosen.

-/// Simply rewrap byte48 to avoid unnecessary dep
+/// Type alias for a 48-byte array used for KZG commitments and proofs.
+/// This avoids depending directly on the kzg crate or other cryptographic libraries.

8-18: Consider using newtype pattern for better type safety.

While the Bytes48 type alias works, using a newtype pattern would provide better type safety by preventing accidental use of raw byte arrays where KZG commitments or proofs are expected.

-/// Witness required by applying point evaluation
-#[derive(Clone, Debug, rkyv::Archive, rkyv::Deserialize, rkyv::Serialize)]
-#[rkyv(derive(Debug))]
-pub struct PointEvalWitness {
-    /// kzg commitment
-    #[rkyv()]
-    pub kzg_commitment: Bytes48,
-    /// kzg proof
-    #[rkyv()]
-    pub kzg_proof: Bytes48,
-}
+/// A KZG commitment represented as 48 bytes
+#[derive(Clone, Debug, rkyv::Archive, rkyv::Deserialize, rkyv::Serialize)]
+#[rkyv(derive(Debug))]
+pub struct KzgCommitment(Bytes48);
+
+impl KzgCommitment {
+    pub fn as_bytes(&self) -> &[u8; 48] {
+        &self.0
+    }
+}
+
+/// A KZG proof represented as 48 bytes
+#[derive(Clone, Debug, rkyv::Archive, rkyv::Deserialize, rkyv::Serialize)]
+#[rkyv(derive(Debug))]
+pub struct KzgProof(Bytes48);
+
+impl KzgProof {
+    pub fn as_bytes(&self) -> &[u8; 48] {
+        &self.0
+    }
+}
+
+/// Witness required by applying point evaluation
+#[derive(Clone, Debug, rkyv::Archive, rkyv::Deserialize, rkyv::Serialize)]
+#[rkyv(derive(Debug))]
+pub struct PointEvalWitness {
+    /// kzg commitment
+    #[rkyv()]
+    pub kzg_commitment: KzgCommitment,
+    /// kzg proof
+    #[rkyv()]
+    pub kzg_proof: KzgProof,
+}
common/types-rs/base/src/public_inputs.rs (1)

1-14: Consider returning Result in validate method.

The validate method in the PublicInputs trait doesn't specify how validation failures should be handled. Consider changing the method signature to return a Result type for better error handling.

pub trait PublicInputs {
    /// Keccak-256 digest of the public inputs. The public-input hash are revealed as public values
    /// via [`openvm::io::reveal`].
    fn pi_hash(&self) -> B256;

    /// Validation logic between public inputs of two contiguous instances.
-   fn validate(&self, prev_pi: &Self);
+   fn validate(&self, prev_pi: &Self) -> Result<(), ValidationError>;
}
common/types-rs/aggregation/src/lib.rs (2)

71-74: Clarify confusing documentation

The comment "Note that the actual value for each u32 is a byte" is confusing. If each value is a byte, why is it stored as a u32? This needs clarification to avoid misunderstandings.

-/// Note that the actual value for each u32 is a byte.
+/// Note: While stored as u32 values, each element effectively represents a single byte value
+/// in the context of these public inputs.

76-81: Consider adding more documentation to ProofCarryingWitness trait

This trait is essential for the aggregation workflow, but it lacks detailed documentation about the expected behavior and implementation requirements.

Consider adding more comprehensive documentation about:

  1. When this trait should be implemented
  2. What the returned Vec of AggregationInput represents in the context of proof aggregation
  3. Any requirements on the ordering or structure of the returned proofs
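
A hedged sketch of how the documented trait could read; the AggregationInput fields and the method name get_proofs are illustrative stand-ins for the real definitions in common/types-rs/aggregation:

```rust
/// Stand-in for the real aggregation input; fields are illustrative.
pub struct AggregationInput {
    /// Public values of the child proof; each u32 carries a single byte value.
    pub public_values: Vec<u32>,
    /// Commitment identifying the child program being verified.
    pub commitment: [u32; 8],
}

/// Implemented by witness types that carry the child proofs they aggregate.
///
/// Implementors should return the proofs in the same order as the aggregated
/// units (e.g. batches in a bundle), since the aggregation circuit verifies
/// them positionally against the corresponding infos.
pub trait ProofCarryingWitness {
    fn get_proofs(&self) -> Vec<AggregationInput>;
}
```

Documenting the ordering contract at the trait level keeps the requirement in one place rather than restating it on every implementor.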
common/types-rs/batch/src/header/v7.rs (3)

55-65: Optimize memory allocation in batch_hash method

The current implementation creates multiple temporary allocations, including a vector just for a single byte (vec![self.version]). This could be optimized for better performance.

-    fn batch_hash(&self) -> B256 {
-        keccak256(
-            std::iter::empty()
-                .chain(vec![self.version].as_slice())
-                .chain(self.batch_index.to_be_bytes().as_slice())
-                .chain(self.blob_versioned_hash.as_slice())
-                .chain(self.parent_batch_hash.as_slice())
-                .cloned()
-                .collect::<Vec<u8>>(),
-        )
+    fn batch_hash(&self) -> B256 {
+        let mut bytes = Vec::with_capacity(1 + 8 + 32 + 32);
+        bytes.push(self.version);
+        bytes.extend_from_slice(&self.batch_index.to_be_bytes());
+        bytes.extend_from_slice(self.blob_versioned_hash.as_slice());
+        bytes.extend_from_slice(self.parent_batch_hash.as_slice());
+        keccak256(bytes)
+    }

81-94: Apply the same optimization to ArchivedBatchHeaderV7's batch_hash method

The archived version of the batch_hash method has the same inefficient memory allocation pattern as the non-archived version.

-    fn batch_hash(&self) -> B256 {
-        let batch_index: u64 = self.batch_index.into();
-        let blob_versioned_hash: B256 = self.blob_versioned_hash.into();
-        let parent_batch_hash: B256 = self.parent_batch_hash.into();
-        keccak256(
-            std::iter::empty()
-                .chain(vec![self.version].as_slice())
-                .chain(batch_index.to_be_bytes().as_slice())
-                .chain(blob_versioned_hash.as_slice())
-                .chain(parent_batch_hash.as_slice())
-                .cloned()
-                .collect::<Vec<u8>>(),
-        )
+    fn batch_hash(&self) -> B256 {
+        let batch_index: u64 = self.batch_index.into();
+        let blob_versioned_hash: B256 = self.blob_versioned_hash.into();
+        let parent_batch_hash: B256 = self.parent_batch_hash.into();
+        
+        let mut bytes = Vec::with_capacity(1 + 8 + 32 + 32);
+        bytes.push(self.version);
+        bytes.extend_from_slice(&batch_index.to_be_bytes());
+        bytes.extend_from_slice(blob_versioned_hash.as_slice());
+        bytes.extend_from_slice(parent_batch_hash.as_slice());
+        keccak256(bytes)
+    }

34-45: Consider caching the hash calculation

The batch hash is likely to be calculated multiple times but won't change for a given instance. Consider caching the result on first calculation.

This would require adding a cached_hash field to the struct that's populated lazily when batch_hash() is first called.
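A minimal sketch of the lazy-caching idea using std::sync::OnceLock, with a stand-in hash so it stays self-contained (the real code would cache the B256 from keccak256, and the archived rkyv variant would need separate treatment):

```rust
use std::sync::OnceLock;

/// Stand-in for keccak256 so the sketch has no external deps.
fn fake_hash(bytes: &[u8]) -> u64 {
    bytes.iter().fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(*b as u64))
}

struct BatchHeader {
    version: u8,
    batch_index: u64,
    /// Lazily-populated cache: computed on first `batch_hash` call, reused after.
    cached_hash: OnceLock<u64>,
}

impl BatchHeader {
    fn batch_hash(&self) -> u64 {
        *self.cached_hash.get_or_init(|| {
            let mut bytes = Vec::with_capacity(1 + 8);
            bytes.push(self.version);
            bytes.extend_from_slice(&self.batch_index.to_be_bytes());
            fake_hash(&bytes)
        })
    }
}

fn main() {
    let h = BatchHeader { version: 7, batch_index: 42, cached_hash: OnceLock::new() };
    let first = h.batch_hash();
    // Second call returns the cached value without recomputing.
    assert_eq!(first, h.batch_hash());
}
```

`OnceLock` keeps the header `Sync` without interior-mutability hazards, at the cost of one extra word per instance.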

zkvm-prover/Makefile (2)

29-35: Uncomment or remove commented-out clippy targets

There are several commented-out clippy commands. Either uncomment them if they're needed or remove them for clarity. Commented code can lead to confusion about what's actually being checked.

If these targets are meant to be commented out temporarily, add a comment explaining why and when they should be re-enabled.


1-8: Consider documenting the RUST_MIN_STACK and RUST_LOG environment variables

These environment variables significantly affect the behavior of the code, but there's no documentation about their purpose or optimal values.

Add comments explaining what these variables control and provide guidance on appropriate values for different scenarios (e.g., development, production, debugging).

zkvm-prover/integration/tests/chunk_circuit.rs (1)

43-44: Remove commented code in test_cycle function

The commented-out line importing rayon is not needed and should be removed to maintain clean code.

-    // use rayon::prelude::*;
zkvm-prover/integration/src/testers/bundle.rs (2)

26-38: Hard-coded proof filenames make the test fragile and hard to reuse

The two batch-proof JSON files are baked into the implementation.
If the test-data directory changes (e.g. new fork, new fixtures, CI pruning) the tester will panic long before the compiler can help.

Consider parameterising the filenames (e.g. via env-vars or a small helper that scans proofs/*.json) so that adding or replacing fixtures needs no code changes.
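A dependency-free sketch of the scanning approach, discovering every `*.json` fixture under a directory instead of hard-coding the two filenames (the directory path here is illustrative):

```rust
use std::{fs, io, path::{Path, PathBuf}};

/// Collect every `*.json` proof fixture under `dir`, sorted for determinism.
fn discover_proofs(dir: &Path) -> io::Result<Vec<PathBuf>> {
    let mut proofs: Vec<PathBuf> = fs::read_dir(dir)?
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .filter(|p| p.extension().is_some_and(|ext| ext == "json"))
        .collect();
    proofs.sort();
    Ok(proofs)
}

fn main() -> io::Result<()> {
    // Demo setup: a scratch dir with two JSON fixtures and one unrelated file.
    let dir = std::env::temp_dir().join("proof-fixture-demo");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("batch-proof-1.json"), "{}")?;
    fs::write(dir.join("batch-proof-2.json"), "{}")?;
    fs::write(dir.join("notes.txt"), "ignored")?;
    let proofs = discover_proofs(&dir)?;
    assert_eq!(proofs.len(), 2);
    Ok(())
}
```

With this, dropping a new fixture into the directory is enough; no code change is required.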


22-25: PATH_PROJECT_ROOT is relative to the integration crate – brittle in workspaces

Using "./../build/bundle-circuit" assumes the working directory is always the integration crate root.
Cargo executes tests from the workspace root, so this path happens to work today but will break if the crate is moved.

Prefer env!("CARGO_MANIFEST_DIR") or Path::join on std::env::current_dir()? to build an absolute path.

common/types-rs/chunk/src/execute.rs (1)

164-165: The LazyLock::get(&MAINNET) assertion is racy

LazyLock::get() only observes the cell and never runs the initialiser, so the global is not actually touched here; but the check is a point-in-time snapshot, and another thread can force MAINNET immediately after get() returns None, so the assertion can pass while the invariant is already violated.

Move the check behind a cfg(test) guard or drop it; the extra safety does not hold up under concurrency.

common/types-rs/batch/src/header/v6.rs (2)

75-91: Avoid repeated heap allocation when computing batch_hash

std::iter::empty().chain(...).collect::<Vec<u8>>() allocates a new Vec every call – this runs on every hash verification.

Use tiny_keccak::Keccak (already a transitive dependency via alloy-primitives) to feed the slices into the hasher without an intermediate buffer, or at minimum build the pre-image with a single allocation:

-        keccak256(
-            std::iter::empty()
-                // many chain() calls …
-                .cloned()
-                .collect::<Vec<u8>>(),
-        )
+        keccak256(&[
+            &[self.version],
+            &self.batch_index.to_be_bytes(),
+            &self.l1_message_popped.to_be_bytes(),
+            &self.total_l1_message_popped.to_be_bytes(),
+            self.data_hash.as_slice(),
+            self.blob_versioned_hash.as_slice(),
+            self.parent_batch_hash.as_slice(),
+            &self.last_block_timestamp.to_be_bytes(),
+            self.blob_data_proof[0].as_slice(),
+            self.blob_data_proof[1].as_slice(),
+        ]
+        .concat())

94-132: Duplicated hashing logic between archived and live headers

The archived impl re-copies the entire concatenation logic; any future change must be duplicated manually and risks divergence.

Factor the common code into a small fn hash_inner(fields: impl Iterator<Item=&[u8]>) -> B256 and call it from both impls.
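One way to factor it, sketched with a helper that builds the shared pre-image once; keccak256 is left out so the layout itself is testable, and both impls would call e.g. `keccak256(&hash_preimage([...]))`:

```rust
/// Concatenate the header fields into the hash pre-image once, so the live
/// and archived impls share a single source of truth for the byte layout.
fn hash_preimage<'a>(parts: impl IntoIterator<Item = &'a [u8]>) -> Vec<u8> {
    let mut buf = Vec::new();
    for part in parts {
        buf.extend_from_slice(part);
    }
    buf
}

fn main() {
    // Both BatchHeaderV6 and ArchivedBatchHeaderV6 would build the same part
    // list (version, batch_index, ...) and hash the returned buffer.
    let version = [6u8];
    let batch_index = 42u64.to_be_bytes();
    let pre = hash_preimage([version.as_slice(), batch_index.as_slice()]);
    assert_eq!(pre.len(), 1 + 8);
    assert_eq!(pre[0], 6);
}
```

Any future field addition then touches exactly one part list per impl, with the concatenation logic written once.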

zkvm-prover/bin/src/zk_circuits_handler.rs (2)

53-60: Propagate deserialization context in error message

When serde_json::from_str fails, the current ? loses the original input context, making debugging hard.
Wrap the error with additional context:

-        let chunk_task: ChunkTaskDetail = serde_json::from_str(&input)?;
+        let chunk_task: ChunkTaskDetail =
+            serde_json::from_str(&input).map_err(|e| anyhow::anyhow!("invalid ChunkTaskDetail JSON: {e}; raw={input}"))?;

67-78: Consider parallelising RPC calls for large chunks

The loop fetches blocks & witnesses sequentially; proving a big chunk (dozens of blocks) will wait on N × RTT.
Using futures::stream::iter(block_hashes).map(...).buffer_unordered(N) would keep the logic simple while overlapping I/O.
Not mandatory for correctness, but will noticeably shorten preprocessing latency.
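To keep this sketch free of the futures dependency, the same overlap can be illustrated with std scoped threads; `fetch_block_witness` is a stand-in for the real RPC round-trip:

```rust
use std::thread;

/// Stand-in for the per-block RPC round-trip.
fn fetch_block_witness(block_hash: u64) -> String {
    format!("witness-for-{block_hash:#x}")
}

/// Fetch all witnesses concurrently instead of serially paying N x RTT.
fn fetch_all(block_hashes: &[u64]) -> Vec<String> {
    thread::scope(|s| {
        let handles: Vec<_> = block_hashes
            .iter()
            .map(|&h| s.spawn(move || fetch_block_witness(h)))
            .collect();
        // Joining in spawn order preserves input order even though the
        // calls themselves ran concurrently.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let out = fetch_all(&[0x1, 0x2, 0x3]);
    assert_eq!(out.len(), 3);
    assert_eq!(out[0], "witness-for-0x1");
}
```

In the async prover the `buffer_unordered(N)` variant additionally caps in-flight requests, which matters against a rate-limited l2geth endpoint.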

zkvm-prover/integration/tests/bundle_circuit.rs (1)

118-120: Test over-constrains the expected chunk count

assert_eq!(chunk_tasks.len(), 3); makes the test brittle: any change in fixture size will fail CI even when the prover works.
Prefer asserting > 0 or deriving the expected count from the fixtures dynamically.

zkvm-prover/integration/src/lib.rs (1)

104-108: Avoid allocating String for constant file-name

fd_app_exe is called frequently yet always returns the same literal. Returning &'static str removes repeated heap allocations.

-    fn fd_app_exe() -> String {
-        FD_APP_EXE.to_string()
+    fn fd_app_exe() -> &'static str {
+        FD_APP_EXE
     }
zkvm-prover/integration/src/utils/mod.rs (1)

388-391: unwrap() will panic inside tests – prefer ? to propagate errors

Using unwrap() in test helpers hides the real error source and stops the entire test suite on the first failure.

-    write_json(path_testdata.join("batch-task-test-out.json"), &task).unwrap();
+    write_json(path_testdata.join("batch-task-test-out.json"), &task)?;
zkvm-prover/integration/src/testers/chunk.rs (1)

47-55: TRACE_PATH handling ignores multi-pattern inputs

When TRACE_PATH contains multiple comma-separated patterns (e.g., "*.json,*.zst"), the current code passes the whole string to glob, producing no matches. Splitting on , (or platform path separator) would make the env-var far more ergonomic.
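A small sketch of the splitting step, under the assumption that each resulting pattern is then passed to glob individually:

```rust
/// Split a TRACE_PATH-style value into individual glob patterns, tolerating
/// surrounding whitespace and empty segments. The caller would feed each
/// returned pattern to `glob::glob` separately instead of the whole string.
fn split_patterns(raw: &str) -> Vec<&str> {
    raw.split(',')
        .map(str::trim)
        .filter(|p| !p.is_empty())
        .collect()
}

fn main() {
    assert_eq!(split_patterns("*.json,*.zst"), vec!["*.json", "*.zst"]);
    assert_eq!(split_patterns(" *.json , "), vec!["*.json"]);
    assert_eq!(split_patterns("single.json"), vec!["single.json"]);
}
```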

common/types-rs/base/src/public_inputs/chunk.rs (5)

50-58: Add an error message to ease debugging (and consider debug_assert!).

assert_eq!(bytes.len(), SIZE_BLOCK_CTX); will panic with a generic message.
A descriptive assertion (or a debug_assert!) makes it much easier to trace malformed input slices, especially when they come from deserialised network payloads.

-assert_eq!(bytes.len(), SIZE_BLOCK_CTX);
+assert_eq!(
+    bytes.len(),
+    SIZE_BLOCK_CTX,
+    "BlockContextV2::from: expected {SIZE_BLOCK_CTX}-byte slice, got {} bytes",
+    bytes.len()
+);

70-80: Avoid two rounds of allocation in to_bytes().

to_bytes() currently chains iterators and finally collects into a Vec<u8>.
When the caller immediately appends the result into another buffer (as done in
pi_hash_euclidv2) this leads to an avoidable extra allocation & copy.

Consider filling a [u8; SIZE_BLOCK_CTX] inline or writing into a supplied
Vec<u8>/&mut Vec<u8> to eliminate the second allocation.
Example:

 pub fn to_bytes(&self) -> [u8; SIZE_BLOCK_CTX] {
-    std::iter::empty()
-        .chain(self.timestamp.to_be_bytes())
-        .chain(self.base_fee.to_be_bytes::<32>())
-        ...
-        .collect()
+    let mut buf = [0u8; SIZE_BLOCK_CTX];
+    buf[..8].copy_from_slice(&self.timestamp.to_be_bytes());
+    buf[8..40].copy_from_slice(&self.base_fee.to_be_bytes::<32>());
+    ...
+    buf
 }

146-155: Eliminate the redundant Vec allocation in pi_hash_euclidv1().

keccak256 already accepts anything that can be borrowed as a byte slice, so the Vec<u8> collected from the chained iterator exists only to be passed in and dropped; merely borrowing it changes nothing.

Introduce a keccak256_iter helper that feeds each field's bytes into the hasher directly (e.g. tiny_keccak's per-slice update), or build the pre-image into a fixed-size array since the field layout is statically known; either removes the intermediate heap buffer.


108-114: Minor docs/grammar nit – “definition” is misspelled.

/// It is a legacy field and can be omitted in new defination
definition


240-246: validate() duplicates zero-checks already implied by earlier assertions.

You first assert
self.prev_msg_queue_hash == prev_pi.post_msg_queue_hash
and then for EuclidV1 assert each of those fields is ZERO.
The second set of assertions is sufficient; the first one is redundant for
the EuclidV1 path and wastes cycles. Consider gating the chaining
assertion to EuclidV2 only.

-assert_eq!(self.prev_msg_queue_hash, prev_pi.post_msg_queue_hash);
-
-// message queue hash is used only after euclidv2
-if fork_name == ForkName::EuclidV1 {
+if fork_name == ForkName::EuclidV2 {
+    assert_eq!(self.prev_msg_queue_hash, prev_pi.post_msg_queue_hash);
+} else {
     ...
 }
zkvm-prover/prover/src/proof/mod.rs (2)

61-77: Guard against malformed instances length before slicing.

assert_eq!(value.instances.len() % 32, 0, "...") ensures 32-byte alignment,
but we also rely on there being at least one chunk. Add a length check to
avoid collect::<Vec<Fr>>() on an empty iterator which would later surface
as a cryptic prover error.

 assert_eq!(
     value.instances.len() % 32,
     0,
     "expect len(instances) % 32 == 0"
 );
+assert!(
+    !value.instances.is_empty(),
+    "EvmProof.instances must contain at least one 32-byte field"
+);

90-99: Serde remote definition may drift from RootProof – guard against silent divergence.

Note that #[serde(other)] applies only to enum variants, so it cannot be used on a struct field here. With serde's defaults, unknown fields are ignored during deserialisation, so if RootProof gains fields the remote definition will silently drop them rather than fail loudly. Consider a serialize/deserialize round-trip test against a known-good RootProof fixture (or #[serde(deny_unknown_fields)] on RootProofDef) so that any divergence surfaces as a test failure instead of silent data loss.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 0d8b00c and 9c5c29e.

⛔ Files ignored due to path filters (2)
  • Cargo.lock is excluded by !**/*.lock
  • zkvm-prover/integration/testdata/verifier/verifier.bin is excluded by !**/*.bin
📒 Files selected for processing (107)
  • .dockerignore (1 hunks)
  • Cargo.toml (1 hunks)
  • build/.cargo/config.toml (1 hunks)
  • common/types-rs/Cargo.toml (1 hunks)
  • common/types-rs/README.md (1 hunks)
  • common/types-rs/aggregation/Cargo.toml (1 hunks)
  • common/types-rs/aggregation/src/lib.rs (1 hunks)
  • common/types-rs/base/Cargo.toml (1 hunks)
  • common/types-rs/base/src/lib.rs (1 hunks)
  • common/types-rs/base/src/public_inputs.rs (1 hunks)
  • common/types-rs/base/src/public_inputs/batch.rs (1 hunks)
  • common/types-rs/base/src/public_inputs/bundle.rs (1 hunks)
  • common/types-rs/base/src/public_inputs/chunk.rs (1 hunks)
  • common/types-rs/base/src/utils/hash.rs (1 hunks)
  • common/types-rs/base/src/utils/mod.rs (1 hunks)
  • common/types-rs/batch/Cargo.toml (1 hunks)
  • common/types-rs/batch/src/header/mod.rs (1 hunks)
  • common/types-rs/batch/src/header/v6.rs (1 hunks)
  • common/types-rs/batch/src/header/v7.rs (1 hunks)
  • common/types-rs/batch/src/lib.rs (1 hunks)
  • common/types-rs/batch/src/payload/mod.rs (1 hunks)
  • common/types-rs/batch/src/payload/v6.rs (1 hunks)
  • common/types-rs/batch/src/payload/v7.rs (1 hunks)
  • common/types-rs/batch/src/witness.rs (1 hunks)
  • common/types-rs/bundle/Cargo.toml (1 hunks)
  • common/types-rs/bundle/src/lib.rs (1 hunks)
  • common/types-rs/bundle/src/witness.rs (1 hunks)
  • common/types-rs/chunk/Cargo.toml (1 hunks)
  • common/types-rs/chunk/src/execute.rs (1 hunks)
  • common/types-rs/chunk/src/lib.rs (1 hunks)
  • common/types-rs/chunk/src/public_inputs.rs (1 hunks)
  • common/types-rs/chunk/src/utils.rs (1 hunks)
  • common/types-rs/chunk/src/witness.rs (1 hunks)
  • common/types-rs/src/lib.rs (1 hunks)
  • common/types/message/bundle-proof-sample.json (1 hunks)
  • rust-toolchain (1 hunks)
  • rustfmt.toml (1 hunks)
  • zkvm-prover/Cargo.toml (0 hunks)
  • zkvm-prover/Dockerfile (1 hunks)
  • zkvm-prover/Makefile (1 hunks)
  • zkvm-prover/README.md (1 hunks)
  • zkvm-prover/bin/Cargo.toml (1 hunks)
  • zkvm-prover/bin/Makefile (1 hunks)
  • zkvm-prover/bin/src/main.rs (1 hunks)
  • zkvm-prover/bin/src/prover.rs (5 hunks)
  • zkvm-prover/bin/src/types.rs (1 hunks)
  • zkvm-prover/bin/src/zk_circuits_handler.rs (1 hunks)
  • zkvm-prover/bin/src/zk_circuits_handler/euclid.rs (1 hunks)
  • zkvm-prover/bin/src/zk_circuits_handler/euclidV2.rs (1 hunks)
  • zkvm-prover/build-guest-actions-entrypoint.sh (1 hunks)
  • zkvm-prover/build-guest.sh (1 hunks)
  • zkvm-prover/build/.gitignore (1 hunks)
  • zkvm-prover/build/batch-circuit/batch_exe_commit.rs (1 hunks)
  • zkvm-prover/build/batch-circuit/batch_leaf_commit.rs (1 hunks)
  • zkvm-prover/build/batch-circuit/openvm.toml (1 hunks)
  • zkvm-prover/build/bundle-circuit/bundle_exe_commit.rs (1 hunks)
  • zkvm-prover/build/bundle-circuit/bundle_exe_euclidv1_commit.rs (1 hunks)
  • zkvm-prover/build/bundle-circuit/bundle_leaf_commit.rs (1 hunks)
  • zkvm-prover/build/bundle-circuit/openvm.toml (1 hunks)
  • zkvm-prover/build/chunk-circuit/chunk_exe_commit.rs (1 hunks)
  • zkvm-prover/build/chunk-circuit/chunk_exe_rv32_commit.rs (1 hunks)
  • zkvm-prover/build/chunk-circuit/chunk_leaf_commit.rs (1 hunks)
  • zkvm-prover/build/chunk-circuit/openvm.toml (1 hunks)
  • zkvm-prover/download-release.sh (1 hunks)
  • zkvm-prover/integration/.gitignore (1 hunks)
  • zkvm-prover/integration/Cargo.toml (1 hunks)
  • zkvm-prover/integration/src/lib.rs (1 hunks)
  • zkvm-prover/integration/src/testers/batch.rs (1 hunks)
  • zkvm-prover/integration/src/testers/bundle.rs (1 hunks)
  • zkvm-prover/integration/src/testers/chunk.rs (1 hunks)
  • zkvm-prover/integration/src/testers/mod.rs (1 hunks)
  • zkvm-prover/integration/src/utils/mod.rs (1 hunks)
  • zkvm-prover/integration/testdata/flatten-trace.py (1 hunks)
  • zkvm-prover/integration/testdata/phase1/proofs/.gitignore (1 hunks)
  • zkvm-prover/integration/testdata/phase1/proofs/README.md (1 hunks)
  • zkvm-prover/integration/testdata/phase1/tasks/README.md (1 hunks)
  • zkvm-prover/integration/testdata/phase1/witnesses/12508460.json (1 hunks)
  • zkvm-prover/integration/testdata/phase1/witnesses/12508461.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/proofs/.gitignore (1 hunks)
  • zkvm-prover/integration/testdata/phase2/proofs/README.md (1 hunks)
  • zkvm-prover/integration/testdata/phase2/tasks/README.md (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/1.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/2.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/3.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/4.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/5.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/6.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/7.json (1 hunks)
  • zkvm-prover/integration/testdata/phase2/witnesses/8.json (1 hunks)
  • zkvm-prover/integration/testdata/verifier/verifier.sol (1 hunks)
  • zkvm-prover/integration/tests/batch_circuit.rs (1 hunks)
  • zkvm-prover/integration/tests/bundle_circuit.rs (1 hunks)
  • zkvm-prover/integration/tests/chunk_circuit.rs (1 hunks)
  • zkvm-prover/prover/Cargo.toml (1 hunks)
  • zkvm-prover/prover/src/commitments/batch_exe_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/batch_leaf_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/bundle_exe_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/bundle_exe_euclidv1_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/bundle_leaf_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/chunk_exe_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/chunk_exe_rv32_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/chunk_leaf_commit.rs (1 hunks)
  • zkvm-prover/prover/src/commitments/mod.rs (1 hunks)
  • zkvm-prover/prover/src/error.rs (1 hunks)
  • zkvm-prover/prover/src/lib.rs (1 hunks)
  • zkvm-prover/prover/src/proof/mod.rs (1 hunks)
  • zkvm-prover/prover/src/prover/batch.rs (1 hunks)
⛔ Files not processed due to max files limit (30)
  • zkvm-prover/prover/src/prover/bundle.rs
  • zkvm-prover/prover/src/prover/chunk.rs
  • zkvm-prover/prover/src/prover/mod.rs
  • zkvm-prover/prover/src/setup.rs
  • zkvm-prover/prover/src/task/batch.rs
  • zkvm-prover/prover/src/task/bundle.rs
  • zkvm-prover/prover/src/task/chunk.rs
  • zkvm-prover/prover/src/task/mod.rs
  • zkvm-prover/prover/src/utils/mod.rs
  • zkvm-prover/prover/src/utils/vm.rs
  • zkvm-prover/prover/testdata/bundle-proof.json
  • zkvm-prover/release.sh
  • zkvm-prover/rust-toolchain
  • zkvm-prover/rustfmt.toml
  • zkvm-prover/src/zk_circuits_handler.rs
  • zkvm-prover/verifier/Cargo.toml
  • zkvm-prover/verifier/src/commitments/batch_exe_commit.rs
  • zkvm-prover/verifier/src/commitments/batch_leaf_commit.rs
  • zkvm-prover/verifier/src/commitments/bundle_exe_commit.rs
  • zkvm-prover/verifier/src/commitments/bundle_exe_euclidv1_commit.rs
  • zkvm-prover/verifier/src/commitments/bundle_leaf_commit.rs
  • zkvm-prover/verifier/src/commitments/chunk_exe_commit.rs
  • zkvm-prover/verifier/src/commitments/chunk_exe_rv32_commit.rs
  • zkvm-prover/verifier/src/commitments/chunk_leaf_commit.rs
  • zkvm-prover/verifier/src/commitments/mod.rs
  • zkvm-prover/verifier/src/evm.rs
  • zkvm-prover/verifier/src/lib.rs
  • zkvm-prover/verifier/src/verifier.rs
  • zkvm-prover/verifier/testdata/proofs/bundle-proof-phase1.json
  • zkvm-prover/verifier/testdata/proofs/bundle-proof-phase2.json
💤 Files with no reviewable changes (1)
  • zkvm-prover/Cargo.toml
🧰 Additional context used
🧬 Code Graph Analysis (8)
common/types-rs/base/src/utils/mod.rs (1)
common/types-rs/base/src/utils/hash.rs (3)
  • keccak256 (9-17)
  • keccak256_rv32 (19-26)
  • sha256_rv32 (28-35)
zkvm-prover/bin/src/types.rs (1)
common/types/message/message.go (1)
  • ChunkTaskDetail (49-54)
zkvm-prover/bin/src/prover.rs (1)
zkvm-prover/bin/src/zk_circuits_handler.rs (1)
  • create (42-51)
common/types-rs/bundle/src/witness.rs (4)
coordinator/internal/types/block.go (1)
  • BatchInfo (4-8)
common/types-rs/aggregation/src/lib.rs (2)
  • get_proofs (80-80)
  • from (63-68)
common/types-rs/batch/src/witness.rs (1)
  • get_proofs (44-56)
common/types-rs/base/src/public_inputs/batch.rs (1)
  • from (47-58)
common/types-rs/chunk/src/public_inputs.rs (2)
common/types/message/message.go (1)
  • BlockContextV2 (138-144)
common/types-rs/base/src/public_inputs/chunk.rs (3)
  • from (39-47)
  • from (51-67)
  • from (195-213)
common/types-rs/src/lib.rs (2)
coordinator/internal/types/block.go (1)
  • BatchInfo (4-8)
common/types/message/message.go (2)
  • BlockContextV2 (138-144)
  • ChunkInfo (121-135)
zkvm-prover/prover/src/lib.rs (2)
common/types/message/message.go (3)
  • BatchProof (186-189)
  • BundleProof (243-246)
  • ChunkProof (153-155)
coordinator/internal/types/prover.go (1)
  • ProverType (20-20)
zkvm-prover/bin/src/zk_circuits_handler.rs (1)
common/types/message/message.go (2)
  • ProofType (22-22)
  • ChunkTaskDetail (49-54)
🪛 markdownlint-cli2 (0.17.2)
zkvm-prover/integration/testdata/phase2/proofs/README.md

3-3: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)

zkvm-prover/integration/testdata/phase1/tasks/README.md

3-3: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)

zkvm-prover/integration/testdata/phase2/tasks/README.md

3-3: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)

zkvm-prover/README.md

3-3: Emphasis used instead of a heading
null

(MD036, no-emphasis-as-heading)


34-34: Dollar signs used before commands without showing output
null

(MD014, commands-show-output)


42-42: Dollar signs used before commands without showing output
null

(MD014, commands-show-output)


48-48: Dollar signs used before commands without showing output
null

(MD014, commands-show-output)


54-54: Dollar signs used before commands without showing output
null

(MD014, commands-show-output)

🪛 LanguageTool
common/types-rs/README.md

[uncategorized] ~5-~5: This verb does not appear to agree with the subject. Consider using a different form.
Context: ...epted by circuits as input. This crate help decoupling circuits with other crates a...

(AI_EN_LECTOR_REPLACEMENT_VERB_AGREEMENT)


[grammar] ~5-~5: The verb ‘Avoiding’ is used with the gerund form.
Context: ...eir dependencies neat and controllable. Avoiding to involve crates which is not compatible with the...

(ADMIT_ENJOY_VB)


[uncategorized] ~5-~5: This verb does not appear to agree with the subject. Consider using a different form.
Context: ...lable. Avoiding to involve crates which is not compatible with the tootlchain of o...

(AI_EN_LECTOR_REPLACEMENT_VERB_AGREEMENT)


[style] ~5-~5: Consider using “incompatible” to avoid wordiness.
Context: ...le. Avoiding to involve crates which is not compatible with the tootlchain of openvm from indi...

(NOT_ABLE_PREMIUM)

zkvm-prover/integration/testdata/phase1/tasks/README.md

[uncategorized] ~3-~3: A determiner appears to be missing. Consider inserting it.
Context: ...sk file here to execute some tests ### Following tests require batch task (naming as `ba...

(AI_EN_LECTOR_MISSING_DETERMINER)


[uncategorized] ~3-~3: You might be missing the article “a” here.
Context: ...some tests ### Following tests require batch task (naming as batch-task.json) + `t...

(AI_EN_LECTOR_MISSING_DETERMINER_A)


[uncategorized] ~3-~3: This verb may not be in the correct form. Consider using a different form for this context.
Context: ...### Following tests require batch task (naming as batch-task.json) + `test-execute-b...

(AI_EN_LECTOR_REPLACEMENT_VERB_FORM)


[misspelling] ~7-~7: Use “an” instead of ‘a’ if the following word starts with a vowel sound, e.g. ‘an article’, ‘an hour’.
Context: ... be generated into the output dir while a e2e test has completed

(EN_A_VS_AN)

zkvm-prover/integration/testdata/phase2/tasks/README.md

[uncategorized] ~3-~3: A determiner appears to be missing. Consider inserting it.
Context: ...sk file here to execute some tests ### Following tests require batch task (naming as `ba...

(AI_EN_LECTOR_MISSING_DETERMINER)


[uncategorized] ~3-~3: You might be missing the article “a” here.
Context: ...some tests ### Following tests require batch task (naming as batch-task.json) + `t...

(AI_EN_LECTOR_MISSING_DETERMINER_A)


[uncategorized] ~3-~3: This verb may not be in the correct form. Consider using a different form for this context.
Context: ...### Following tests require batch task (naming as batch-task.json) + `test-execute-b...

(AI_EN_LECTOR_REPLACEMENT_VERB_FORM)


[misspelling] ~7-~7: Use “an” instead of ‘a’ if the following word starts with a vowel sound, e.g. ‘an article’, ‘an hour’.
Context: ... be generated into the output dir while a e2e test has completed

(EN_A_VS_AN)

zkvm-prover/README.md

[uncategorized] ~23-~23: Possible missing preposition found.
Context: ...e into our implementation, please refer the interfaces doc....

(AI_HYDRA_LEO_MISSING_TO)


[typographical] ~27-~27: Consider adding a comma here.
Context: ....md) doc. ## Testing For more commands please refer the Makefile. ### ...

(PLEASE_COMMA)


[uncategorized] ~27-~27: Possible missing preposition found.
Context: ...Testing For more commands please refer the Makefile. ### Build Gues...

(AI_HYDRA_LEO_MISSING_TO)

🪛 Hadolint (2.12.0)
zkvm-prover/Dockerfile

[error] 9-9: Use COPY instead of ADD for files and folders

(DL3020)


[error] 10-10: Use COPY instead of ADD for files and folders

(DL3020)


[error] 11-11: Use COPY instead of ADD for files and folders

(DL3020)


[error] 12-12: Use COPY instead of ADD for files and folders

(DL3020)


[error] 13-13: Use COPY instead of ADD for files and folders

(DL3020)

⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: tests
🔇 Additional comments (125)
rustfmt.toml (4)

2-2: Rust edition set to 2024.
Targeting the latest edition is great for new syntax and features. Ensure all workspace crates have their Cargo.toml editions aligned.


3-3: Crate-level import grouping.
imports_granularity = "Crate" will organize imports by crate boundary, improving readability.


4-5: Line width and newline style.
max_width = 100 enforces a reasonable line length, and newline_style = "Unix" maintains consistency across platforms.


6-8: Comment formatting flags—please verify.
You’ve enabled normalize_comments = true, set style_edition = "2024", and disabled wrapping (wrap_comments = false). Confirm this aligns with your team’s guidelines for comment style and length.

zkvm-prover/integration/.gitignore (1)

1-1: Approve ignoring transient text artifacts.

The addition of *.txt will prevent temporary text files from cluttering the integration directory. Ensure that only ephemeral .txt outputs are generated here and that no essential test fixtures rely on .txt files under version control.

zkvm-prover/integration/testdata/phase2/proofs/.gitignore (1)

1-1: Approve ignoring generated proof JSON files.

The *.json rule correctly excludes transient proof artifacts in phase2. Confirm that any committed JSON fixtures (e.g., canonical test vectors) are stored outside this directory hierarchy.

zkvm-prover/integration/testdata/phase1/proofs/.gitignore (1)

1-1: Approve ignoring generated proof JSON files for phase1.

This rule cleanly excludes auto-generated JSON proofs in phase1. Make sure that any reference fixtures remain under a separate, tracked directory if needed.

zkvm-prover/prover/src/commitments/bundle_exe_commit.rs (1)

1-1: Approve symlink for bundle circuit commitment constants.

The symlink target (../../../build/bundle-circuit/bundle_exe_commit.rs) correctly points from prover/src/commitments to the generated build output. This preserves a clear separation between hand-written code and auto-generated constants.

zkvm-prover/prover/src/commitments/bundle_exe_euclidv1_commit.rs (1)

1-1: Approve symlink for EuclidV1 bundle circuit commitment constants.

Consistent with the existing pattern, this symlink ensures the EuclidV1 constants are sourced from the build directory. Paths and naming align with the rest of the commitment modules.

zkvm-prover/prover/src/commitments/batch_exe_commit.rs (1)

1-5: Approve symlink commit constants file
This file correctly serves as a symbolic link to the generated batch_exe_commit.rs under build/. Ensuring it remains committed as a symlink avoids duplication of generated content.

zkvm-prover/build/.gitignore (1)

1-3: .gitignore patterns look correct
The ignore rules for *.vmexe, root_verifier.asm, and bundle-circuit/digest_* appropriately clean up temporary build artifacts.

zkvm-prover/prover/src/commitments/batch_leaf_commit.rs (1)

1-4: Symlink and constant declaration are correct
The symbolic link to ../../../build/batch-circuit/batch_leaf_commit.rs correctly references the auto-generated commitment file, and the COMMIT array of eight u32 values matches the expected parameters for batch leaf commitments.

zkvm-prover/prover/src/commitments/chunk_exe_rv32_commit.rs (1)

1-4: Commitment mapping via symlink is accurate
The relative path ../../../build/chunk-circuit/chunk_exe_rv32_commit.rs points to the generated constants as intended, and the COMMIT array signature aligns with other circuit commitment modules.

common/types-rs/bundle/src/lib.rs (1)

1-2: Public re-exports of witness types are clear
Introducing mod witness and re-exporting ArchivedBundleWitness and BundleWitness provides a clean API. Confirm that witness.rs includes adequate documentation for these types.

rust-toolchain (1)

1-4: Rust toolchain configuration is properly specified
Pinning to nightly-2025-02-14 with the specified targets and components supports the multi-crate workspace build. Verify in your CI and Docker setup that this nightly release is available and matches your environment requirements.

zkvm-prover/prover/src/commitments/chunk_exe_commit.rs (1)

1-4: Commitment symlink and array declaration are correct
The symlink to ../../../build/chunk-circuit/chunk_exe_commit.rs accurately references the generated commitment constants, and the COMMIT array follows the expected format for chunk execution commitments.

zkvm-prover/prover/src/commitments/chunk_leaf_commit.rs (1)

1-1: Symlink inclusion looks correct.
The new file is a symbolic link pointing to the generated build artifact at build/chunk-circuit/chunk_leaf_commit.rs, following the same pattern as other commitment constants in this directory.

zkvm-prover/prover/src/commitments/bundle_leaf_commit.rs (1)

1-1: Symlink inclusion looks correct.
This file mirrors the build-generated bundle_leaf_commit.rs under zkvm-prover/build/bundle-circuit/, making the bundle leaf commitment available in the prover crate.

zkvm-prover/build/bundle-circuit/bundle_leaf_commit.rs (1)

1-4: Auto-generated commitment constant is in place.
The #![cfg_attr(rustfmt, rustfmt_skip)] attribute and // Generated by crates/build-guest. DO NOT EDIT! notice are correct for an auto-generated file. The COMMIT array matches the expected format.

zkvm-prover/integration/testdata/phase1/tasks/README.md (1)

4-5: List items are clear.
The test names are properly specified and formatted.

zkvm-prover/build/bundle-circuit/bundle_exe_euclidv1_commit.rs (1)

1-4: Approve auto-generated commitment constant
This file is generated by crates/build-guest and defines the EuclidV1 bundle executable COMMIT constant. No manual edits are required.

zkvm-prover/build/batch-circuit/batch_exe_commit.rs (1)

1-4: Approve auto-generated commitment constant
This file is generated by crates/build-guest and defines the batch executable COMMIT constant. No manual edits are required.

zkvm-prover/build/batch-circuit/batch_leaf_commit.rs (1)

1-4: Approve auto-generated commitment constant
This file is generated by crates/build-guest and defines the batch leaf circuit COMMIT constant. No manual edits are required.

zkvm-prover/build/chunk-circuit/chunk_exe_commit.rs (1)

1-4: Approve auto-generated commitment constant
This file is generated by crates/build-guest and defines the chunk executable COMMIT constant. No manual edits are required.

zkvm-prover/bin/src/main.rs (1)

6-9: Module declarations and local imports are well organized
External crates are imported first, followed by local module declarations (mod prover;, mod types;, mod zk_circuits_handler;), and then the use prover::{LocalProver, LocalProverConfig} statement. This ordering enhances clarity by separating external dependencies from internal modules.

common/types-rs/base/src/lib.rs (1)

1-2: Expose core modules from the base crate
The addition of pub mod public_inputs; and pub mod utils; correctly makes these modules available to downstream crates in the workspace, supporting the new aggregation, batch, bundle, and chunk circuit implementations.

zkvm-prover/build/chunk-circuit/chunk_exe_rv32_commit.rs (1)

1-4: Auto-generated execution commit constants are correct
The rustfmt_skip directive, generation comment, and the COMMIT array definition conform to the build-guest tooling expectations. No manual changes are required in this file.

zkvm-prover/build/chunk-circuit/chunk_leaf_commit.rs (1)

1-4: Auto-generated leaf commit constants are correct
This file follows the same format as the execution commit file: it skips formatting, includes the generation warning, and defines the COMMIT constant. Everything aligns with the generator’s output.

.dockerignore (2)

1-6: Ignoring version control and documentation artifacts
Patterns for .github, .gitignore, .dockerignore, .output, and docs will effectively reduce the Docker build context size by excluding unnecessary files.


10-16: Ignoring build output directories
Patterns for .openvm-clippy and target directories correctly prevent compiled artifacts and local build artifacts from being sent to Docker, optimizing the context.

zkvm-prover/build/bundle-circuit/bundle_exe_commit.rs (1)

1-4: Auto-generated commit constant.
This file correctly defines the COMMIT array and uses the rustfmt_skip directive to avoid manual reformatting. As it’s machine-generated, no manual edits are needed here.

common/types-rs/base/src/utils/mod.rs (1)

1-2: Re-export cryptographic hash utilities.
The hash submodule is cleanly exposed through these re-exports, providing a unified API for keccak256, keccak256_rv32, and sha256_rv32.

zkvm-prover/integration/src/testers/mod.rs (2)

1-5: Approve module declarations for testers.
Declaring batch, bundle, and chunk as public submodules centralizes the integration test helpers as intended.


7-8: Verify test data path resolution.
Using a relative path ("./testdata") may break if tests run from a different working directory. Please confirm that PATH_TESTDATA resolves correctly in CI and local runs, or consider deriving it from env!("CARGO_MANIFEST_DIR").
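
A std-only sketch of the `CARGO_MANIFEST_DIR` approach suggested above (the function name and runtime fallback are illustrative, not the crate's actual API; real code could use the compile-time `env!("CARGO_MANIFEST_DIR")` instead):

```rust
use std::path::PathBuf;

// Resolve testdata relative to the crate root when built by cargo,
// falling back to the working directory otherwise (illustrative).
fn testdata_dir() -> PathBuf {
    let root = std::env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".into());
    PathBuf::from(root).join("testdata")
}
```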

zkvm-prover/bin/src/zk_circuits_handler/euclidV2.rs (3)

3-3: Consistent import of Phase.
Importing Phase from the euclid submodule aligns with the refactored module layout. Ensure that Phase::EuclidV2 is correctly defined and re-exported in euclid.rs.


6-6: External proving SDK dependency.
The scroll_proving_sdk crate is now the source for ProofType and ProveRequest. Verify that this dependency is declared in Cargo.toml with the intended version and that no leftover internal SDK copy remains in the workspace.


9-9: Task imports are concise.
Grouping the batch, bundle, and chunk task types under a single task::{...} import improves readability. Confirm that each of these task modules is publicly exposed by the scroll_zkvm_prover crate.

common/types-rs/Cargo.toml (2)

1-8: Workspace inheritance for package metadata.
Using authors.workspace = true, edition.workspace = true, etc., ensures consistent metadata across workspace members. Please confirm that your minimum Cargo version supports workspace defaults for these fields.


10-15: Local path dependencies.
The path-based dependencies appear correct, with explicit package names matching each types crate. Verify that the directory structure corresponds to these paths and that each scroll-zkvm-circuit-input-types-* crate’s Cargo.toml declares the matching name.

zkvm-prover/build/bundle-circuit/openvm.toml (1)

1-5: FRI parameters configuration looks correct
The values for log_blowup, log_final_poly_len, num_queries, and proof_of_work_bits align with typical security/performance trade-offs for an FRI protocol.

common/types-rs/chunk/src/lib.rs (1)

1-12: Nightly feature usage & module re-exports

  • The crate enables #![feature(lazy_get)], which requires a nightly Rust toolchain. Please confirm the CI/build environment is configured accordingly or consider a stable alternative if needed.
  • The mod declarations and pub use exports create a clean façade API for the chunk types.
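
If a stable alternative is preferred, `std::sync::OnceLock` covers the common lazy-initialization case. This is a generic sketch and does not reflect how the crate actually uses `lazy_get`:

```rust
use std::sync::OnceLock;

// A value computed once on first access; later reads are cheap.
static CACHED: OnceLock<u64> = OnceLock::new();

fn cached_value() -> u64 {
    *CACHED.get_or_init(|| {
        // stand-in for an expensive derivation
        6 * 7
    })
}
```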
common/types-rs/base/Cargo.toml (1)

10-18: Dependency consistency & workspace patterns

  • The pinned versions of sha3 and sha2 ensure reproducible builds.
  • Verify that this crate is listed under members in the root Cargo.toml.
  • If needed, add default-features = false to workspace dependencies to minimize feature bloat.
common/types-rs/bundle/Cargo.toml (4)

1-9: Package configuration looks appropriate for Rust workspace structure.

The package metadata is well-configured with a descriptive name that follows the project's naming convention and appropriately uses workspace inheritance for common fields.


10-15: Dependencies configuration follows best practices.

The dependencies are properly configured with workspace references where appropriate, and the alloy-primitives dependency correctly specifies required features while disabling default features.


17-18: Path dependencies are correctly configured with explicit package names.

Good practice to explicitly specify the package names for path dependencies to avoid confusion when the directory name differs from the package name.


20-21: Features section is minimally defined.

The default features list is empty, which is appropriate if no additional features are needed by default.

common/types/message/bundle-proof-sample.json (1)

1-1: Bundle proof sample JSON structure is well-formed.

This sample file provides a complete example of a bundle proof with all required fields (metadata, proof, verification key, and git version). This will be useful for testing and documentation purposes.

common/types-rs/batch/Cargo.toml (3)

1-9: Package configuration follows the same pattern as the bundle crate.

The batch crate's configuration is consistent with the bundle crate, maintaining a uniform structure across the workspace.


10-18: Dependencies are identical to the bundle crate.

Both batch and bundle crates depend on the same set of workspace crates and local path dependencies, which suggests they have similar requirements and functionality patterns.


20-21: Features configuration matches the bundle crate.

Maintaining consistent feature configuration across related crates is good practice.

zkvm-prover/bin/src/prover.rs (4)

1-3: Updated imports for the circuit handlers.

The imports now include RequestPreHandler, RpcConfig, and specific handlers for different Euclid versions, which aligns with the refactoring goals.


26-29: Added RPC configuration to LocalProverConfig.

The rpc_config field has been added to support RPC-based pre-handling of proof requests, which will enable dynamic task construction.


52-59: Added pre_handler field to LocalProver struct.

The optional pre_handler field allows for lazy initialization of the request pre-handler, which is appropriate for this use case.


137-146: Updated constructor initializes pre_handler as None.

The new method properly initializes the new field as None, which will be lazily initialized later when needed.
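
A std-only sketch of the lazy-initialization pattern described; the real pre-handler type and its construction are more involved, so the names here are stand-ins:

```rust
struct PreHandler;

struct LocalProver {
    pre_handler: Option<PreHandler>,
}

impl LocalProver {
    fn new() -> Self {
        Self { pre_handler: None }
    }

    // Create the handler on first access, then reuse it.
    fn pre_handler(&mut self) -> &PreHandler {
        self.pre_handler.get_or_insert_with(|| PreHandler)
    }
}
```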

build/.cargo/config.toml (1)

1-6: Local path patches are correctly configured for Docker builds
The ../common/types-rs/* paths align with where common/types-rs is copied in the container. No issues detected.

zkvm-prover/build/chunk-circuit/openvm.toml (1)

1-54: Configuration file reviewed and validated
The FRI parameters and VM module settings are well-structured and syntactically correct. No changes needed.

zkvm-prover/integration/testdata/phase2/witnesses/3.json (1)

1-61: Integration test data appears valid
The JSON is well-formed and consistent with other phase2 witnesses. No schema or formatting issues found.

common/types-rs/chunk/src/public_inputs.rs (1)

9-27: Implement From trait for BlockContextV2 - implementation looks good

The implementation correctly extracts and transforms the required fields from a RecoveredBlock to create a BlockContextV2.

zkvm-prover/integration/testdata/phase2/witnesses/1.json (1)

1-61: Test data structure appears well-formed

This JSON file provides comprehensive blockchain state data for testing, including block header, transaction, and state information. The structure matches expected Ethereum block data format, which is appropriate for zkVM prover integration tests.

zkvm-prover/bin/src/types.rs (3)

1-1: Appropriate import for hash primitive

The use of alloy::primitives::B256 aligns with the refactoring plan to use more modular primitives.


12-13: Good type alias for hash primitive

Creating a CommonHash type alias improves code readability and makes future type changes easier.


14-19: New ChunkTaskDetail struct aligns with Go counterpart

The new struct correctly matches its Go counterpart from common/types/message/message.go, with corresponding fields for block hashes, previous message queue hash, and fork name. This consistency ensures proper interoperability between Rust and Go components.
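
An illustrative sketch of the struct shape described above; the real definition uses `B256` for hashes and derives serde traits to match the Go JSON encoding, and the field names here are assumptions:

```rust
#[derive(Debug, Clone, PartialEq)]
struct ChunkTaskDetail {
    block_hashes: Vec<[u8; 32]>, // stand-in for CommonHash/B256
    prev_msg_queue_hash: [u8; 32],
    fork_name: String,
}
```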

zkvm-prover/integration/testdata/phase2/witnesses/5.json (2)

1-77: Test data structure appears well-formed

This JSON file provides comprehensive blockchain state data for testing, including block header, transaction with authorization list, and state information. The structure is consistent with expected Ethereum block data format, appropriate for zkVM prover integration tests.


35-46: Notable test case for authorization list

The inclusion of an authorization list suggests this test data is specifically designed to test EIP-7702 (setting EOA code via signed authorizations) or similar functionality. This is a good test case to include in the integration test suite.

zkvm-prover/integration/testdata/phase2/witnesses/2.json (1)

22-175: Validate JSON schema fields against parser expectations
The test fixture uses "transaction" as the array key, whereas many Ethereum JSON schemas use "transactions". Also, "withdrawals": null might be treated differently than an empty array. Please confirm that the integration tests and parser handle these fields correctly to avoid runtime errors.

Cargo.toml (2)

2-13: Review workspace member duplication and nesting
Including both "common/types-rs" and its subdirectories ("common/types-rs/base", etc.) can lead to nested-workspace conflicts. Consider either:

  • Removing "common/types-rs" from members and only listing individual crates, or
  • Letting "common/types-rs" act as a nested workspace and removing submembers.
    Also, the exclude = ["common/libzkp/impl", "prover"] entry should be verified—ensure "prover" points to an existing directory intended to be excluded.

33-41: Consistency check for Git dependencies and patches
You’ve pinned multiple openvm-* crates to the same rev = "a0ae88f". Confirm that this revision aligns with all desired features across those crates. In addition, the [patch.crates-io] overrides for alloy-primitives, ruint, and tiny-keccak should be validated to ensure no version conflicts at build time.

common/types-rs/batch/src/payload/mod.rs (1)

1-9: Module structure and naming look good
The pub mod v6 and pub mod v7 declarations clearly separate DA-codec versions. Constant names follow Rust conventions and the documentation is clear.

zkvm-prover/build/batch-circuit/openvm.toml (3)

1-5: FRI parameters look standard but verify their security properties

The FRI protocol parameters defined here appear to be standard values, but since these affect cryptographic security of the system, it's important to verify they provide sufficient security for your use case.

Can you confirm these FRI parameters (log_blowup=1, log_final_poly_len=0, num_queries=100, proof_of_work_bits=16) provide adequate security margins for your expected threat model? Are these values consistent with other circuits in your system?


23-29: LGTM for app_vm_config pairing and fp2 configuration

The pairing and fp2 configuration looks good. The pairing module supports BLS12_381 which is a widely used pairing-friendly curve with good security properties.


30-34: Ensure ECC curve parameters are correctly specified

The elliptic curve parameters appear to be for BLS12-381, with coefficients a=0 and b=4, which aligns with the standard definition.

Verify that these parameters match the official BLS12-381 curve specification to ensure cryptographic security.

common/types-rs/batch/src/header/mod.rs (2)

1-2: Good use of alloy_primitives for standardized types

Using alloy_primitives::B256 for hash types is a good practice for standardization and interoperability.


7-19: Well-designed trait with clear documentation

The BatchHeader trait provides a clean interface with well-documented methods. The separation of concerns is clear and the trait methods cover the essential operations needed for batch headers.
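
A minimal std-only sketch of an accessor-style trait in this spirit; the method names mirror typical batch-header operations but are assumptions, and `[u8; 32]` stands in for `B256`:

```rust
trait BatchHeaderLike {
    fn batch_index(&self) -> u64;
    fn parent_batch_hash(&self) -> [u8; 32];
    fn batch_hash(&self) -> [u8; 32];
}

struct ToyHeader {
    index: u64,
    parent: [u8; 32],
}

impl BatchHeaderLike for ToyHeader {
    fn batch_index(&self) -> u64 {
        self.index
    }
    fn parent_batch_hash(&self) -> [u8; 32] {
        self.parent
    }
    fn batch_hash(&self) -> [u8; 32] {
        // toy digest: embed the index (real code keccak-hashes the fields)
        let mut h = [0u8; 32];
        h[..8].copy_from_slice(&self.index.to_be_bytes());
        h
    }
}
```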

zkvm-prover/integration/testdata/phase2/witnesses/8.json (3)

1-25: Test data header information looks valid

The blockchain header data contains all expected fields for an Ethereum-like block with appropriate values. Chain ID 333333 suggests this is a test network.


22-42: Transaction data is properly structured for testing

The transaction data includes all required fields including cryptographic signatures. The transaction appears to be a simple value transfer (no contract interaction).


43-62: State entries use correct encoding format

The state entries use the expected prefix encoding format (likely RLP or similar compact encoding) for Ethereum state data.

zkvm-prover/integration/testdata/phase1/witnesses/12508460.json (3)

1-20: Block header information matches Scroll's mainnet configuration

The chain ID (534352) corresponds to Scroll's mainnet and the block header contains realistic values. This provides good test coverage with production-like data.


43-87: State data is extensive and well-structured

The state entries are numerous and follow the expected encoding format, providing a realistic dataset for testing. The extensive state tree will be valuable for testing proving time and memory requirements.


88-89: Consider adding code samples to this test case

Unlike the phase2 witness, this test has an empty codes array, which means no contract code is being tested.

Is this intentional? For phase1 witnesses, do you need to test contract code execution, or is this focused on a different aspect of the proving system?

zkvm-prover/integration/testdata/phase1/witnesses/12508461.json (1)

1-87: Test data file looks well-structured.

This JSON test file contains a well-formatted Ethereum-like blockchain state snapshot with appropriate header information, transaction data, and state entries. No issues found.

zkvm-prover/integration/testdata/phase2/witnesses/6.json (1)

1-77: Test data file is appropriately structured.

This JSON test file contains a well-formatted Ethereum-like blockchain state snapshot with header information, transaction data with complex authorization structures, state entries, and code entries. The file will be useful for testing zkVM prover functionality.

common/types-rs/src/lib.rs (1)

1-22: Well-structured compatibility layer for zkVM circuit types

This module effectively creates a compatibility interface for the prover by re-exporting specific types from different crates. The organization into bundle, batch, and chunk modules aligns with the hierarchical structure visible in the Go types from the relevant code snippets.

The re-exports provide a clean abstraction that should make the refactoring process smoother by maintaining a compatible interface while reorganizing the underlying implementation.

zkvm-prover/prover/src/error.rs (1)

4-46: Well-designed error handling system with comprehensive error types

The error enum follows Rust best practices with:

  • Descriptive error variants covering all expected failure modes
  • Transparent error forwarding where appropriate
  • Contextual information (paths, source errors) for better diagnostics
  • Clear error messages using thiserror's formatting capabilities

This approach will make debugging and error reporting much more robust throughout the prover system.
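
A std-only sketch of the listed design points (descriptive variants carrying contextual fields); the real crate derives `Display` via thiserror, and the variant names here are illustrative:

```rust
use std::fmt;

#[derive(Debug)]
enum ProverError {
    Setup { path: String, reason: String },
    Proving(String),
}

impl fmt::Display for ProverError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ProverError::Setup { path, reason } => {
                write!(f, "setup failed at {path}: {reason}")
            }
            ProverError::Proving(msg) => write!(f, "proving failed: {msg}"),
        }
    }
}
```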

zkvm-prover/integration/testdata/phase2/witnesses/7.json (1)

1-67: Well-structured test fixture for integration testing

This JSON file appears to be a comprehensive and well-formed test fixture representing an Ethereum blockchain state with a single transaction. It contains all necessary components:

  • Block header with complete fields
  • Transaction with signature details
  • Pre-state root
  • Encoded blockchain states
  • Contract bytecode

The data is realistic and serves as an essential test case for the zkVM prover integration tests.

zkvm-prover/build-guest.sh (1)

1-16: Script correctly configures environment and builds Docker image

The script properly handles environment variables, sources configuration from .env when available, and sets sensible defaults for BUILD_STAGES. The Docker build and run commands are correctly structured with platform targeting.

common/types-rs/bundle/src/witness.rs (1)

1-14: Well-structured BundleWitness definition with appropriate serialization

The BundleWitness struct is well-defined with clear documentation and appropriate fields for batch proofs and their corresponding public inputs. The rkyv serialization traits are correctly derived for zero-copy operations.

common/types-rs/batch/src/lib.rs (1)

1-17: Well-organized module structure with clean public API

The module follows good Rust practices by organizing related types into submodules and selectively re-exporting them. The versioned types (v6 and v7) are clearly distinguished, and constants are appropriately exposed. This structure provides a clean and intuitive API for users of this crate.

zkvm-prover/integration/testdata/phase2/witnesses/4.json (1)

1-75: Test data structure looks well-formed and comprehensive

This JSON witness file provides a complete blockchain state sample for testing, including block header, transaction details, cryptographic elements, and state nodes. The structure follows the expected format for a zkVM witness.

zkvm-prover/bin/Makefile (1)

3-7: Good handling of make version differences

The conditional handling of the grep command based on MAKE_VERSION ensures compatibility with different make versions by properly escaping the # character.

zkvm-prover/prover/src/lib.rs (1)

5-17: LGTM! Well-organized module structure

The module structure is clean and follows good organization principles. Public re-exports are clearly grouped by their source modules, making the API easy to understand. The prover types are comprehensively exposed, supporting both EuclidV1 and EuclidV2 variants.

zkvm-prover/prover/src/prover/batch.rs (2)

1-6: LGTM! Clean imports structure

The imports are well-organized, clearly separating the different types of dependencies with appropriate use statements.


11-39: LGTM! Clean implementation of ProverType trait

The implementation follows a consistent pattern and provides all the required constants and associated types. The metadata_with_prechecks method correctly extracts the batch hash from the batch header.

zkvm-prover/integration/tests/batch_circuit.rs (3)

1-8: LGTM! Well-organized imports

The imports are well-structured, clearly grouping the different components required for testing.


10-20: LGTM! Well-structured test functions

The test functions are well-organized, each focusing on a specific aspect of batch proving and verification. The error handling using eyre::Result is consistent, and the setup and teardown patterns are clean.

Also applies to: 22-32, 34-41, 43-53


55-84: LGTM! Well-crafted conditional test for hash invariance

The conditional test for verifying batch hash invariants is well-designed. It elegantly demonstrates that different chunk packings produce the same batch hash, which is an important property for the batch system. The use of feature flags ensures the test only runs when the relevant feature is enabled.
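
A toy illustration of the invariant the test checks: a digest over the flattened payload does not depend on chunk boundaries. The real test compares batch hashes, not `DefaultHasher` output:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

fn packing_digest(chunks: &[&[u8]]) -> u64 {
    let mut h = DefaultHasher::new();
    for chunk in chunks {
        h.write(chunk); // bytes are fed in order, boundaries vanish
    }
    h.finish()
}
```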

common/types-rs/base/src/public_inputs/batch.rs (5)

1-6: Imports look good.

The imports are clean and appropriate for the functionality implemented in this file.


8-44: Well-structured BatchInfo definition.

The BatchInfo struct is well-documented with clear field descriptions. The derive macros for both rkyv and serde are appropriate for a type that will need serialization/deserialization capabilities.


46-59: Conversion implementation from archived form is complete.

The conversion from ArchivedBatchInfo to BatchInfo correctly handles all fields.


114-115: Type alias provides good semantic clarity.

The VersionedBatchInfo type alias clearly communicates the purpose of the tuple combining BatchInfo with a ForkName.


117-123: Implementation of pi_hash_by_fork is clean and complete.

The fork-specific hash selection is implemented correctly with proper matching on all fork names.

zkvm-prover/bin/Cargo.toml (2)

1-7: Package metadata is appropriate.

The package name and version are well-defined, and using workspace edition inheritance is good practice.


21-31: Dependencies look appropriate.

These dependencies are well-organized with necessary features enabled.

common/types-rs/batch/src/witness.rs (3)

1-4: Imports are well-organized.

The imports clearly show the dependencies on other modules in the workspace.


20-41: BatchWitness struct is well-structured.

The BatchWitness struct is well-documented with clear field descriptions, and it includes all necessary components for batch circuit verification.


43-57: ProofCarryingWitness implementation looks correct.

The implementation properly extracts and converts the archived chunk proofs into owned AggregationInput instances.
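
A std-only sketch of the archived-to-owned conversion pattern the implementation follows: iterate borrowed entries and build owned values. The types here are stand-ins for the rkyv archived proofs:

```rust
struct ArchivedProof {
    public_values: Vec<u8>,
}

struct AggregationInput {
    public_values: Vec<u8>,
}

fn get_proofs(archived: &[ArchivedProof]) -> Vec<AggregationInput> {
    archived
        .iter()
        .map(|p| AggregationInput { public_values: p.public_values.clone() })
        .collect()
}
```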

common/types-rs/base/src/public_inputs.rs (4)

16-34: ForkName enum is well-defined.

The ForkName enum is well-structured with appropriate derive implementations for serialization and comparison.


36-43: Conversion from ArchivedForkName is correct.

The implementation properly handles all variants of the enum.


66-70: MultiVersionPublicInputs trait definition is appropriate.

The trait extends PublicInputs with fork-aware methods for hashing and validation.


72-81: Implementation of PublicInputs for tuples is correct.

The blanket implementation for tuples properly delegates to the fork-specific methods while ensuring fork consistency.
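
A std-only sketch of the blanket-impl pattern described: the base trait is implemented for `(T, ForkName)` tuples by delegating to the fork-aware method. Trait and method names are illustrative and the toy "hashes" stand in for keccak digests:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum ForkName {
    EuclidV1,
    EuclidV2,
}

trait MultiVersionPublicInputs {
    fn pi_hash_by_fork(&self, fork: ForkName) -> u64;
}

trait PublicInputs {
    fn pi_hash(&self) -> u64;
}

impl<T: MultiVersionPublicInputs> PublicInputs for (T, ForkName) {
    fn pi_hash(&self) -> u64 {
        self.0.pi_hash_by_fork(self.1)
    }
}

struct Info(u64);

impl MultiVersionPublicInputs for Info {
    fn pi_hash_by_fork(&self, fork: ForkName) -> u64 {
        // toy digests; the real impl hashes different field layouts per fork
        match fork {
            ForkName::EuclidV1 => self.0 + 1,
            ForkName::EuclidV2 => self.0 + 2,
        }
    }
}
```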

zkvm-prover/Makefile (1)

48-51: 🛠️ Refactor suggestion

Add error handling to chunk proof generation pattern rule

The pattern rule for generating chunk proof JSON files doesn't check if the commands succeed. Without proper error handling, subsequent steps might proceed even if prior steps fail.

-$(TESTDATA_PATH)/proofs/chunk-%.json:
-	@OUTPUT_DIR=$(realpath $(TESTDATA_PATH)/proofs) $(MAKE) test-single-chunk
-	cp -f $(TESTDATA_PATH)/proofs/chunk/proofs/*.json $(TESTDATA_PATH)/proofs
+$(TESTDATA_PATH)/proofs/chunk-%.json:
+	@OUTPUT_DIR=$(realpath $(TESTDATA_PATH)/proofs) $(MAKE) test-single-chunk || (echo "Failed to run test-single-chunk" && exit 1)
+	@cp -f $(TESTDATA_PATH)/proofs/chunk/proofs/*.json $(TESTDATA_PATH)/proofs || (echo "Failed to copy proof files" && exit 1)

Likely an incorrect or invalid review comment.

common/types-rs/chunk/src/execute.rs (1)

70-76: Hard-fork override silently diverges from MAINNET specs

Manually setting EuclidV2 -> Never for legacy chunks modifies the cloned hard-fork table but does not mark in the returned ChunkInfo that the fork is disabled.
Down-stream components may assume EuclidV2 rules after looking only at fork_name.

Make sure the fork toggle is reflected either in the public inputs or in an explicit comment explaining why a mismatch is harmless.

common/types-rs/batch/src/payload/v6.rs (1)

171-174:

❌ Incorrect review comment

Ensure equal number of chunks & digests

zip_eq stops at the shorter iterator; differing lengths silently pass.
Add an explicit equality check before the loop:

+        assert_eq!(
+            self.chunk_data_digests.len(),
+            chunk_infos.len(),
+            "mismatched chunk count: digests={} infos={}",
+            self.chunk_data_digests.len(),
+            chunk_infos.len()
+        );

zip_eq already ensures equal lengths
The zip_eq method from itertools panics if the two iterators have different lengths, so there’s no silent truncation. An explicit assert_eq!(len, len) is only needed if you want a custom panic message.

  • File: common/types-rs/batch/src/payload/v6.rs
  • Lines: 171–174

Optional (for a custom panic message): replace zip_eq with a manual length check and standard zip:

- for (&chunk_data_digest, chunk_info) in self.chunk_data_digests.iter().zip_eq(chunk_infos) {
-     assert_eq!(chunk_data_digest, chunk_info.tx_data_digest)
- }
+ assert_eq!(
+     self.chunk_data_digests.len(),
+     chunk_infos.len(),
+     "mismatched chunk count: digests={} infos={}",
+     self.chunk_data_digests.len(),
+     chunk_infos.len()
+ );
+ for (&chunk_data_digest, chunk_info) in self.chunk_data_digests.iter().zip(chunk_infos) {
+     assert_eq!(chunk_data_digest, chunk_info.tx_data_digest);
+ }

Likely an incorrect or invalid review comment.
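
A std-only demonstration of the distinction: plain `zip` stops at the shorter iterator, so an explicit length check is needed when itertools' panicking `zip_eq` is not used:

```rust
fn check_digests(digests: &[u64], infos: &[u64]) {
    assert_eq!(digests.len(), infos.len(), "mismatched chunk count");
    for (d, i) in digests.iter().zip(infos.iter()) {
        assert_eq!(d, i);
    }
}
```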

common/types-rs/batch/src/payload/v7.rs (6)

45-55: Decode unpadded_size explicitly as a 24-bit integer

(blob_bytes[1] as usize) * 256 * 256 obscures the intended big-endian 24-bit encoding, and multi-term reconstruction is easy to get wrong.
Safer & clearer:

let unpadded_size =
    u32::from_be_bytes([0, blob_bytes[1], blob_bytes[2], blob_bytes[3]]) as usize;

That also documents the 24-bit integer encoding.
[ suggest_nitpick ]


57-59: O(n) zero-padding check is expensive

The loop asserts every remaining byte is zero, which costs one assert per byte over up to ~128 KiB for every blob.
debug_assert!(blob_bytes[5 + unpadded_size..].iter().all(|b| *b == 0));
in debug builds, or a single memchr/find in release, avoids the per-byte overhead.
[ suggest_optional_refactor ]


83-88: Avoid the extra Vec allocation in challenge_digest

keccak256 already accepts a byte slice.
You can hash two fixed 32-byte values without allocating:

let envelope_hash = keccak256(&self.envelope_bytes);
let mut buf = [0u8; 64];
buf[..32].copy_from_slice(&envelope_hash);
buf[32..].copy_from_slice(&versioned_hash.0);
keccak256(&buf)

This is allocation-free and 2× faster in benchmarks.
[ suggest_optional_refactor ]


134-141: Misleading bitwise test – use boolean comparison

envelope.is_encoded & 1 == 1 looks like a bit-mask but is_encoded is already guaranteed to be 0|1.
A plain comparison is clearer:

if envelope.is_encoded == 1 { ... }

[ suggest_nitpick ]


134-141: Unchecked decompression may allow decompression-bomb DoS

vm_zstd::process is called without an upper bound on the decoded size.
An attacker could craft a tiny blob that expands to unbounded memory and crash the node.

Expose a constant such as MAX_DECOMPRESSED_BYTES (e.g. 1 MiB) and reject payloads whose
decoded_data.len() exceeds it.
[ raise_critical_issue ]


236-242: index variable can be eliminated

Tracking an explicit start/end cursor makes the digest loop shorter and branch-free (chunk lengths vary, so chunks_exact does not apply):

let mut start = 0;
for chunk_info in chunk_infos {
    let end = start + chunk_info.tx_data_length as usize;
    assert_eq!(keccak256(&self.tx_data[start..end]), chunk_info.tx_data_digest);
    start = end;
}

Tiny but improves readability and avoids the mutable counter.
[ suggest_nitpick ]

zkvm-prover/integration/testdata/verifier/verifier.sol (3)

15-32: BN254 coordinate reduction uses wrong modulus

validate_ec_point checks x,y < p, but every mstore above reduces
the calldata words with mod(..., f_q) where

f_q = 0x30644e72e1…0000001   // group order r
f_p = 0x30644e72e1…d87cfd47  // prime field p

G1/G2 coordinates must be reduced modulo p, not r.
Reducing by the smaller r silently maps many valid points to invalid ones,
causing valid proofs to fail on-chain.

Please replace all mod(calldataload(..), f_q) with mod(..., f_p) (or remove
the reduction entirely – calldataload already yields < 2²⁵⁶).
[ raise_critical_issue ]


82-90: Gas / bytecode size is close to the 24 kB contract limit

verifier.sol inlines over 1,600 lines of Yul and will compile to roughly 28–30 kB of bytecode
(see similar Halo2 generated verifiers). Deployment will revert on chains with the
24 kB EIP-170 limit (e.g. mainnet).

Consider:

  1. Splitting into a library & proxy (Diamond / Minimal Proxy).
  2. Compressing constants into calldata & decompressing at runtime.
  3. Using the new EVM object format (EOF) once supported.

Worth assessing before main-net deployment.
[ request_verification ]


569-576: Unchecked return value from precompile 0x05

You already test staticcall(..., 0x5 , …), but do not verify that the
returned point is on‐curve (only that the precompile succeeded).

Add a validate_ec_point call on the returned (x,y) to prevent accepting
0,0 – a known edge-case on some clients.
[ suggest_nitpick ]

common/types-rs/base/src/public_inputs/bundle.rs (3)

49-62: pi_hash_* build an intermediate Vec – avoid unnecessary allocation

As with the batch payload, hand the composed byte arrays directly to the hasher:

let mut buf = [0u8; 32 * 7 + 8 + 4];
let mut cursor = 0;
buf[cursor..][..8].copy_from_slice(&self.chain_id.to_be_bytes()); cursor += 8;
// … fill remaining fields …
keccak256(&buf[..cursor])

This alloc-free path yields ~25 % throughput improvement in micro-benchmarks.
[ suggest_optional_refactor ]


92-98: Duplicate fork dispatch – can reuse pi_hash()

pi_hash and pi_hash_by_fork perform the same match.
You can implement one in terms of the other to avoid divergence:

fn pi_hash(&self, fork: ForkName) -> B256 { self.pi_hash_by_fork(fork) }

reduces maintenance overhead when a new fork is added.
[ suggest_nitpick ]


108-110: validate() is unreachable!() – prefer panic! with an explanation

unreachable!() is a safe macro that panics, so the behaviour is well-defined (only the unsafe hint::unreachable_unchecked() is UB), but its default message gives no context when it fires.
Use panic!("bundle is terminal – validation should never be called") to make
the failure self-explanatory.
[ suggest_nitpick ]

zkvm-prover/integration/src/testers/chunk.rs (1)

77-86: Hard-coded message-queue hash may hide real bugs

For euclidv2 the task sets prev_msg_queue_hash to 0x01..01. Unless this value is dictated by the test vectors, consider deriving it from the previous chunk or setting it to B256::ZERO. Using a fabricated non-zero constant can mask mismatches between witness data and prover expectations.

Comment on lines +7 to +8
Dockerfile
Dockerfile.backup

⚠️ Potential issue

Critical: Dockerfile pattern will exclude your Dockerfiles
The entries Dockerfile and Dockerfile.backup match any file named “Dockerfile” in the context (not just backups), causing the actual Dockerfiles to be ignored and the build to fail. Please remove or narrow the Dockerfile pattern—for example, only ignore Dockerfile.backup or explicitly re-include the needed Dockerfile via a negation rule.

🤖 Prompt for AI Agents
In the .dockerignore file at lines 7 to 8, the pattern "Dockerfile" is excluding
all Dockerfiles from the build context, causing build failures. Remove the
generic "Dockerfile" entry and instead only ignore specific backup files like
"Dockerfile.backup". If you need to exclude backups but keep the main
Dockerfile, use a negation pattern to explicitly include the main Dockerfile
while ignoring backups.
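A concrete .dockerignore illustrating the narrowed pattern (assuming the build needs a Dockerfile such as `zkvm-prover/Dockerfile`; adjust paths to the actual context):

```
# .dockerignore – ignore only the backup, keep the real Dockerfiles
Dockerfile.backup

# …or, if a broad pattern must remain, re-include via negation
# (later rules win, so the `!` rules must come after the broad one):
# Dockerfile*
# !Dockerfile
# !zkvm-prover/Dockerfile
```

Note that .dockerignore rules are evaluated in order, with the last matching rule winning, so a negation placed before the broad pattern has no effect.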

Comment on lines +1 to +7
#!/bin/bash

# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi

🛠️ Refactor suggestion

Add strict mode and provide usage help
Enable strict error detection and cleaner version validation:

- #!/bin/bash
+ #!/usr/bin/env bash
+ set -euo pipefail
+
- if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
-   echo "SCROLL_ZKVM_VERSION not set"
-   exit 1
- fi
+ : "${SCROLL_ZKVM_VERSION:?Usage: SCROLL_ZKVM_VERSION=<version> $0}"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
#!/bin/bash
# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi
#!/usr/bin/env bash
set -euo pipefail
# release version
: "${SCROLL_ZKVM_VERSION:?Usage: SCROLL_ZKVM_VERSION=<version> $0}"
🤖 Prompt for AI Agents
In zkvm-prover/download-release.sh at lines 1 to 7, add strict mode by including
'set -euo pipefail' at the top to enable strict error detection. Also, add a
usage help message that explains how to set the SCROLL_ZKVM_VERSION environment
variable if it is not set, before exiting with an error.
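For reference, the `:?` parameter expansion used above both validates and documents the variable in one line – a minimal sketch (the default version string is illustrative only, so the sketch runs standalone):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative default so the sketch runs standalone; the real script should
# NOT default the version.
SCROLL_ZKVM_VERSION="${SCROLL_ZKVM_VERSION:-v0.1.0}"

# `:?` aborts with the usage message when the variable is unset or empty.
: "${SCROLL_ZKVM_VERSION:?Usage: SCROLL_ZKVM_VERSION=<version> $0}"
echo "downloading release ${SCROLL_ZKVM_VERSION}"
```

Running with the variable unset (and the default removed) exits non-zero and prints the usage message, which is exactly the guard behavior the suggestion encodes.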

Comment on lines +9 to +13
ADD common/types-rs /app/common/types-rs
ADD zkvm-prover /app/zkvm-prover
ADD Cargo.* /app/
ADD rust-toolchain /app/
ADD build/.cargo /app/zkvm-circuits/.cargo

🛠️ Refactor suggestion

Use COPY instead of ADD for local files
ADD has extra extraction semantics; prefer COPY for clarity:

- ADD common/types-rs /app/common/types-rs
- ADD zkvm-prover /app/zkvm-prover
- ADD Cargo.* /app/
- ADD rust-toolchain /app/
- ADD build/.cargo /app/zkvm-circuits/.cargo
+ COPY common/types-rs /app/common/types-rs
+ COPY zkvm-prover /app/zkvm-prover
+ COPY Cargo.toml Cargo.lock /app/
+ COPY rust-toolchain /app/
+ COPY build/.cargo /app/zkvm-circuits/.cargo

[Caveat: ensure Cargo.lock is present if you rely on it.]

🧰 Tools
🪛 Hadolint (2.12.0)

[error] 9-9: Use COPY instead of ADD for files and folders

(DL3020)


[error] 10-10: Use COPY instead of ADD for files and folders

(DL3020)


[error] 11-11: Use COPY instead of ADD for files and folders

(DL3020)


[error] 12-12: Use COPY instead of ADD for files and folders

(DL3020)


[error] 13-13: Use COPY instead of ADD for files and folders

(DL3020)

🤖 Prompt for AI Agents
In zkvm-prover/Dockerfile lines 9 to 13, replace all ADD instructions with COPY
since these are local files and COPY is clearer and more appropriate. Change
each ADD command to COPY without altering the source or destination paths.
Ensure that if you rely on Cargo.lock, it is present in the context to be copied
correctly.

Comment on lines +12 to +18
rkyv.workspace = true
sbv-trie = { workspace = true }
sbv-core = { workspace = true }
sbv-primitives = { workspace = true }
sbv-kv = { workspace = true }
serde.workspace = true
itertools.workspace = true

🛠️ Refactor suggestion

Unify dependency syntax for workspace crates
The lines

rkyv.workspace = true
serde.workspace = true
itertools.workspace = true

use TOML dotted-key syntax, which Cargo does accept for workspace dependencies; the issue is purely stylistic, since the neighbouring entries use the `{ workspace = true }` table form. Pick one style for readability:

-[dependencies]
-rkyv.workspace = true
-serde.workspace = true
-itertools.workspace = true
+[dependencies]
+rkyv = { workspace = true }
+serde = { workspace = true }
+itertools = { workspace = true }
🤖 Prompt for AI Agents
In common/types-rs/chunk/Cargo.toml around lines 12 to 18, the dependencies
rkyv, serde, and itertools use the dotted-key form "rkyv.workspace = true",
which Cargo accepts but which mixes styles with the neighbouring
"{ workspace = true }" table entries. For consistency, rewrite each as
"rkyv = { workspace = true }", and do the same for serde and itertools.

Comment on lines +24 to +25
types-base = { path = "../base", package = "scroll-zkvm-circuit-input-types-base"}


🛠️ Refactor suggestion

Use consistent crate name for base types dependency
Renaming scroll-zkvm-circuit-input-types-base to types-base may be confusing. It’s clearer to refer to the crate by its actual name:

-types-base = { path = "../base", package = "scroll-zkvm-circuit-input-types-base" }
+scroll-zkvm-circuit-input-types-base = { path = "../base" }

This avoids aliasing and makes imports in code match the crate name.

🤖 Prompt for AI Agents
In common/types-rs/chunk/Cargo.toml at lines 24 to 25, the dependency on the
base types crate is aliased as "types-base" instead of using its actual crate
name "scroll-zkvm-circuit-input-types-base". Remove the alias by changing the
dependency declaration to use the actual crate name without the "package" field,
ensuring the crate name in Cargo.toml matches the real crate name for clarity
and consistency in imports.

Comment on lines +63 to +69
let dir_testrun = if let Ok(env_dir) = std::env::var(ENV_OUTPUT_DIR) {
let dir = Path::new(&env_dir);
if std::fs::exists(dir).is_err() {
tracing::error!("OUTPUT_DIR={dir:?} not found");
process::exit(1);
}
dir.into()

⚠️ Potential issue

The missing-directory check never fires – is_err() tests the wrong condition

std::fs::exists (stable since Rust 1.81) returns Ok(false) for a path that simply does not exist; it returns Err only when existence cannot be determined (e.g. a permission failure). So std::fs::exists(dir).is_err() lets a missing OUTPUT_DIR pass straight through.

-            let dir = Path::new(&env_dir);
-            if std::fs::exists(dir).is_err() {
+            let dir = Path::new(&env_dir);
+            // `is_dir` is false for missing paths and non-directories alike.
+            if !dir.is_dir() {
                 tracing::error!("OUTPUT_DIR={dir:?} not found");
                 process::exit(1);
             }
📝 Committable suggestion

Suggested change
let dir_testrun = if let Ok(env_dir) = std::env::var(ENV_OUTPUT_DIR) {
let dir = Path::new(&env_dir);
if std::fs::exists(dir).is_err() {
tracing::error!("OUTPUT_DIR={dir:?} not found");
process::exit(1);
}
dir.into()
let dir_testrun = if let Ok(env_dir) = std::env::var(ENV_OUTPUT_DIR) {
let dir = Path::new(&env_dir);
// `is_dir` is false for missing paths and non-directories alike.
if !dir.is_dir() {
tracing::error!("OUTPUT_DIR={dir:?} not found");
process::exit(1);
}
dir.into()
🤖 Prompt for AI Agents
In zkvm-prover/integration/src/lib.rs around lines 63 to 69, the check
std::fs::exists(dir).is_err() only fails on IO errors, so a directory that is
merely missing passes the check. Replace it with !dir.is_dir() (or
!matches!(std::fs::exists(dir), Ok(true))) so a missing OUTPUT_DIR is reported
and the process exits as intended.
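For reference, a minimal sketch of what `std::fs::exists` (stabilized in Rust 1.81) and `Path::is_dir` return for a missing path (the path itself is arbitrary):

```rust
use std::fs;
use std::path::Path;

fn main() {
    let missing = Path::new("/definitely/not/a/real/dir");

    // Stable since Rust 1.81: Ok(false) for a missing path; Err only when
    // existence cannot be determined (e.g. a permission failure).
    assert_eq!(fs::exists(missing).ok(), Some(false));

    // Hence `.is_err()` does not flag a merely-missing directory…
    assert!(fs::exists(missing).is_ok());

    // …while `Path::is_dir` expresses the intended check directly.
    assert!(!missing.is_dir());

    println!("missing path handled correctly");
}
```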

Comment on lines +202 to +204
let version = 7u32;
let heading = compressed_payload.len() as u32 + (version << 24);


⚠️ Potential issue

Potential payload length truncation in heading calculation

heading stores the payload length in the lower 24 bits. If the compressed payload ever exceeds 16 MiB, the length will overflow and wrap, corrupting the blob format.

-    let heading = compressed_payload.len() as u32 + (version << 24);
+    assert!(
+        compressed_payload.len() < (1 << 24),
+        "compressed payload too large for 24-bit length field"
+    );
+    let heading = (compressed_payload.len() as u32) | (version << 24);

(The bit-wise OR better documents the layout, too.)

📝 Committable suggestion


Suggested change
let version = 7u32;
let heading = compressed_payload.len() as u32 + (version << 24);
let version = 7u32;
assert!(
compressed_payload.len() < (1 << 24),
"compressed payload too large for 24-bit length field"
);
let heading = (compressed_payload.len() as u32) | (version << 24);
🤖 Prompt for AI Agents
In zkvm-prover/integration/src/utils/mod.rs around lines 202 to 204, the
calculation of `heading` uses addition which can cause payload length truncation
if the compressed payload exceeds 16 MiB, leading to overflow in the lower 24
bits. Replace the addition with a bitwise OR operation between the payload
length and the version shifted left by 24 bits to correctly combine these values
without overflow and better document the bit layout.
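The packed layout can be sanity-checked with a small sketch (`pack_heading` is a hypothetical helper, not code from the PR):

```rust
/// Packs a 1-byte version and a 24-bit payload length into a single u32.
fn pack_heading(version: u32, len: usize) -> u32 {
    assert!(
        len < (1 << 24),
        "compressed payload too large for 24-bit length field"
    );
    (len as u32) | (version << 24)
}

fn main() {
    let heading = pack_heading(7, 0x0012_3456);
    assert_eq!(heading >> 24, 7); // version survives in the top byte
    assert_eq!(heading & 0x00ff_ffff, 0x0012_3456); // length in the low 24 bits
    println!("heading = {heading:#010x}");
}
```

With `+` instead of `|`, a length of exactly `1 << 24` would silently bump the version byte; the assert plus bitwise OR rules out both failure modes.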

Comment on lines +170 to +190
pub fn pi_hash_euclidv2(&self) -> B256 {
keccak256(
std::iter::empty()
.chain(&self.chain_id.to_be_bytes())
.chain(self.prev_state_root.as_slice())
.chain(self.post_state_root.as_slice())
.chain(self.withdraw_root.as_slice())
.chain(self.tx_data_digest.as_slice())
.chain(self.prev_msg_queue_hash.as_slice())
.chain(self.post_msg_queue_hash.as_slice())
.chain(&self.initial_block_number.to_be_bytes())
.chain(
self.block_ctxs
.iter()
.flat_map(|block_ctx| block_ctx.to_bytes())
.collect::<Vec<u8>>()
.as_slice(),
)
.cloned()
.collect::<Vec<u8>>(),
)

🛠️ Refactor suggestion

pi_hash_euclidv2() allocates 3× – can be reduced to 1×.

Inside the chain(...) you first build a Vec<u8> for all
block_ctxs, then borrow it (as_slice()), then again cloned() +
collect into another Vec. That’s two copies of the same data.

A cheaper approach – note that `.cloned()` must move before the chained
`flat_map`, whose items are already `u8` by value:

-                .chain(
-                    self.block_ctxs
-                        .iter()
-                        .flat_map(|block_ctx| block_ctx.to_bytes())
-                        .collect::<Vec<u8>>()
-                        .as_slice(),
-                )
-                .cloned()
-                .collect::<Vec<u8>>(),
+                .cloned()
+                .chain(
+                    self.block_ctxs
+                        .iter()
+                        .flat_map(|b| b.to_bytes()),
+                )
+                .collect::<Vec<u8>>(),

(or refactor to feed the iterator directly into the hasher).

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In common/types-rs/base/src/public_inputs/chunk.rs around lines 170 to 190, the
pi_hash_euclidv2 function creates multiple unnecessary allocations by first
collecting block_ctxs into a Vec<u8>, then borrowing it as a slice, and finally
cloning and collecting again into another Vec<u8>. To fix this, refactor the
code to avoid intermediate Vec allocations by directly chaining the iterators
and feeding the bytes into keccak256 without collecting into temporary vectors,
thereby reducing allocations from three to one.
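The single-allocation shape can be demonstrated with plain slices (the block and prefix values here are arbitrary stand-ins for `block_ctxs` and the fixed-width fields):

```rust
fn main() {
    // Stand-ins: two 4-byte "block contexts" and a 2-byte borrowed prefix.
    let block_ctxs: Vec<[u8; 4]> = vec![[1, 2, 3, 4], [5, 6, 7, 8]];
    let prefix: [u8; 2] = [0xAA, 0xBB];

    // `.cloned()` turns the borrowed prefix bytes into owned `u8`s, after
    // which the by-value bytes from `flat_map` chain on directly; the only
    // allocation is the final `collect`.
    let bytes: Vec<u8> = prefix
        .iter()
        .cloned()
        .chain(block_ctxs.iter().flat_map(|b| b.iter().copied()))
        .collect();

    assert_eq!(bytes, vec![0xAA, 0xBB, 1, 2, 3, 4, 5, 6, 7, 8]);
    println!("single allocation: {} bytes", bytes.len());
}
```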

Comment on lines +438 to +471
let (proof, proof_base64) = {
let proof = std::iter::empty()
.chain(std::iter::repeat_n(1, 1))
.chain(std::iter::repeat_n(2, 2))
.chain(std::iter::repeat_n(3, 3))
.chain(std::iter::repeat_n(4, 4))
.chain(std::iter::repeat_n(5, 5))
.chain(std::iter::repeat_n(6, 6))
.chain(std::iter::repeat_n(7, 7))
.chain(std::iter::repeat_n(8, 8))
.chain(std::iter::repeat_n(9, 9))
.collect::<Vec<u8>>();
let proof_base64 = BASE64_STANDARD.encode(&proof);
(proof, proof_base64)
};

// 3. Instances
let (instances, instances_base64) = {
let instances = vec![
Fr::from(0x123456), // LE: [0x56, 0x34, 0x12, 0x00, 0x00, ..., 0x00]
Fr::from(0x98765432), // LE: [0x32, 0x54, 0x76, 0x98, 0x00, ..., 0x00]
];
let instances_flattened = std::iter::empty()
.chain(std::iter::repeat_n(0x00, 29))
.chain(std::iter::once(0x12))
.chain(std::iter::once(0x34))
.chain(std::iter::once(0x56))
.chain(std::iter::repeat_n(0x00, 28))
.chain(std::iter::once(0x98))
.chain(std::iter::once(0x76))
.chain(std::iter::once(0x54))
.chain(std::iter::once(0x32))
.collect::<Vec<u8>>();
let instances_base64 = BASE64_STANDARD.encode(&instances_flattened);

⚠️ Potential issue

std::iter::repeat_n requires Rust 1.82 – verify the pinned toolchain.

std::iter::repeat_n was stabilized only in Rust 1.82 (itertools has long
provided an equivalent). If the workspace's rust-toolchain pins an older
compiler, the test_dummy_proof module will not compile.

Portable fix (no new dependency, works on any toolchain):

-std::iter::repeat_n(1, 1)
+std::iter::repeat(1).take(1)

Apply the same pattern to every repeat_n call (lines 440-449, 461-469,
477-487), or alternatively bump the pinned toolchain to 1.82 or newer.

📝 Committable suggestion


Suggested change
let (proof, proof_base64) = {
let proof = std::iter::empty()
.chain(std::iter::repeat_n(1, 1))
.chain(std::iter::repeat_n(2, 2))
.chain(std::iter::repeat_n(3, 3))
.chain(std::iter::repeat_n(4, 4))
.chain(std::iter::repeat_n(5, 5))
.chain(std::iter::repeat_n(6, 6))
.chain(std::iter::repeat_n(7, 7))
.chain(std::iter::repeat_n(8, 8))
.chain(std::iter::repeat_n(9, 9))
.collect::<Vec<u8>>();
let proof_base64 = BASE64_STANDARD.encode(&proof);
(proof, proof_base64)
};
// 3. Instances
let (instances, instances_base64) = {
let instances = vec![
Fr::from(0x123456), // LE: [0x56, 0x34, 0x12, 0x00, 0x00, ..., 0x00]
Fr::from(0x98765432), // LE: [0x32, 0x54, 0x76, 0x98, 0x00, ..., 0x00]
];
let instances_flattened = std::iter::empty()
.chain(std::iter::repeat_n(0x00, 29))
.chain(std::iter::once(0x12))
.chain(std::iter::once(0x34))
.chain(std::iter::once(0x56))
.chain(std::iter::repeat_n(0x00, 28))
.chain(std::iter::once(0x98))
.chain(std::iter::once(0x76))
.chain(std::iter::once(0x54))
.chain(std::iter::once(0x32))
.collect::<Vec<u8>>();
let instances_base64 = BASE64_STANDARD.encode(&instances_flattened);
let (proof, proof_base64) = {
let proof = std::iter::empty()
.chain(std::iter::repeat(1).take(1))
.chain(std::iter::repeat(2).take(2))
.chain(std::iter::repeat(3).take(3))
.chain(std::iter::repeat(4).take(4))
.chain(std::iter::repeat(5).take(5))
.chain(std::iter::repeat(6).take(6))
.chain(std::iter::repeat(7).take(7))
.chain(std::iter::repeat(8).take(8))
.chain(std::iter::repeat(9).take(9))
.collect::<Vec<u8>>();
let proof_base64 = BASE64_STANDARD.encode(&proof);
(proof, proof_base64)
};
🤖 Prompt for AI Agents
In zkvm-prover/prover/src/proof/mod.rs around lines 438 to 471, the code uses
std::iter::repeat_n, which requires Rust 1.82 or newer. If the pinned
toolchain is older, replace each std::iter::repeat_n(value, count) call with
std::iter::repeat(value).take(count) to get the same behavior without new
dependencies; otherwise bump the toolchain. Apply the change consistently to
all repeat_n usages in the specified lines.
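A quick check of the equivalence on a 1.82+ toolchain (the values are arbitrary):

```rust
fn main() {
    // `std::iter::repeat_n(2, 3)` (stable since Rust 1.82) and the portable
    // `repeat(2).take(3)` produce identical sequences.
    let a: Vec<u8> = std::iter::repeat_n(2u8, 3).collect();
    let b: Vec<u8> = std::iter::repeat(2u8).take(3).collect();
    assert_eq!(a, b);
    assert_eq!(a, [2, 2, 2]);
    println!("equivalent");
}
```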

Comment on lines +166 to +193
pub fn public_values(&self) -> Vec<u32> {
match self {
Self::Root(root_proof) => root_proof
.public_values
.iter()
.map(|x| x.as_canonical_u32())
.collect::<Vec<u32>>(),
Self::Evm(evm_proof) => {
// The first 12 scalars are accumulators.
// The next 2 scalars are digests.
// The next 32 scalars are the public input hash.
let pi_hash_bytes = evm_proof
.instances
.iter()
.skip(14 * 32)
.take(32 * 32)
.cloned()
.collect::<Vec<u8>>();

// The 32 scalars of public input hash actually only have the LSB that is the
// meaningful byte.
pi_hash_bytes
.chunks_exact(32)
.map(|bytes32_chunk| bytes32_chunk[31] as u32)
.collect::<Vec<u32>>()
}
}
}

💡 Verification agent

🧩 Analysis chain

public_values() for EVM proofs assumes a hard-coded layout – add bounds checks & constants.

If the upstream circuit ever changes the accumulator/digest count, the
skip(14 * 32) + take(32 * 32) logic will silently yield an undersized
buffer, leading to a mismatch only at sanity_check() time.

Recommendations:

  1. Replace the magic numbers (14, 32) with named const items.
  2. Assert that instances.len() is at least ACCUMS + DIGESTS + PI_SCALARS
    before slicing.
  3. Document the layout next to the constants.

🏁 Script executed:

#!/bin/bash
set -e
# Locate the definition of EvmProof to inspect the type and length of `instances`
rg -n "struct EvmProof" -A5 -B5 --color=never

Length of output: 969


🏁 Script executed:

#!/bin/bash
set -e
# Search for the definition of `instances` in EvmProof
rg -n "instances" -C3 --color=never zkvm-prover/prover/src/proof/mod.rs

Length of output: 3797


Add named constants & precondition checks to EVM public_values()

The EVM branch of public_values() still uses hard-coded scalars/bytes (14, 32) for its skip/take, which will break silently if the upstream circuit changes. Please update as follows:

  • Define const items in zkvm-prover/prover/src/proof/mod.rs (e.g.
    const SCALAR_BYTES: usize       = 32;
    const ACCUMULATOR_SCALARS: usize = 12;
    const DIGEST_SCALARS: usize      =  2;
    const PI_HASH_SCALARS: usize     = 32;
    ) and replace skip(14 * 32) / take(32 * 32) with these constants.
  • Add a bounds assertion before slicing:
    let min_len = (ACCUMULATOR_SCALARS + DIGEST_SCALARS + PI_HASH_SCALARS) * SCALAR_BYTES;
    assert!(
        evm_proof.instances.len() >= min_len,
        "expected at least {} bytes for public inputs, got {}",
        min_len,
        evm_proof.instances.len()
    );
  • Document the layout (accumulators, digests, public-input hash) next to those constants so future changes are clearer.

This will prevent silent failures if the total number of scalars ever changes.

🤖 Prompt for AI Agents
In zkvm-prover/prover/src/proof/mod.rs around lines 166 to 193, the EVM branch
of the public_values() function uses hard-coded magic numbers for skip and take
operations on evm_proof.instances, which risks silent failures if the circuit
layout changes. Define named constants for SCALAR_BYTES, ACCUMULATOR_SCALARS,
DIGEST_SCALARS, and PI_HASH_SCALARS at the top of the file with clear
documentation of the layout. Replace the hard-coded skip(14 * 32) and take(32 *
32) with expressions using these constants. Add an assertion before slicing to
check that evm_proof.instances.len() is at least the sum of these scalars times
SCALAR_BYTES, and provide a clear error message if not. This ensures safer,
self-documenting code and prevents silent buffer underruns.
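A sketch of the recommended constants and precondition check (the constant names and the 12/2/32 layout mirror the recommendation above and should be verified against the actual circuit):

```rust
// Layout of `instances` for an EVM proof, as described in the review:
// 12 accumulator scalars, then 2 digest scalars, then 32 public-input-hash
// scalars, each serialized as 32 bytes.
const SCALAR_BYTES: usize = 32;
const ACCUMULATOR_SCALARS: usize = 12;
const DIGEST_SCALARS: usize = 2;
const PI_HASH_SCALARS: usize = 32;

fn pi_hash_bytes(instances: &[u8]) -> &[u8] {
    let skip = (ACCUMULATOR_SCALARS + DIGEST_SCALARS) * SCALAR_BYTES;
    let take = PI_HASH_SCALARS * SCALAR_BYTES;
    assert!(
        instances.len() >= skip + take,
        "expected at least {} bytes for public inputs, got {}",
        skip + take,
        instances.len()
    );
    &instances[skip..skip + take]
}

fn main() {
    // 46 scalars total = 12 accumulators + 2 digests + 32 PI-hash scalars.
    let total = (ACCUMULATOR_SCALARS + DIGEST_SCALARS + PI_HASH_SCALARS) * SCALAR_BYTES;
    let instances = vec![0u8; total];
    assert_eq!(pi_hash_bytes(&instances).len(), PI_HASH_SCALARS * SCALAR_BYTES);
    println!("bounds check ok");
}
```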

@lispc
Contributor

lispc commented May 16, 2025

@roynalnaruto can have a look

Signed-off-by: noelwei <fan@scroll.io>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
zkvm-prover/bin/src/prover.rs (1)

154-157: Consider a more resilient initialization pattern for pre_handler

The current lazy initialization works; note that when RequestPreHandler::create
fails, the `?` propagates the error and the next call simply retries creation.
If that retry-on-every-call behavior is unintended, create the handler eagerly
in the constructor, or cache the failure so it is reported once. The
`is_none()` check itself is fine.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 9c5c29e and 904b97b.

📒 Files selected for processing (3)
  • zkvm-prover/bin/config.json (1 hunks)
  • zkvm-prover/bin/src/prover.rs (5 hunks)
  • zkvm-prover/bin/src/zk_circuits_handler.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • zkvm-prover/bin/config.json
🚧 Files skipped from review as they are similar to previous changes (1)
  • zkvm-prover/bin/src/zk_circuits_handler.rs
🧰 Additional context used
🧬 Code Graph Analysis (1)
zkvm-prover/bin/src/prover.rs (1)
zkvm-prover/bin/src/zk_circuits_handler.rs (1)
  • create (56-65)
⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: tests
🔇 Additional comments (5)
zkvm-prover/bin/src/prover.rs (5)

28-29: Good migration of L2 Geth access to RPC configuration

The alias "l2geth" for rpc_config nicely maintains backward compatibility while refactoring the access to the L2 Geth node from scroll-proving-sdk into the prover as stated in the PR objectives.


59-59: New RequestPreHandler field looks good

This addition is consistent with the refactoring plan to process proof requests through RPC.


1-3: Organized imports structure is good

The imports are well-organized, bringing in the necessary components from the new zk_circuits_handler module.


145-145: Default initialization of pre_handler to None is appropriate

Initializing pre_handler to None and lazy-loading it when needed is a good approach for resources that may not be used in every instance.


9-9: ProvingService trait usage

The change from using Prover to ProvingService suggests a shift toward a more service-oriented architecture, aligning with the refactoring goals.

Comment on lines +163 to +168
let req = self
.pre_handler
.as_ref()
.expect("has been created")
.on_request(req)
.await?;

🛠️ Refactor suggestion

Avoid using expect which could cause panic

Using expect here could lead to a panic if the pre_handler initialization failed and wasn't properly handled.

- let req = self
-     .pre_handler
-     .as_ref()
-     .expect("has been created")
-     .on_request(req)
-     .await?;
+ let req = match &self.pre_handler {
+     Some(handler) => handler.on_request(req).await?,
+     None => return Err(anyhow!("Request pre-handler has not been initialized")),
+ };
📝 Committable suggestion


Suggested change
let req = self
.pre_handler
.as_ref()
.expect("has been created")
.on_request(req)
.await?;
let req = match &self.pre_handler {
Some(handler) => handler.on_request(req).await?,
None => return Err(anyhow!("Request pre-handler has not been initialized")),
};
🤖 Prompt for AI Agents
In zkvm-prover/bin/src/prover.rs around lines 163 to 168, replace the use of
`expect` on `pre_handler` with proper error handling to avoid panics. Instead of
calling `expect("has been created")`, check if `pre_handler` is `Some` and
handle the `None` case gracefully by returning an appropriate error or early
exit. This ensures the code does not panic if `pre_handler` is uninitialized.

noel2004 added 2 commits May 18, 2025 22:11
Signed-off-by: noelwei <fan@scroll.io>
Signed-off-by: noelwei <fan@scroll.io>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 904b97b and 14f355b.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • Cargo.toml (1 hunks)
  • common/libzkp/impl/Cargo.toml (1 hunks)
  • common/libzkp/impl/src/verifier/euclid.rs (1 hunks)
  • common/libzkp/impl/src/verifier/euclidv2.rs (1 hunks)
  • zkvm-prover/bin/src/zk_circuits_handler.rs (1 hunks)
  • zkvm-prover/conf/config.json (1 hunks)
  • zkvm-prover/integration/Cargo.toml (1 hunks)
  • zkvm-prover/prover/Cargo.toml (1 hunks)
✅ Files skipped from review due to trivial changes (3)
  • common/libzkp/impl/src/verifier/euclid.rs
  • zkvm-prover/conf/config.json
  • common/libzkp/impl/src/verifier/euclidv2.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • Cargo.toml
  • zkvm-prover/prover/Cargo.toml
  • zkvm-prover/integration/Cargo.toml
  • zkvm-prover/bin/src/zk_circuits_handler.rs
⏰ Context from checks skipped due to timeout of 90000ms (9)
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: check
  • GitHub Check: tests
🔇 Additional comments (2)
common/libzkp/impl/Cargo.toml (2)

11-12:

✅ Verification successful

Leverage workspace references for zkVM crates
The scroll-zkvm-prover and scroll-zkvm-verifier dependencies are now sourced from the workspace, which simplifies version alignment and local development. Please verify that both crates are listed under workspace.members in the root Cargo.toml.

You can run:


🏁 Script executed:

#!/bin/bash
# Verify workspace members include zkVM crates
rg 'scroll-zkvm-prover' Cargo.toml
rg 'scroll-zkvm-verifier' Cargo.toml

Length of output: 293


🏁 Script executed:

#!/bin/bash
# Verify that workspace.members include the zkVM crates in the root Cargo.toml
rg -n '^\[workspace\]' -A 20 Cargo.toml | sed -n '/members/,/]/p'
rg -n '"zkvm-prover/prover"' Cargo.toml
rg -n '"zkvm-prover/verifier"' Cargo.toml

Length of output: 654


Workspace References Confirmed for zkVM Crates
Verified that both zkvm-prover/prover and zkvm-prover/verifier are listed under workspace.members in the root Cargo.toml (lines 10–11). Dependencies now correctly leverage workspace references.


17-22:

✅ Verification successful

Align common crates to workspace dependencies
The base64, once_cell, serde, and serde_json dependencies are now workspace references and anyhow is simplified to "1". This centralizes version management. Confirm that these crates exist as workspace members and resolve correctly.


🏁 Script executed:

#!/bin/bash
# Verify workspace dependencies exist in workspace
for crate in base64 once_cell serde serde_json; do
  rg "$crate" Cargo.toml
done

Length of output: 609


Workspace dependencies confirmed
Verified that base64 = "0.22", once_cell = "1.20", serde = { version = "1", default-features = false, features = ["derive"] }, and serde_json = "1.0" are all declared under [workspace.dependencies] in the root Cargo.toml and can be referenced via workspace = true in common/libzkp/impl/Cargo.toml. The anyhow = "1" simplification is also consistent.


base64 = "0.13.0"
env_logger = "0.9.0"
env_logger = "0.11.0"

💡 Verification agent

🧩 Analysis chain

Bump env_logger to 0.11.0
Upgrading env_logger from 0.9.0 to 0.11.0 is a minor version bump—ensure it's compatible with log = "0.4" and review the changelog for any breaking changes.


🏁 Script executed:

#!/bin/bash
# Check available versions of env_logger on crates.io
curl -s https://crates.io/api/v1/crates/env_logger | jq '.versions[].num' | head -n 5

Length of output: 132


Update env_logger to the latest 0.11.x patch release
We ran the crates.io API check and see that the current patch releases go up to 0.11.8. Upgrading to 0.11.8 incorporates the latest bug fixes while remaining compatible with log = "0.4" (env_logger 0.11.x depends on log ^0.4).

• File: common/libzkp/impl/Cargo.toml, line 14

-env_logger = "0.11.0"
+env_logger = "0.11.8"
🤖 Prompt for AI Agents
In common/libzkp/impl/Cargo.toml at line 14, update the env_logger dependency
version from "0.11.0" to the latest patch release "0.11.8" to include the latest
bug fixes and maintain compatibility with log = "0.4". Modify the version string
accordingly to "0.11.8".

Signed-off-by: noelwei <fan@scroll.io>
@lispc
Contributor

lispc commented May 20, 2025

Qs @noel2004

  1. common/types/message/batch-proof-sample.json: i feel "common/types/message" should be renamed to XX/testdata/XX?
  2. purge scroll-proving-sdk: actually simplify scroll-proving-sdk? i think you removed some codes in proving-sdk, and the prover bin still depends on proving-sdk?
  3. another thing i am thinking is, after phase2 refactor (universal proving input), many codes in this PR will be deleted/changed.. So even we review/merge/deploy it, the codes will be no longer used soon...

]
exclude = [
"prover"
]
Contributor


why exclude "prover"? it is empty?

scroll-zkvm-circuit-input-types-aggregation = { path = "../common/types-rs/aggregation"}
scroll-zkvm-circuit-input-types-chunk = { path = "../common/types-rs/chunk"}
scroll-zkvm-circuit-input-types-batch = { path = "../common/types-rs/batch"}
scroll-zkvm-circuit-input-types-bundle = { path = "../common/types-rs/bundle"}
Contributor


this file seems strange.. Reading. Don't understand why need the patch.

bundle_vk: base64::encode(self.bundle_verifier.get_app_vk()),
chunk_vk: BASE64_STANDARD.encode(self.chunk_verifier.get_app_vk()),
batch_vk: BASE64_STANDARD.encode(self.batch_verifier.get_app_vk()),
bundle_vk: BASE64_STANDARD.encode(self.bundle_verifier.get_app_vk()),
Contributor


why this change? base64 upgraded and API changed?

Member


@@ -0,0 +1,69 @@
#!/usr/bin/env bash
set -ex
Contributor


will we build guest inside this repo or still use zkvm-prover?
need this script here?

4 participants