CLI tool for running Aztec contract benchmarks.
Use this tool to execute benchmark files written in TypeScript. For comparing results and generating reports in CI, use the separate companion GitHub Action: defi-wonderland/aztec-benchmark.
yarn add --dev @defi-wonderland/aztec-benchmark
# or
npm install --save-dev @defi-wonderland/aztec-benchmark

After installing, run the CLI using npx aztec-benchmark. By default, it looks for a Nargo.toml file in the current directory and runs benchmarks defined within it.
npx aztec-benchmark [options]

Define which contracts have associated benchmark files in your Nargo.toml under the [benchmark] section:
[benchmark]
token = "benchmarks/token_contract.benchmark.ts"
another_contract = "path/to/another.benchmark.ts"

The paths to the .benchmark.ts files are relative to the Nargo.toml file.
- -c, --contracts <names...>: Specify which contracts (keys from the [benchmark] section) to run. If omitted, runs all defined benchmarks.
- --config <path>: Path to your Nargo.toml file (default: ./Nargo.toml).
- -o, --output-dir <path>: Directory to save benchmark JSON reports (default: ./benchmarks).
- -s, --suffix <suffix>: Optional suffix to append to report filenames (e.g., _pr results in token_pr.benchmark.json).
- --skip-proving: Skip proving transactions. Only measures gate counts and gas; proving time will be 0 in reports. When enabled, the wallet is not required in the benchmark context.
Run all benchmarks defined in ./Nargo.toml:
npx aztec-benchmark

Run only the token benchmark:

npx aztec-benchmark --contracts token

Run the token and another_contract benchmarks, saving reports with a suffix:

npx aztec-benchmark --contracts token another_contract --output-dir ./benchmark_results --suffix _v2

Benchmarks are TypeScript classes extending BenchmarkBase from this package.
Each entry in the array returned by getMethods can either be a plain ContractFunctionInteractionCallIntent
(in which case the benchmark name is auto-derived) or a NamedBenchmarkedInteraction object
(which includes the interaction and a custom name for reporting).
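For orientation, the two accepted shapes look roughly like the sketch below. The field names are taken from the full example that follows (caller/action for a call intent, interaction/name for a named entry); treat this as illustrative and consult the types exported by the package and Aztec.js for the authoritative definitions.

// Illustrative sketch only - the real types are exported by
// '@defi-wonderland/aztec-benchmark' and '@aztec/aztec.js/authorization'.
import type { ContractFunctionInteractionCallIntent } from '@aztec/aztec.js/authorization';

// A named entry wraps a call intent together with the label used in reports.
type NamedBenchmarkedInteractionSketch = {
  interaction: ContractFunctionInteractionCallIntent; // e.g. { caller, action: contract.methods.foo(...) }
  name: string; // custom label that appears in the benchmark report
};

// getMethods() may return a mix of both shapes.
type BenchmarkEntrySketch = ContractFunctionInteractionCallIntent | NamedBenchmarkedInteractionSketch;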
import {
  Benchmark, // Alias for BenchmarkBase
  type BenchmarkContext,
  type NamedBenchmarkedInteraction
} from '@defi-wonderland/aztec-benchmark';
import type { PXE } from '@aztec/pxe/server';
import type { Contract } from '@aztec/aztec.js/contracts'; // Generic Contract type from Aztec.js
import type { AztecAddress } from '@aztec/aztec.js/addresses';
import type { ContractFunctionInteractionCallIntent } from '@aztec/aztec.js/authorization';
import { createStore } from '@aztec/kv-store/lmdb-v2';
import { createPXE, getPXEConfig } from '@aztec/pxe/server';
import { createAztecNodeClient, waitForNode } from '@aztec/aztec.js/node';
import { registerInitialSandboxAccountsInWallet, type TestWallet } from '@aztec/test-wallet/server';
// import { YourSpecificContract } from '../artifacts/YourSpecificContract.js'; // Replace with your actual contract artifact
// 1. Define a specific context for your benchmark (optional but good practice)
interface MyBenchmarkContext extends BenchmarkContext {
  pxe: PXE;
  wallet: TestWallet;
  deployer: AztecAddress;
  contract: Contract; // Use the generic Contract type or your specific contract type
}
export default class MyContractBenchmark extends Benchmark {
  // Runs once before all benchmark methods.
  async setup(): Promise<MyBenchmarkContext> {
    console.log('Setting up benchmark environment...');
    const { NODE_URL = 'http://localhost:8080' } = process.env;
    const node = createAztecNodeClient(NODE_URL);
    await waitForNode(node);
    const l1Contracts = await node.getL1ContractAddresses();
    const config = getPXEConfig();
    const fullConfig = { ...config, l1Contracts };
    // IMPORTANT: true enables proof generation for the benchmark; set it to false when using --skip-proving.
    fullConfig.proverEnabled = true;
    const pxeVersion = 2;
    const store = await createStore('pxe', pxeVersion, {
      dataDirectory: 'store',
      dataStoreMapSizeKb: 1e6,
    });
    const pxe: PXE = await createPXE(node, fullConfig, { store });
    const wallet: TestWallet = await TestWallet.create(node, fullConfig);
    const accounts: AztecAddress[] = await registerInitialSandboxAccountsInWallet(wallet);
    const [deployer] = accounts;
    // Deploy your contract (replace YourSpecificContract with your actual contract class).
    const deployedContract = await YourSpecificContract
      .deploy(wallet, /* constructor args */)
      .send({ from: deployer })
      .deployed();
    const contract = await YourSpecificContract.at(deployedContract.address, wallet);
    console.log('Contract deployed at:', contract.address.toString());
    return { pxe, wallet, deployer, contract };
  }
  // Returns an array of interactions to benchmark.
  async getMethods(context: MyBenchmarkContext): Promise<Array<ContractFunctionInteractionCallIntent | NamedBenchmarkedInteraction>> {
    // Ensure the context is available (it should be if setup ran correctly).
    if (!context || !context.contract) {
      // In a real scenario, setup() must initialize the context properly.
      // Throwing an error or returning an empty array might be appropriate here if setup failed.
      console.error("Benchmark context or contract not initialized in setup(). Skipping getMethods.");
      return [];
    }
    const { contract, deployer } = context;
    const recipient = deployer; // Example recipient
    // Replace `contract.methods.someMethodName` with actual methods from your contract.
    const interactionPlain = { caller: deployer, action: contract.methods.transfer(recipient, 100n) };
    const interactionNamed1 = { caller: deployer, action: contract.methods.someOtherMethod("test_value_1") };
    const interactionNamed2 = { caller: deployer, action: contract.methods.someOtherMethod("test_value_2") };
    return [
      // Example of a plain interaction - its name will be auto-derived
      interactionPlain,
      // Example of a named interaction
      { interaction: interactionNamed1, name: "Some Other Method (value 1)" },
      // Another named interaction
      { interaction: interactionNamed2, name: "Some Other Method (value 2)" },
    ];
  }

  // Optional cleanup phase
  async teardown(context: MyBenchmarkContext): Promise<void> {
    console.log('Cleaning up benchmark environment...');
    if (context && context.pxe) {
      await context.pxe.stop();
    }
  }
}

Note: Your benchmark code needs a valid Aztec project setup to interact with contracts.
Your BenchmarkBase implementation is responsible for constructing the ContractFunctionInteractionCallIntent objects.
If you provide a NamedBenchmarkedInteraction object, its name field will be used in reports.
If you provide a plain ContractFunctionInteractionCallIntent, the tool will attempt to derive a name from the interaction (e.g., the method name).
You can see how we use this tool to benchmark our Aztec contracts in aztec-standards.
Your BenchmarkBase implementation is responsible for measuring and outputting performance data (e.g., as JSON). The comparison action uses this output.
Each entry in the output will be identified by the custom name you provided (if any) or the auto-derived name.
This repository includes a GitHub Action (defined in action/action.yml) designed for CI workflows. It automatically finds and compares benchmark results (conventionally named with _base and _latest suffixes) generated by previous runs of aztec-benchmark and produces a Markdown comparison report.
- threshold: Regression threshold percentage (default: 2.5).
- output_markdown_path: Path to save the generated Markdown comparison report (default: benchmark-comparison.md).
- comparison_markdown: The generated Markdown report content.
- markdown_file_path: Path to the saved Markdown file.
This action is typically used in a workflow that runs on pull requests. It assumes a previous step or job has already run the benchmarks on the base commit and saved the results with the _base suffix (e.g., in ./benchmarks/token_base.benchmark.json).
Workflow Steps:
- Checkout the base branch/commit.
- Run npx aztec-benchmark -s _base (saving outputs to ./benchmarks); a sketch of these base steps follows this list.
- Checkout the PR branch/current commit.
- Use this action (./action), which will:
  a. Run npx aztec-benchmark -s _latest to generate current benchmarks.
  b. Compare the new _latest files against the existing _base files.
  c. Generate the Markdown report.
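For reference, the base-branch steps described above might look roughly like the following sketch; the checkout ref and directory layout are assumptions to adapt to your workflow.

# Hypothetical base-branch steps (run before the comparison action); they produce
# the *_base reports that the action expects to find in ./benchmarks.
- name: Checkout Base Commit
  uses: actions/checkout@v4
  with:
    ref: ${{ github.event.pull_request.base.sha }}

- name: Install Dependencies
  run: yarn install --frozen-lockfile

- name: Generate Base Benchmarks
  run: npx aztec-benchmark -s _base --output-dir ./benchmarks

If the base run happens in a separate job, publish ./benchmarks as an artifact and download it before the comparison step (e.g., via actions/upload-artifact and actions/download-artifact, as noted in the example below).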
# Example steps within a PR workflow job:
# (Assume previous steps checked out base, ran benchmarks with _base suffix,
# and artifacts/reports are available, potentially via actions/upload-artifact
# and actions/download-artifact if run in separate jobs)
- name: Checkout Current Code
  uses: actions/checkout@v4

# (Ensure Nargo.toml and benchmark dependencies are set up)
- name: Install Dependencies
  run: yarn install --frozen-lockfile

- name: Generate Latest Benchmarks, Compare, and Create Report
  # This action runs 'aztec-benchmark -s _latest' internally
  uses: defi-wonderland/aztec-benchmark-diff/action
  id: benchmark_compare
  with:
    threshold: '2.0' # Optional threshold
    output_markdown_path: 'benchmark_diff.md' # Optional output path

- name: Comment Report on PR
  uses: peter-evans/create-or-update-comment@v4
  with:
    issue-number: ${{ github.event.pull_request.number }}
    body-file: ${{ steps.benchmark_compare.outputs.markdown_file_path }}

Refer to the action/action.yml file for the definitive inputs and description.