## Commands

`--all-features` does not work. Just use default features and the following commands:

```bash
cargo cl         # lint
cargo fmt --all  # format
cargo docs       # check docs

cargo nextest run --workspace                  # test all
cargo nextest run --workspace "test_name"      # test single
cargo nextest run --workspace "statetest"      # test statetests
SUBDIR=stRevertTest cargo nextest run --workspace "statetest"  # test single statetest
```

## Architecture

- `revmc` — thin umbrella crate that re-exports codegen and runtime APIs.
- `revmc-codegen` — EVM compiler, bytecode analysis, linker, and compiler test infrastructure.
- `revmc-runtime` — runtime JIT/AOT backend, worker pool, artifact store, and revm integration.
- `revmc-backend` — abstract compiler backend trait. `revmc-llvm` is the main implementation.
- `revmc-builtins` — runtime builtins called by JIT-compiled code (host calls, gas accounting).
- `revmc-context` — EVM execution context types bridging revm and compiled code.

```bash
cargo r -- run --list                   # list available benchmarks
cargo r -- run usdc_proxy               # compile and run a benchmark
cargo r -- run usdc_proxy -o tmp/dump   # compile and run a benchmark; dump files like opt.ll, remarks.txt to tmp/dump
cargo r -- run usdc_proxy --parse-only  # parse and analyze only (no codegen)
cargo r -- run usdc_proxy --display     # print parsed bytecode IR
cargo r -- run usdc_proxy --dot         # render CFG as DOT/SVG
```

`./scripts/bench.py` is the unified benchmarking tool. It collects codegen line
counts, compile times, jump resolution stats, and constant-input statistics.

The script writes its full markdown output to `<dump_dir>/results.md` in
addition to printing it to stdout. Summary tables hide changes within a noise
threshold (1% for codegen, 5% for compile times); the `<details>` tables still
show every change.

```bash
./scripts/bench.py /tmp/bench --diff main                        # codegen + compile time vs main
./scripts/bench.py /tmp/bench --diff main usdc_proxy seaport     # specific benchmarks
./scripts/bench.py /tmp/bench --codegen-lines --jump-resolution  # combine multiple analyses
```
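When preparing a PR summary, it can help to skim just the headline rows of the report rather than the whole file. A minimal sketch — the report contents below are a stand-in for real `bench.py` output, and the `**TOTAL**` row label is assumed from the tables the script emits:

```bash
# Pull the headline row out of a bench report without opening it.
# The report here is a fabricated stand-in for a real bench.py run;
# the "**TOTAL**" row label is an assumption about the table format.
report=$(mktemp)
printf '%s\n' \
  '| bench     | jit size | opt.s |' \
  '| counter   | +26.0%   | +1.0% |' \
  '| **TOTAL** | -7.5%    | +2.9% |' > "$report"

total_line=$(grep -F '**TOTAL**' "$report")  # -F: match the asterisks literally
echo "$total_line"
rm -f "$report"
```

`grep -F` is used so the `**` markers are matched literally instead of being treated as a pattern.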

## Bench-and-PR workflow

When the user asks to "bench and open pr", "post results to pr", or whenever
making a perf change that needs benchmark numbers in the PR description:

1. Run `./scripts/bench.py <dump_dir> --diff <base>` (typically `--diff main`).
2. Build the PR body **in a single bash command** that inlines
   `<dump_dir>/results.md` VERBATIM. Do NOT reformat, summarize, drop
   columns, or rewrite the numbers in the tables — `cat` the file as-is.
3. Add prose explaining what the PR does ABOVE the inlined results, under a
   `## Benchmarks` (or similar) heading.
4. Under `## Benchmarks`, ABOVE the inlined `results.md`, write a short
   textual summary of the headline numbers (e.g. the `**TOTAL**` row diffs
   from the codegen + compile-time tables, plus any notable per-bench wins
   or regressions worth calling out). Keep it to a few sentences or a tight
   bullet list — this is the at-a-glance summary readers see before the
   tables. The tables themselves stay verbatim.
5. Update the PR with `gh pr edit <number> --body-file <body.md>`.

Example — write the body file with prose, summary, and verbatim results in
one shot:

```bash
{
cat <<'EOF'
Short description of what this PR does and why.

More prose: motivation, design notes, caveats, anything reviewers need.

## Benchmarks

Headline numbers vs `main`: jit size -7.5%, opt.s +2.9%, total compile time
+0.2%. `counter` regresses on opt.s (+26%); `seaport` is roughly flat.

EOF
cat /tmp/bench/results.md
} > /tmp/pr-body.md

gh pr edit 123 --body-file /tmp/pr-body.md
```

The heredoc holds whatever prose + summary belongs in the PR; `cat results.md`
appends the benchmark tables exactly as the script produced them.

## Important

- NEVER alter or summarize the benchmark tables themselves — always post them
  verbatim. A short textual summary of the headline numbers ABOVE the tables
  (under `## Benchmarks`) is required.
- NEVER delete or modify `./tmp/` — it contains manually generated IR/asm dumps used for comparison.
- `tmp/dump/` contains dumps from `main`, `tmp/dump2/` contains dumps from the current branch.
  Use these for manual `diff` comparison of LLVM IR and assembly.
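
The manual comparison in the last bullet can be sketched as follows. The `opt.ll` file name is an assumption based on the `-o tmp/dump` dump example, and a throwaway temp dir stands in for the real `tmp/dump*/` directories (which must never be modified):

```bash
# Sketch of the manual IR diff: compare optimized IR between the main-branch
# dump and the current-branch dump. A temp dir stands in for the real
# tmp/dump*/ directories (never modify those); the opt.ll file name is an
# assumption based on the -o dump example.
work=$(mktemp -d)
mkdir -p "$work/dump" "$work/dump2"
printf 'ret i256 0\n' > "$work/dump/opt.ll"    # stand-in: dump from main
printf 'ret i256 1\n' > "$work/dump2/opt.ll"   # stand-in: dump from branch

# diff exits nonzero when the files differ, so guard the substitution
changes=$(diff -u "$work/dump/opt.ll" "$work/dump2/opt.ll" || true)
echo "$changes"
rm -rf "$work"
```

In the real workflow the two arguments would be `tmp/dump/<file>` and `tmp/dump2/<file>`, read-only.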