
Benchmark Scripts

scripts/benchmarks/ is the main automation entry point for model-split benchmark campaigns.

Primary Entry Point

scripts/benchmarks/run_model_split_apps.sh --help

This is the command surface used for most paper-style benchmark runs.

Key Files

File                                       Purpose
run_model_split_apps.sh                    batch build-and-run launcher for the benchmark app matrix
extract_gemm_shapes.py                     helper to derive GEMM shapes from model descriptions or traces (illustrated after this table)
run_llama2_verilator_prefill_sweep.sh      focused prefill benchmark sweep for the llama2/ flow
plot_llama2_verilator_prefill_summary.py   compact plotting helper for prefill sweep summaries
models.txt                                 default model list used by the workflow
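
The interface of extract_gemm_shapes.py is not documented on this page. As a rough illustration of what deriving GEMM shapes from a model description means for a decoder-style transformer, the Python sketch below is hypothetical and not the script's actual logic; all dimension names and values are assumptions:

# Hypothetical illustration of GEMM-shape derivation for one decoder layer
# during prefill. Not the actual logic of extract_gemm_shapes.py; the
# dimension names (hidden, intermediate, heads) and example values are assumed.

def layer_gemm_shapes(seq_len: int, hidden: int, intermediate: int, heads: int):
    """Return (name, M, K, N) tuples for the dense GEMMs in one prefill pass."""
    head_dim = hidden // heads
    return [
        # QKV and output projections: (seq_len x hidden) activations times weights.
        ("q_proj", seq_len, hidden, hidden),
        ("k_proj", seq_len, hidden, hidden),
        ("v_proj", seq_len, hidden, hidden),
        ("o_proj", seq_len, hidden, hidden),
        # Attention score and context GEMMs, one per head.
        ("qk_T (per head)", seq_len, head_dim, seq_len),
        ("scores_v (per head)", seq_len, seq_len, head_dim),
        # MLP up/down projections.
        ("mlp_up", seq_len, hidden, intermediate),
        ("mlp_down", seq_len, intermediate, hidden),
    ]

if __name__ == "__main__":
    # Placeholder dimensions; substitute the real model's configuration.
    for name, m, k, n in layer_gemm_shapes(seq_len=128, hidden=640,
                                           intermediate=2048, heads=4):
        print(f"{name:22s} M={m:5d} K={k:5d} N={n:5d}")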

Common Examples

Build and run the selected matrix:

scripts/benchmarks/run_model_split_apps.sh --mode all --build-jobs 16 --parallel 5 --batch-size 5

Reuse existing builds and run simulations only:

scripts/benchmarks/run_model_split_apps.sh --mode run --no-rebuild-apps --no-verilate --parallel 5 --batch-size 5

Run one app as a smoke test:

scripts/benchmarks/run_model_split_apps.sh --mode run --apps bmpmm_INT2_gemma3_270m --parallel 1 --batch-size 1

Run the dedicated llama2 prefill sweep:

scripts/benchmarks/run_llama2_verilator_prefill_sweep.sh

Output Contract

Each run writes into tmp/model_app_runs/<run_name>/:

  • apps.txt
  • runner.log
  • summary.csv
  • batch_XX/<app>.log
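
The layout above is straightforward to post-process. A minimal Python sketch, assuming only the files listed here (the columns of summary.csv are not specified on this page, so rows are counted generically):

# Minimal sketch: walk tmp/model_app_runs/ and report, per run, how many apps
# were requested (apps.txt) versus how many rows landed in summary.csv.
# Assumes only the layout documented above; summary.csv columns are
# unspecified here, so rows are counted generically.
import csv
from pathlib import Path

runs_root = Path("tmp/model_app_runs")

for run_dir in sorted(p for p in runs_root.glob("*") if p.is_dir()):
    apps_file = run_dir / "apps.txt"
    summary_file = run_dir / "summary.csv"
    n_apps = 0
    if apps_file.exists():
        n_apps = sum(1 for line in apps_file.read_text().splitlines() if line.strip())
    n_rows = 0
    if summary_file.exists():
        with summary_file.open(newline="") as f:
            n_rows = max(0, sum(1 for _ in csv.reader(f)) - 1)  # assumes one header row
    n_logs = len(list(run_dir.glob("batch_*/*.log")))
    print(f"{run_dir.name}: {n_apps} apps requested, "
          f"{n_rows} summary rows, {n_logs} app logs")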

The llama2 prefill sweep produces its own summary-oriented outputs and is best treated as a separate experiment flow from the main model-split benchmark matrix.

For the benchmark contract and output interpretation, see: