This file provides guidance to AI coding agents when working with code in this repository.
This is a benchmarking suite that compares the execution speed and memory usage of JS/TS formatters: Prettier, Prettier + oxc-parser, Biome, and Oxfmt. It uses two external tools: hyperfine (execution time) and GNU time (peak memory).
```sh
# Setup and run all benchmarks
pnpm run bench

# Explicit setup (downloads test data)
./init.sh

# Run individual scenarios
pnpm run bench:large-single-file  # TypeScript parser.ts (~540KB single file)
pnpm run bench:js-no-embedded     # Outline repository (1,925 files)

# Run all benchmarks + auto-update README.md results section
pnpm run update-readme
```

All source files are ESM (`.mjs`). No test framework or linter is configured.
Each scenario lives in its own directory with identical structure:
```
bench-*/
├── bench.mjs        # Scenario-specific benchmark script
├── biome.json       # Biome config
├── oxfmtrc.json     # Oxfmt config
├── prettierrc.json  # Prettier config
├── prettierignore   # Prettier ignore rules
└── data/            # Test data (gitignored, fetched by init.sh)
```
Each `bench.mjs` imports from `shared/utils.mjs`, calls `setupCwd()` to change into its own directory, then runs the hyperfine benchmarks and memory measurements.
`shared/utils.mjs` contains the common logic shared across all scenarios:

- `createFormatters(projectRoot, configDir)` — returns command builders for all 4 formatters; `projectRoot` is the base for `node_modules`, and `configDir` holds the config files.
- `runHyperfine(args)` — spawns a hyperfine process and returns a Promise.
- `runMemoryBenchmarks()` / `measureMemory()` — measure peak RSS via GNU time (`gtime` or `/usr/bin/time`).
- `checkGnuTime()` — checks for GNU time availability; warns and skips the memory measurement if it is missing.
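To make the memory path concrete, here is a minimal, self-contained sketch of how peak RSS might be extracted from GNU time's output. The `parseMaxRssKb` helper is hypothetical and is an assumption about the approach, not the repository's actual `measureMemory()` implementation; it only relies on documented GNU time behavior (`-f '%M'` prints the maximum resident set size in kilobytes, and `-v` prints a labeled `Maximum resident set size (kbytes): N` line, both on stderr).

```javascript
// Hypothetical helper: parse peak RSS (in KB) out of GNU time's stderr.
// Invocations it would pair with (not run here):
//   gtime -f '%M' -- <formatter command>        # macOS, brew's gnu-time
//   /usr/bin/time -f '%M' -- <formatter command> # Linux
function parseMaxRssKb(stderr) {
  // With `-f '%M'` the last non-empty stderr line is the number itself.
  const lines = stderr.trim().split('\n');
  const last = lines[lines.length - 1].trim();
  if (/^\d+$/.test(last)) return Number(last);

  // With `-v`, look for the verbose "Maximum resident set size" line.
  const m = stderr.match(/Maximum resident set size \(kbytes\): (\d+)/);
  return m ? Number(m[1]) : null;
}
```

Whatever the real implementation looks like, the value GNU time reports is kilobytes, so results need a single conversion before being compared or printed in MB.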
To add a new benchmark scenario:

- Create a `bench-<name>/` directory with `bench.mjs` and the formatter config files
- In `bench.mjs`, use `createFormatters()` and `runHyperfine()` from `shared/utils.mjs`
- Add the directory name to the `scenarios` array in `bench-all.mjs`
- Add any test data fetching to `init.sh`
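The formatter config files are ordinary per-tool configs. As a purely hypothetical illustration, a scenario's `prettierrc.json` might look like the following (the actual settings in this repository may differ, and should be kept equivalent across all four formatters so the comparison is fair):

```json
{
  "semi": true,
  "singleQuote": false,
  "printWidth": 80
}
```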
- hyperfine: `brew install hyperfine` (macOS) / `apt install hyperfine` (Linux)
- GNU time: `brew install gnu-time` (macOS, installs as `gtime`) / `apt install time` (Linux)
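Because GNU time ships under different names on the two platforms, detection has to probe for it. This is a self-contained sketch of one way to do that; the `findGnuTime` name is hypothetical and the real `checkGnuTime()` in `shared/utils.mjs` may work differently.

```javascript
// Hypothetical GNU-time detection: try `gtime` (brew's gnu-time on
// macOS) first, then `/usr/bin/time` (GNU time on most Linux distros).
// Returns the usable binary name, or null if neither is GNU time.
import { spawnSync } from 'node:child_process';

function findGnuTime(candidates = ['gtime', '/usr/bin/time']) {
  for (const bin of candidates) {
    // GNU time answers `--version` with a banner containing "GNU"
    // (older releases print it to stderr); BSD time and missing
    // binaries fail, so a zero exit plus the banner identifies it.
    const res = spawnSync(bin, ['--version'], { encoding: 'utf8' });
    if (res.status === 0 && /GNU/i.test(String(res.stdout) + String(res.stderr))) {
      return bin;
    }
  }
  return null; // caller should warn and skip memory measurement
}
```

A `null` result matches the documented behavior above: memory measurement is skipped with a warning rather than failing the whole run.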
GitHub Actions (`.github/workflows/ci.yml`) runs the benchmarks on every push to main and auto-commits the updated `README.md` with `[skip ci]`. On PRs, the benchmarks run but results are not committed.
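A workflow with that push/PR split typically takes roughly the following shape. This is a hypothetical sketch, not the contents of the repository's actual `ci.yml`; the job name, action versions, and step details are assumptions, and the real file is the source of truth.

```yaml
# Hypothetical sketch of .github/workflows/ci.yml
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: sudo apt-get install -y hyperfine time
      - uses: pnpm/action-setup@v4
      - run: pnpm install
      - run: pnpm run update-readme
      # Only pushes to main commit the refreshed results back;
      # [skip ci] keeps the bot commit from re-triggering the workflow.
      - if: github.event_name == 'push'
        run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git commit -am "chore: update benchmark results [skip ci]" || true
          git push
```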