Brunch is a very simple Rust micro-benchmark runner inspired by easybench. It has roughly a million times fewer dependencies than criterion, does not require nightly, and maintains a (single) "last run" state for each benchmark, allowing it to show relative changes from run-to-run.
(The formatting is also quite pretty.)
As with all Rust benchmarking, there are a lot of caveats, and results might be artificially fast or slow. For best results:
- Build optimized;
- Collect lots of samples;
- Repeat identical runs to get a feel for the natural variation.
Brunch cannot measure time below the level of a nanosecond, so if you're trying to benchmark methods that are really fast, you may need to wrap them in a method that runs through several iterations at once. For example:
```rust
use brunch::Bench;

/// # Generate Strings to Test.
fn string_seeds() -> Vec<String> {
    (0..10_000_usize)
        .map(|i| "x".repeat(i))
        .collect()
}

/// # Generate Byte Vectors to Test.
fn byte_seeds() -> Vec<Vec<u8>> {
    (0..10_000_usize)
        .map(|i| "x".repeat(i).into_bytes())
        .collect()
}

brunch::benches!(
    Bench::new("String::len(_)")
        .run_seeded_with(string_seeds, |vals| {
            let mut len: usize = 0;
            for v in vals {
                len += v.len();
            }
            len
        }),
    Bench::new("Vec::len(_)")
        .run_seeded_with(byte_seeds, |vals| {
            let mut len: usize = 0;
            for v in vals {
                len += v.len();
            }
            len
        }),
);
```

Add brunch to your dev-dependencies in Cargo.toml, like:
```toml
[dev-dependencies]
brunch = "0.9.*"
```

Benchmarks should also be defined in Cargo.toml. Just be sure to set harness = false for each:
```toml
[[bench]]
name = "encode"
harness = false
```

The following optional environment variables are supported:
| Variable | Value | Description | Default |
|---|---|---|---|
| NO_BRUNCH_HISTORY | 1 | Disable run-to-run history. | |
| BRUNCH_HISTORY | Path to history file. | Load/save run-to-run history from this specific path. | std::env::temp_dir()/__brunch.last |
The heart of Brunch is the Bench struct, which defines a single benchmark. There isn't much configuration required, but each Bench has the following:
| Data | Description | Default |
|---|---|---|
| Name | A unique identifier. This is arbitrary, but works best as a string representation of the method itself, like foo::bar(10). | |
| Samples | The number of samples to collect. | 2500 |
| Timeout | A cutoff time to keep the benchmark from running forever. | 10 seconds |
| Method | A method to run over and over again! | |
The struct uses builder-style methods to allow everything to be set in a single chain. You always need to start with Bench::new and end with one of the runner methods — Bench::run, Bench::run_seeded, or Bench::run_seeded_with. If you want to change the sample or timeout limits, you can add Bench::with_samples or Bench::with_timeout in between.
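For instance, a fully configured chain might look like the following. This is only a minimal sketch; it assumes Bench::with_samples takes a plain sample count and Bench::with_timeout takes a std::time::Duration, so double-check the crate docs for the exact signatures.

```rust
use brunch::Bench;
use std::time::Duration;

brunch::benches!(
    // Name first, optional limits in the middle, runner method last.
    Bench::new("u64::pow(2, 10)")
        .with_samples(5_000)                   // assumed: a raw sample count
        .with_timeout(Duration::from_secs(5))  // assumed: a std::time::Duration
        .run(|| 2_u64.pow(10)),
);
```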
There is also a special Bench::spacer method that can be used to inject a linebreak into the results. See below for an example.
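As a rough sketch (assuming Bench::spacer() takes no arguments), a spacer slots into the list like any other entry:

```rust
use brunch::Bench;

brunch::benches!(
    Bench::new("str::len()").run(|| "hello".len()),

    // Assumed: Bench::spacer() simply prints a blank line between the
    // surrounding results.
    Bench::spacer(),

    Bench::new("str::is_empty()").run(|| "hello".is_empty()),
);
```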
The benches! macro is the easiest way to run Brunch benchmarks.
Simply pass a comma-separated list of all the Bench objects you want to run, and it will handle the setup, running, and tabulation, then give you a nice summary at the end.
By default, this macro will generate the main() entrypoint too, but you can suppress this by adding "inline:" as the first argument.
Anyhoo, the default usage would look something like the following:
```rust
use brunch::{Bench, benches};

// Example benchmark adding 2+2.
fn callback() -> Option<usize> { 2_usize.checked_add(2) }

// Example benchmark multiplying 2x2.
fn callback2() -> Option<usize> { 2_usize.checked_mul(2) }

// Let the macro handle everything for you.
benches!(
    Bench::new("usize::checked_add(2)")
        .run(callback),
    Bench::new("usize::checked_mul(2)")
        .run(callback2),
);
```

When declaring your own main entrypoint, you need to add "inline:" as the first argument. The list of Bench instances follows as usual after that.
```rust
use brunch::{Bench, benches};

/// # Custom Main.
fn main() {
    // A typical use case for the "inline" variant would be to declare
    // an owned variable for a benchmark that needs to return a reference
    // (to e.g. keep Rust from complaining about lifetimes).
    let v = vec![0_u8, 1, 2, 3, 4, 5];

    // The macro call goes here!
    benches!(
        inline:
        Bench::new("vec::as_slice()").run(|| v.as_slice()),
    );

    // You can also do other stuff afterwards if you want.
    eprintln!("Done!");
}
```

For even more control over the flow, skip the macro and just use Benches directly.
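A rough sketch of what that might look like is below. It assumes Benches exposes default(), push(), and finish() (roughly what the benches! macro expands to), so verify the method names against the crate docs before leaning on them.

```rust
use brunch::{Bench, Benches};

fn main() {
    // Build the collection by hand instead of via the macro.
    // (Assumed API: Benches::default(), push(), and finish().)
    let mut benches = Benches::default();
    benches.push(Bench::new("usize::checked_add(2)").run(|| 2_usize.checked_add(2)));
    benches.push(Bench::new("usize::checked_mul(2)").run(|| 2_usize.checked_mul(2)));

    // Crunch the numbers and print the summary table.
    benches.finish();
}
```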
If you run the example benchmark for this crate, you should see a summary like the following:
```text
Method                         Mean    Change        Samples
------------------------------------------------------------
fibonacci_recursive(30)     2.22 ms    +1.02%    2,408/2,500
fibonacci_loop(30)         56.17 ns       ---    2,499/2,500
```
The Method column speaks for itself, but the numbers deserve a little explanation:
| Column | Description | 
|---|---|
| Mean | The adjusted, average execution time for a single run, scaled to the most appropriate time unit to keep the output tidy. | 
| Change | The relative difference between this run and the last run, shown only when it exceeds two standard deviations. | 
| Samples | The number of valid/total samples, the difference being outliers (5th and 95th quantiles) excluded from consideration. |