Our benchmarks should be organized so that they produce a uniform, machine-readable output. Here's what I have in mind:
Make a module interface BENCHMARK to which all the benchmarks will conform. Then create a function that iterates over a list of such modules, runs them, and outputs the logs, or optionally outputs a CSV of the results.
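A minimal sketch of what that interface might look like (the field names here are only placeholders, not a settled design):

```ocaml
(* Sketch only: the fields below are assumptions about what each
   benchmark would expose, not an agreed interface. *)
module type BENCHMARK = sig
  (* Human-readable description of the benchmark. *)
  val description : string

  (* Number of operations performed in one run (the "load size"). *)
  val load_size : int

  (* Run the benchmark once using [domains] domains. *)
  val run : domains:int -> unit
end
```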
I imagine that, to keep things simple, benchmarks will have only a few dimensions:
description, domains, load size, runtime, ops/second
where
- domains is set once from an environment variable
- ops/second is derived from the load size and the runtime (see the runner sketch below)
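A rough sketch of a runner along those lines, assuming the BENCHMARK signature above, a hypothetical `DOMAINS` environment variable, and the unix library for wall-clock timing:

```ocaml
(* Sketch only: runs every benchmark and prints one CSV row per module. *)
let run_all (benchmarks : (module BENCHMARK) list) =
  (* domains is set once from an environment variable *)
  let domains =
    match Sys.getenv_opt "DOMAINS" with
    | Some d -> int_of_string d
    | None -> 1
  in
  print_endline "description,domains,load_size,runtime,ops_per_second";
  List.iter
    (fun (module B : BENCHMARK) ->
      let start = Unix.gettimeofday () in
      B.run ~domains;
      let runtime = Unix.gettimeofday () -. start in
      (* ops/second is derived from the load size and the runtime *)
      let ops_per_second = float_of_int B.load_size /. runtime in
      Printf.printf "%s,%d,%d,%f,%f\n"
        B.description domains B.load_size runtime ops_per_second)
    benchmarks
```

Switching between log output and CSV output could then be a flag on this one function rather than something each benchmark has to handle itself.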