
Python µBenchmarks

Microbenchmarks that (in)validate Creedengo rules.

This repository is a uv project that depends on the pyperf benchmarking toolkit.

Caution required

Rules often suggest an alternative way of writing small pieces of code. Measuring the effect of such a small, isolated piece of code is hard, so we need to be cautious.

CPython (the default implementation of Python) does not (as of May 2025) have a complex AOT (ahead-of-time) compiler like C or Rust, nor a complex JIT (just-in-time) compiler like Java, which means that your code will probably not be removed by an optimization. Even so, caution is still required.
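As a small illustration of that point (using the standard-library timeit module directly for brevity; the benchmarks in this repository go through pyperf), CPython still executes work whose result is never used:

```python
import timeit

# CPython performs no dead-code elimination: the sum below is computed
# on every iteration even though its result is never used, so the
# timing reflects real work. An optimizing AOT or JIT compiler might
# remove such a dead store entirely.
elapsed = timeit.timeit(
    stmt="total = sum(data)",  # result unused, yet still executed
    setup="data = list(range(100))",
    number=100_000,
)
print(f"{elapsed:.3f} s for 100,000 unused sums")
```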

Required tools

To run or create benchmarks, you need uv and Python (uv will install or find Python for you). See Installing uv to set up your environment easily.

To load the project and its dependencies, run:

uv sync

A virtualenv will be automatically created for you.
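Once synced, you can also run commands inside that environment with uv run, for example (assuming example.py sits in the microbenchs folder mentioned below):

uv run python microbenchs/example.py --help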

Writing a new benchmark

  • Create a new file in microbenchs.
  • In your main function, use one of the following pyperf.Runner methods to compare different scenarios:
    • bench_func (doc).
    • timeit (doc).
    • bench_time_func (doc).

See example.py, or the sketch below.
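As a rough sketch (the file name and scenarios below are invented for illustration, not taken from example.py), a benchmark comparing two ways of building the same list could look like this:

```python
# microbenchs/build_list.py (hypothetical example, not part of the repo)
import pyperf


def main():
    runner = pyperf.Runner()

    # Scenario 1: build the list with an explicit loop.
    runner.timeit(
        "for-loop append",
        stmt="out = []\nfor i in range(1000):\n    out.append(i * 2)",
    )

    # Scenario 2: build the same list with a comprehension.
    runner.timeit(
        "list comprehension",
        stmt="out = [i * 2 for i in range(1000)]",
    )


if __name__ == "__main__":
    main()
```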

Running one/multiple benchmark(s)

Each benchmark file is a Python script that accepts arguments (thanks to pyperf.Runner).

Use the bench.sh script, a small wrapper around those scripts. Run ./bench.sh example.py --help to see the available pyperf options.

To run a single benchmark file:

./bench.sh filename

The benchmark results are printed to stdout and stored in results/{benchmark}.json.
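Because each result is a standard pyperf JSON file, two runs can be compared with pyperf's built-in compare_to command (the file names here are illustrative):

uv run python -m pyperf compare_to results/before.json results/after.json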
