
Wishlist: hook (or other means) to collect other merit factors over benchmarking tests and report them #266

Open
@callegar

Description

In many cases it is useful for benchmarks to collect other merit factors in addition to timings.

For instance, when benchmarking optimization algorithms you may want to collect data about the quality of the result that is reached, so that the benchmarks can be used to evaluate which code offers the best speed-quality trade-off.

In addition, the extra merit factor may itself show variability depending on the specific run, just like the timings. Heuristic optimization codes are again an example: multiple runs may deliver different solutions scattered around the exact optimum. In such cases, these additional merit factors also need to be evaluated statistically, over multiple runs.

It would be great if pytest-benchmark could offer a way to deal with these situations. The `extra_info` field is already a very good starting point; however, being able to customize the reporting is also needed.
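
As a rough illustration of the workaround available today, the sketch below records a quality figure for every run inside the benchmarked callable and then attaches summary statistics through `benchmark.extra_info`. Note that `noisy_optimize` and the `quality_*` keys are purely illustrative names, not part of pytest-benchmark:

```python
import random
import statistics


def noisy_optimize():
    # Hypothetical stand-in for a heuristic optimizer: the objective value
    # ("quality") it reaches varies from run to run.
    return min(random.random() for _ in range(100))


def test_optimizer_quality(benchmark):
    qualities = []  # extra merit factor collected on every run

    def run():
        qualities.append(noisy_optimize())

    benchmark(run)

    # Attach summary statistics of the extra merit factor to the benchmark record.
    benchmark.extra_info["quality_mean"] = statistics.mean(qualities)
    if len(qualities) > 1:
        benchmark.extra_info["quality_stdev"] = statistics.stdev(qualities)
```

These values end up in the JSON output (e.g. with `--benchmark-json`), but they are not shown in the built-in comparison table, which is exactly where a hook for customizing the reporting would help.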
