Separating benchmarks with different complexity and benchmarks with just variants #190

Open
@eregon

Description


Hello there,
I think it would be worthwhile to separate the examples into two categories:

  • Benchmarks that are faster because the variants have different algorithmic complexity (for example, https://github.com/JuanitoFatas/fast-ruby#arraybsearch-vs-arrayfind-code). I believe those will keep a clear advantage for one of the variants for a long time.
  • Other benchmarks, where the difference is minimal, depends heavily on the specific Ruby implementation and version, and where the slow and fast variants might swap places regularly.
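The first category can be demonstrated with a crude sketch using the stdlib Benchmark module (the array size and target below are illustrative assumptions; the fast-ruby suite itself uses benchmark-ips):

```ruby
require "benchmark"

SORTED = (1..100_000).to_a
TARGET = 99_999

# O(log n): binary search; requires the array to be sorted.
def with_bsearch(arr, target)
  arr.bsearch { |x| x >= target }
end

# O(n): linear scan from the front.
def with_find(arr, target)
  arr.find { |x| x >= target }
end

bsearch_time = Benchmark.realtime { 1_000.times { with_bsearch(SORTED, TARGET) } }
find_time    = Benchmark.realtime { 1_000.times { with_find(SORTED, TARGET) } }
puts format("bsearch: %.4fs, find: %.4fs", bsearch_time, find_time)
```

Because the complexity differs, the gap only grows with the array size, regardless of the Ruby implementation.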

I think the second category deserves a clear warning that those results were measured on a specific version of CRuby, might no longer apply, and likely do not apply to other Ruby implementations.

For fun, @gogainda ran these benchmarks on TruffleRuby at https://github.com/gogainda/fast-truffleruby
From a quick look, many of the differences seen on MRI don't exist on TruffleRuby (e.g., Sequential vs Parallel Assignment).
Also, many of these micro-benchmarks optimize away entirely (>1 billion i/s); in other words, the operation alone costs basically nothing (roughly <10 cycles). I read that as a useful word of caution: micro-benchmarks can test something real code never would, and can show differences that don't matter in practice.
In general, I'd recommend benchmarking in the setup of your app/program, on the machine where the performance will matter. For example, a variant might be 25% faster in a micro-benchmark but yield a 0% speedup on the full app, and therefore be of limited value.
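To illustrate the second category, here is a sketch of the kind of micro-benchmark whose result depends almost entirely on the implementation (the iteration count is an arbitrary assumption; on TruffleRuby both variants typically optimize to the same code):

```ruby
require "benchmark"

ITERS = 1_000_000

# Sequential assignment variant.
seq_time = Benchmark.realtime do
  ITERS.times do
    a = 1
    b = 2
    c = 3
    [a, b, c]
  end
end

# Parallel assignment variant: same semantics, different syntax.
par_time = Benchmark.realtime do
  ITERS.times do
    a, b, c = 1, 2, 3
    [a, b, c]
  end
end

puts format("sequential: %.4fs, parallel: %.4fs", seq_time, par_time)
```

Whichever variant wins here on one CRuby version can lose on another, which is exactly why such results deserve a version caveat.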
