Description
For some operations that you want to benchmark, there is unavoidable overhead involved, such as extra method calls.
This is particularly problematic in nano-benchmarks, where a single method call of overhead can ruin your results, but it can also affect micro-benchmarks and other situations.
I ran into this when comparing the cost of dynamic variant casting to explicit variant casting. With overhead, `dynamic` is 2.75x faster, but without overhead, `dynamic` is 4.85x faster.
I suggest adding an option to remove overhead from the results prior to scaling.
One way to achieve this would be to mark one of the benchmarked methods as a special baseline using an attribute, and then add a new normalized column to the output: each method's mean minus the mean of this special baseline. The scaled column would then be computed from the normalized results instead of the raw means.
I propose calling this special baseline an additive baseline, in contrast to the normal baseline, which I'm referring to as a multiplicative baseline. However, it could also be identified as `Overhead`, `Normalizer`, or something similar.
The results, with `Direct` being the additive baseline and `NormalCast` being the multiplicative baseline (the existing baseline), might look something like this:
Method | Mean | StdDev | Normalized | Scaled |
---------------------- |----------- |---------- |----------- |------- |
Direct | 16.2276 ns | 0.0310 ns | 0.0 ns | 0.00 |
NormalCast | 16.5152 ns | 0.0073 ns | 0.2876 ns | 1.00 |
ExplicitCovariantCast | 80.6373 ns | 0.0351 ns | 64.4097 ns | 223.96 |
DynamicCovariantCast | 29.4733 ns | 0.1694 ns | 13.2457 ns | 46.06 |
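To illustrate the proposed arithmetic (not any existing API), here is a minimal sketch in Python that reproduces the `Normalized` and `Scaled` columns from the means in the table above, assuming `Direct` is the additive baseline and `NormalCast` the multiplicative one:

```python
# Mean times in nanoseconds, taken from the table above.
means = {
    "Direct": 16.2276,
    "NormalCast": 16.5152,
    "ExplicitCovariantCast": 80.6373,
    "DynamicCovariantCast": 29.4733,
}

# Step 1: subtract the additive baseline (the measured overhead).
overhead = means["Direct"]
normalized = {name: mean - overhead for name, mean in means.items()}

# Step 2: scale by the multiplicative baseline's normalized mean.
unit = normalized["NormalCast"]
scaled = {name: value / unit for name, value in normalized.items()}

for name in means:
    print(f"{name}: normalized={normalized[name]:.4f} ns, scaled={scaled[name]:.2f}")
```

Running this reproduces the table: `ExplicitCovariantCast` scales to 223.96 and `DynamicCovariantCast` to 46.06, rather than the much smaller ratios you would get by dividing the raw means directly.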