
CI for benchmarks online #10

Open

@lukego

This repo is cool! I am really happy to have a test suite. This seems great for people who want to maintain their own branches and keep track of how they compare with everybody else's. Like: have I broken something? Have my optimizations worked? Has somebody else made optimizations that I should merge? Just now I would like to maintain a branch called lowlevel to soak up things like intrinsics and DynASM Lua-mode, so this is right on target for me.

I whipped up a Continuous Integration job to help. The CI downloads the latest code for some well-known branches, runs the benchmark suite 100 times for each branch, and reports the results. This updates automatically when any of the branches change (including the benchmark definitions).
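Roughly, the job boils down to a loop like the following minimal sketch in Python. The fork URLs, the `make` build step, and the `./run-benchmarks` entry point are illustrative placeholders, not the real CI code:

```python
# Minimal sketch of the CI loop: fetch each well-known branch, build it,
# then run the whole benchmark suite 100 times against that build.
# The fork URLs and the ./run-benchmarks entry point are placeholders.
import subprocess

RUNS = 100
BRANCHES = {
    # label: (repository URL, branch name) -- URLs are illustrative
    "master": ("https://github.com/LuaJIT/LuaJIT.git", "master"),
    "v2.1": ("https://github.com/LuaJIT/LuaJIT.git", "v2.1"),
    # ... the other forks and branches are configured the same way ...
}

for label, (url, branch) in BRANCHES.items():
    # Fresh shallow clone so each job sees the latest code on the branch.
    subprocess.run(["git", "clone", "--depth=1", "--branch", branch, url, label],
                   check=True)
    subprocess.run(["make", "-C", label], check=True)
    for run in range(RUNS):
        # Tag each run so per-run variation survives into the report.
        subprocess.run(["./run-benchmarks", "--luajit", f"{label}/src/luajit",
                        "--tag", f"{label}-run{run}"], check=True)
```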

The reason I run the benchmarks 100 times is to support tests that use randomness to exercise non-determinism in the JIT, like roulette (#9). Repeated runs let us quantify how consistent the benchmark results are between runs, and once we have a metric for consistency, it is more straightforward to optimize (see LuaJIT/LuaJIT#218).
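For example, with 100 timings per benchmark, one simple consistency metric is the relative standard deviation (standard deviation divided by mean). A minimal sketch, assuming the timings have already been collected into a list of seconds per run (the sample data and the ~5% threshold are illustrative assumptions):

```python
# Quantify run-to-run consistency of one benchmark as the relative
# standard deviation (RSD) of its timings across repeated runs.
import statistics

def relative_stddev(timings):
    """Standard deviation as a fraction of the mean: 0.0 means perfectly
    repeatable; larger values mean noisier results."""
    return statistics.stdev(timings) / statistics.mean(timings)

timings = [1.02, 0.98, 1.01, 1.05, 0.97]  # stand-in for 100 real runs
print(f"RSD: {relative_stddev(timings):.1%}")
# e.g. benchmarks with RSD above ~5% could be flagged as unstable
```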

The branches I am testing now are master, v2.1, agentzh-v2.1, corsix/x64, and lukego/lowlevel. If anybody would like a branch added (or removed), just drop me a comment here. Currently the benchmark definitions are coming from my fork because I wanted to include roulette to check that variation is measured correctly.

[Screenshot of the first benchmark graph]

Hope somebody else finds this useful, too! Feedback & pull requests welcome. I plan to keep this operational.
