Add workflow for benchmark #257
Conversation
Not sure why it doesn't print a table with units as it does locally? Here is how the table looks locally:

```
# A tibble: 6 × 6
  Benchmark      PR_median PR_total_time Main_median Main_total_time Slowdown_percent_median
  <chr>           <bch:tm>      <bch:tm>    <bch:tm>        <bch:tm>                   <dbl>
1 parents           64.1µs         479ms        62µs           473ms                  3.39
2 children          64.4µs         475ms      62.7µs           471ms                  2.71
3 ancestors           72µs         470ms      72.4µs           465ms                 -0.552
4 descendants       69.8µs         476ms      70.5µs           469ms                 -0.993
5 subgraph         114.1µs         486ms       114µs           437ms                  0.0877
6 d_seperated       96.6µs         481ms      95.3µs           475ms                  1.36
```

While on GA it looks like this:

```
# A tibble: 6 × 6
  Benchmark      PR_median PR_total_time  Main_median Main_total_time Slowdown_percent_median
  <chr>         <bench_tm>    <bench_tm>   <bench_tm>      <bench_tm>                   <dbl>
1 parents     0.0001281491     0.4844711 0.0001298371       0.4840001                   -1.30
2 children    0.0001274326     0.4841135 0.0001290006       0.4848658                   -1.22
3 ancestors   0.0001475360     0.4741412 0.0001499145       0.4759974                   -1.59
4 descendants 0.0001473451     0.4849910 0.0001503401       0.4813027                   -1.99
5 subgraph    0.0002215331     0.4852312 0.0002308000       0.4834147                   -4.02
6 d_seperated 0.0001925391     0.4805583 0.0001967761       0.4819401                   -2.15
```
Nice idea! I am wondering... should we let the workflow comment on the PR too? That way you'd see the performance improvement/degradation directly?
jolars left a comment
LGTM, but I have a few small suggestions. No blockers though.
```yaml
          r-version: release
          use-public-rspm: true
```
```yaml
      - name: Install dependencies
```
You should probably use the setup-r-dependencies action here, no?
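For reference, a minimal sketch of what that could look like (assuming the `r-lib/actions/setup-r-dependencies@v2` action; `extra-packages` names the benchmark packages on top of the `DESCRIPTION` dependencies):

```yaml
      # Installs DESCRIPTION dependencies plus the extra benchmark packages,
      # with package caching handled by the action itself
      - uses: r-lib/actions/setup-r-dependencies@v2
        with:
          extra-packages: |
            any::bench
            any::optparse
            any::tibble
```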
```yaml
        run: Rscript -e 'install.packages(c("remotes", "bench", "caugi", "optparse", "tibble"))'
```
```yaml
      - name: Install PR branch package
        run: Rscript -e 'remotes::install_local(".", upgrade="never")'
```
Here, too, you should probably not call Rscript directly, I think. Or?
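One way to avoid that is GitHub Actions' custom-shell support, so the step body is plain R (a sketch; `shell: Rscript {0}` pipes the `run` block through Rscript):

```yaml
      - name: Install PR branch package
        # With `shell: Rscript {0}`, the run block is executed as R code,
        # so no explicit `Rscript -e` invocation is needed
        run: remotes::install_local(".", upgrade = "never")
        shell: Rscript {0}
```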
```yaml
        )
```
```yaml
        cat("\n=== Benchmark Comparison ===\n")
        print(result_table, row.names = FALSE, width = Inf)
```
Maybe there's some nicer way to output a markdown table instead, what do you think?
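One option (a sketch, assuming `knitr` is added to the installed packages): render the table with `knitr::kable()` and append it to `$GITHUB_STEP_SUMMARY`, which GitHub renders as markdown on the workflow run page:

```r
# Render the comparison as a GitHub-flavoured markdown (pipe) table
md <- knitr::kable(result_table, format = "pipe")

# On GitHub Actions, GITHUB_STEP_SUMMARY points at a file whose markdown
# contents are rendered on the run page; fall back to stdout when unset
summary_file <- Sys.getenv("GITHUB_STEP_SUMMARY")
if (nzchar(summary_file)) {
  cat(md, file = summary_file, sep = "\n", append = TRUE)
} else {
  cat(md, sep = "\n")
}
```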
Yeah, I thought about it, but then it should probably only be done conditionally, I think (e.g. via a specific phrase in the PR description/comment)? Otherwise it would probably get annoying when it shows up on a completely unrelated PR.
Yes, I think you're right!
Related to the discussion on Discord.
This adds a workflow that essentially just runs the relevant code from the performance article on the PR branch and on main.
It prints a table comparing the output of `bench::mark()`, and it fails if any of the benchmarked functions has a median time that is at least 20% slower (fairly arbitrary, but I observed a 10% difference at one point, so the threshold needs to be higher than that to avoid false positives).