chore: add LNSym omega benchmarks with large problems that take multiple seconds to solve #5622
Conversation
Mathlib CI status (docs):

!bench

Oh this is not going to work yet, you need to add it to the bench configuration of course, my bad.
Here are the benchmark results for commit 4bf37d0.

```
  Benchmark       Metric          Change
  ===============================================
- bv_decide_mul   branch-misses   2.9%  (16.5 σ)
```
!bench

Here are the benchmark results for commit d4808ba.

Typo in your file name.

(Force-pushed from 4070279 to 5eaf27f.)
!bench

Here are the benchmark results for commit 5eaf27f.

```
  Benchmark       Metric       Change
  ============================================
- bv_decide_mul   task-clock   1.2%  (10.7 σ)
- bv_decide_mul   wall-clock   1.7%  (20.9 σ)
```
I tried to measure this with an actual watch, but I would like a more reliable method than a literal wall clock to measure the difference between time-in-IDE and time-in-batch (this is easy, use …).
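For a more reliable measurement than a wall clock, Lean's built-in profiler reports per-phase timings in both batch mode and the IDE. A minimal sketch — the goal below is illustrative, and the threshold value is an assumption; `set_option profiler true` alone also works:

```lean
-- Enable Lean's built-in profiler so per-phase timings
-- ("tactic execution", "instantiate metavars", …) are reported.
set_option profiler true
-- Only report phases taking at least 50 ms (value is illustrative).
set_option profiler.threshold 50

-- Illustrative linear-arithmetic goal: from x + 2*y ≤ 10 and 3 ≤ x
-- we get 2*y ≤ 7, hence y ≤ 3.
example (x y : Nat) (h₁ : x + 2 * y ≤ 10) (h₂ : 3 ≤ x) : y ≤ 3 := by
  omega
```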
!bench

Here are the benchmark results for commit f14c278.
For sanity checking, I reproduced @bollu's benchmarks for … in batch mode and in VSCode (timings below).

I'm not sure how to get these files tracked on http://speed.lean-fro.org/lean4/run-detail/b07e1b73-1c45-4917-9065-41b479f55c6c. I guess that someone will need to teach it to also track …
…g inf. (#223)

### Description:

This PR replaces `bv_omega` with `bv_omega_bench`, which writes benchmarking results to a user-specified file path. This enables us to extract benchmarks to be upstreamed, as begun in leanprover/lean4#5622. The file path, whether the benchmark run is enabled, and the minimum time a run must take to be added to the benchmark are all user-configurable parameters.

### Testing:

What tests have been run? Did `make all` succeed for your changes? Was conformance testing successful on an Aarch64 machine?

### License:

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

Co-authored-by: Shilpi Goel <[email protected]>
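In its simplest form, a `bv_omega_bench`-style wrapper can be a tactic macro that delegates to `bv_omega`. This is only a sketch based on the description above — it omits the actual timing and file-writing logic:

```lean
-- Minimal sketch of a `bv_omega_bench`-style wrapper: a tactic macro
-- that simply delegates to `bv_omega`. The real implementation described
-- above additionally measures elapsed time and appends results to a
-- user-configured file when the run exceeds a minimum duration.
macro "bv_omega_bench" : tactic => `(tactic| bv_omega)
```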
I added an example that shows a fairly dramatic difference in time between the shell and VSCode: Shell: 1.59s versus VSCode: 18.5s for "instantiate metavars".

In the shell:

```
tactic execution of Lean.Parser.Tactic.omega took 4.3s
instantiate metavars took 1.97s
share common exprs took 1.59s
type checking took 1.35s
process pre-definitions took 1.85s
linting took 127ms
elaboration took 690ms
cumulative profiling times:
  attribute application 0.00283ms
  elaboration 690ms
  fix level params 46.1ms
  instantiate metavars 1.97s
  linting 127ms
  parsing 2.07ms
  process pre-definitions 1.85s
  share common exprs 1.59s
  simp 18.7ms
  tactic execution 4.3s
  type checking 1.35s
  typeclass inference 299ms
```

In VSCode:

```
tactic execution of Lean.Parser.Tactic.omega took 4.08s
instantiate metavars took 18.5s
share common exprs took 5.04s
type checking took 1.09s
process pre-definitions took 1.14s
linting took 382ms
elaboration took 3.06s
```
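The benchmarks being upstreamed are large linear-arithmetic goals for `omega`. The goal below is purely illustrative of that shape — it is not one of the actual LNSym problems, which are much larger:

```lean
-- Illustrative only: a linear-arithmetic goal of the kind `omega` solves.
-- From a + b = 2*c and c + d ≤ 100 we get
-- a + b + 2*d = 2*(c + d) ≤ 200 ≤ 220.
example (a b c d : Nat)
    (h₁ : a + b = 2 * c) (h₂ : c + d ≤ 100) (h₃ : 10 ≤ d) :
    a + b + 2 * d ≤ 220 := by
  omega
```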