@@ -190,10 +190,10 @@ for you, but it expects to find the Python versions specified
 in the ``asv.conf.json`` file available on the ``PATH``. For example,
 if the ``asv.conf.json`` file has::
 
-    "pythons": ["2.7", "3.6"]
+    "pythons": ["3.7", "3.12"]
 
-then it will use the executables named ``python2.7`` and
-``python3.6`` on the path. There are many ways to get multiple
+then it will use the executables named ``python3.7`` and
+``python3.12`` on the path. There are many ways to get multiple
 versions of Python installed -- your package manager, ``apt-get``,
 ``yum``, ``MacPorts`` or ``homebrew`` probably has them, or you
 can also use `pyenv <https://github.com/yyuu/pyenv>`__.
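
For context, a minimal ``asv.conf.json`` built around that ``pythons``
setting might look as follows; this is only a sketch, and the project
name, repository URL, and branch are placeholders::

    {
        "version": 1,
        "project": "myproject",
        "repo": "https://github.com/example/myproject.git",
        "branches": ["main"],
        "pythons": ["3.7", "3.12"]
    }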
@@ -215,21 +215,21 @@ Finally, the benchmarks are run::
     · Fetching recent changes
     · Creating environments......
     · Discovering benchmarks
-    ·· Uninstalling from virtualenv-py2.7
-    ·· Building 4238c44d <main> for virtualenv-py2.7
-    ·· Installing into virtualenv-py2.7.
+    ·· Uninstalling from virtualenv-py3.7
+    ·· Building 4238c44d <main> for virtualenv-py3.7
+    ·· Installing into virtualenv-py3.7.
     · Running 10 total benchmarks (1 commits * 2 environments * 5 benchmarks)
     [  0.00%] · For project commit 4238c44d <main>:
-    [  0.00%] ·· Building for virtualenv-py2.7.
-    [  0.00%] ·· Benchmarking virtualenv-py2.7
+    [  0.00%] ·· Building for virtualenv-py3.7.
+    [  0.00%] ·· Benchmarking virtualenv-py3.7
     [ 10.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
     [ 30.00%] ··· benchmarks.MemSuite.mem_list               2.42k
     [ 35.00%] ··· benchmarks.TimeSuite.time_iterkeys   11.1±0.01μs
     [ 40.00%] ··· benchmarks.TimeSuite.time_keys       11.2±0.01μs
     [ 45.00%] ··· benchmarks.TimeSuite.time_range      32.9±0.01μs
     [ 50.00%] ··· benchmarks.TimeSuite.time_xrange     30.3±0.01μs
-    [ 50.00%] ·· Building for virtualenv-py3.6..
-    [ 50.00%] ·· Benchmarking virtualenv-py3.6
+    [ 50.00%] ·· Building for virtualenv-py3.12..
+    [ 50.00%] ·· Benchmarking virtualenv-py3.12
     [ 60.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
     [ 80.00%] ··· benchmarks.MemSuite.mem_list               2.11k
     [ 85.00%] ··· benchmarks.TimeSuite.time_iterkeys        failed
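
The benchmark names in this run map onto a suite much like the one in
asv's quickstart guide; a minimal sketch follows (the dictionary size of
500 and the list size of 256 are assumptions, not taken from the output
above)::

    class TimeSuite:
        """Time several ways of iterating over a dictionary."""

        def setup(self):
            # Build the dictionary once per benchmark environment.
            self.d = {}
            for x in range(500):
                self.d[x] = None

        def time_keys(self):
            for key in self.d.keys():
                pass

        def time_iterkeys(self):
            # dict.iterkeys() was removed in Python 3, so this can
            # fail in Python 3 environments.
            for key in self.d.iterkeys():
                pass

        def time_range(self):
            for key in range(500):
                self.d[key]

        def time_xrange(self):
            # xrange() is Python 2 only.
            for key in xrange(500):
                self.d[key]


    class MemSuite:
        def mem_list(self):
            # mem_ benchmarks return the object whose size asv measures.
            return [0] * 256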
@@ -333,11 +333,11 @@ results from previous runs on the command line::
 
     $ asv show main
     Commit: 4238c44d <main>
 
-    benchmarks.MemSuite.mem_list [mymachine/virtualenv-py2.7]
+    benchmarks.MemSuite.mem_list [mymachine/virtualenv-py3.7]
       2.42k
       started: 2018-08-19 18:46:47, duration: 1.00s
 
-    benchmarks.TimeSuite.time_iterkeys [mymachine/virtualenv-py2.7]
+    benchmarks.TimeSuite.time_iterkeys [mymachine/virtualenv-py3.7]
       11.1±0.06μs
       started: 2018-08-19 18:46:47, duration: 1.00s
@@ -406,9 +406,9 @@ The ``asv rm`` command will prompt before performing any operations.
 Passing the ``-y`` option will skip the prompt.
 
 Here is a more complex example, to remove all of the benchmarks on
-Python 2.7 and the machine named ``giraffe``::
+Python 3.7 and the machine named ``giraffe``::
 
-    asv rm python=2.7 machine=giraffe
+    asv rm python=3.7 machine=giraffe
 
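
As noted above, passing ``-y`` skips the confirmation prompt here as
well::

    asv rm -y python=3.7 machine=giraffe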
 
 Finding a commit that produces a large regression
@@ -500,9 +500,9 @@ simple table summary of profiling results is displayed::
        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.000    0.000    6.844    6.844 asv/benchmark.py:171(method_caller)
             1    0.000    0.000    6.844    6.844 asv/benchmark.py:197(run)
-            1    0.000    0.000    6.844    6.844 /usr/lib64/python2.7/timeit.py:201(repeat)
-            3    0.000    0.000    6.844    2.281 /usr/lib64/python2.7/timeit.py:178(timeit)
-            3    0.104    0.035    6.844    2.281 /usr/lib64/python2.7/timeit.py:96(inner)
+            1    0.000    0.000    6.844    6.844 /usr/lib64/python3.7/timeit.py:201(repeat)
+            3    0.000    0.000    6.844    2.281 /usr/lib64/python3.7/timeit.py:178(timeit)
+            3    0.104    0.035    6.844    2.281 /usr/lib64/python3.7/timeit.py:96(inner)
        300000    0.398    0.000    6.740    0.000 benchmarks/time_units.py:20(time_very_simple_unit_parse)
        300000    1.550    0.000    6.342    0.000 astropy/units/core.py:1673(__call__)
        300000    0.495    0.000    2.416    0.000 astropy/units/format/generic.py:361(parse)
@@ -512,7 +512,7 @@ simple table summary of profiling results is displayed::
       3000002    0.735    0.000    0.735    0.000 {isinstance}
        300000    0.403    0.000    0.403    0.000 {method 'decode' of 'str' objects}
        300000    0.216    0.000    0.216    0.000 astropy/units/format/generic.py:32(__init__)
-       300000    0.152    0.000    0.188    0.000 /usr/lib64/python2.7/inspect.py:59(isclass)
+       300000    0.152    0.000    0.188    0.000 /usr/lib64/python3.7/inspect.py:59(isclass)
        900000    0.170    0.000    0.170    0.000 {method 'lower' of 'unicode' objects}
        300000    0.133    0.000    0.133    0.000 {method 'count' of 'unicode' objects}
        300000    0.078    0.000    0.078    0.000 astropy/units/core.py:272(get_current_unit_registry)
@@ -521,13 +521,13 @@ simple table summary of profiling results is displayed::
        300000    0.038    0.000    0.038    0.000 {method 'strip' of 'str' objects}
        300003    0.037    0.000    0.037    0.000 {globals}
        300000    0.033    0.000    0.033    0.000 {len}
-            3    0.000    0.000    0.000    0.000 /usr/lib64/python2.7/timeit.py:143(setup)
-            1    0.000    0.000    0.000    0.000 /usr/lib64/python2.7/timeit.py:121(__init__)
+            3    0.000    0.000    0.000    0.000 /usr/lib64/python3.7/timeit.py:143(setup)
+            1    0.000    0.000    0.000    0.000 /usr/lib64/python3.7/timeit.py:121(__init__)
             6    0.000    0.000    0.000    0.000 {time.time}
             1    0.000    0.000    0.000    0.000 {min}
             1    0.000    0.000    0.000    0.000 {range}
             1    0.000    0.000    0.000    0.000 {hasattr}
-            1    0.000    0.000    0.000    0.000 /usr/lib64/python2.7/timeit.py:94(_template_func)
+            1    0.000    0.000    0.000    0.000 /usr/lib64/python3.7/timeit.py:94(_template_func)
             3    0.000    0.000    0.000    0.000 {gc.enable}
             3    0.000    0.000    0.000    0.000 {method 'append' of 'list' objects}
             3    0.000    0.000    0.000    0.000 {gc.disable}
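
Output of this form comes from pointing ``asv profile`` at a single
benchmark, optionally followed by a commit hash; for the benchmark shown
in this table the invocation would be roughly::

    asv profile time_units.time_very_simple_unit_parse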
@@ -586,16 +586,16 @@ revisions of the project. You can do so with the ``compare`` command::
            before           after         ratio
          [3bfda9c6]       [bf719488]
          <v0.1>           <v0.2>
-            40.4m            40.4m     1.00  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py2.7-numpy]
-           failed            35.2m      n/a  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.6-numpy]
-      11.5±0.08μs         11.0±0μs     0.96  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py2.7-numpy]
-           failed           failed      n/a  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.6-numpy]
-         11.5±1μs      11.2±0.02μs     0.97  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py2.7-numpy]
-           failed      8.40±0.02μs      n/a  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.6-numpy]
-      34.6±0.09μs      32.9±0.01μs     0.95  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py2.7-numpy]
-           failed      35.6±0.05μs      n/a  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.6-numpy]
-       31.6±0.1μs      30.2±0.02μs     0.95  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py2.7-numpy]
-           failed           failed      n/a  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.6-numpy]
+            40.4m            40.4m     1.00  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.7-numpy]
+           failed            35.2m      n/a  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.12-numpy]
+      11.5±0.08μs         11.0±0μs     0.96  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.7-numpy]
+           failed           failed      n/a  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.12-numpy]
+         11.5±1μs      11.2±0.02μs     0.97  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.7-numpy]
+           failed      8.40±0.02μs      n/a  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.12-numpy]
+      34.6±0.09μs      32.9±0.01μs     0.95  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.7-numpy]
+           failed      35.6±0.05μs      n/a  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.12-numpy]
+       31.6±0.1μs      30.2±0.02μs     0.95  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.7-numpy]
+           failed           failed      n/a  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.12-numpy]
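
Given the tags shown in the column headers, a table like this one comes
from an invocation along the lines of::

    asv compare v0.1 v0.2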
 
 This will show the times for each benchmark for the first and second
 revision, and the ratio of the second to the first. In addition, the