@@ -190,10 +190,10 @@ for you, but it expects to find the Python versions specified
in the ``asv.conf.json`` file available on the ``PATH``. For example,
if the ``asv.conf.json`` file has::

-    "pythons": ["2.7", "3.6"]
+    "pythons": ["3.7", "3.12"]

- then it will use the executables named ``python2.7`` and
- ``python3.6`` on the path. There are many ways to get multiple
+ then it will use the executables named ``python3.7`` and
+ ``python3.12`` on the path. There are many ways to get multiple
versions of Python installed -- your package manager, ``apt-get``,
``yum``, ``MacPorts`` or ``homebrew`` probably has them, or you
can also use `pyenv <https://github.com/yyuu/pyenv>`__.
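
For example, one way to make both interpreters available on the
``PATH`` is via pyenv; the commands below are a sketch, and the exact
patch versions are illustrative rather than taken from this document::

    pyenv install 3.7.17        # one interpreter per entry in "pythons"
    pyenv install 3.12.3
    pyenv global 3.7.17 3.12.3  # exposes the python3.7 and python3.12 shims
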
@@ -215,21 +215,21 @@ Finally, the benchmarks are run::
· Fetching recent changes
· Creating environments......
· Discovering benchmarks
- ·· Uninstalling from virtualenv-py2.7
- ·· Building 4238c44d <main> for virtualenv-py2.7
- ·· Installing into virtualenv-py2.7.
+ ·· Uninstalling from virtualenv-py3.7
+ ·· Building 4238c44d <main> for virtualenv-py3.7
+ ·· Installing into virtualenv-py3.7.
· Running 10 total benchmarks (1 commits * 2 environments * 5 benchmarks)
[  0.00%] · For project commit 4238c44d <main>:
- [  0.00%] ·· Building for virtualenv-py2.7.
- [  0.00%] ·· Benchmarking virtualenv-py2.7
+ [  0.00%] ·· Building for virtualenv-py3.7.
+ [  0.00%] ·· Benchmarking virtualenv-py3.7
[ 10.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
[ 30.00%] ··· benchmarks.MemSuite.mem_list                 2.42k
[ 35.00%] ··· benchmarks.TimeSuite.time_iterkeys     11.1±0.01μs
[ 40.00%] ··· benchmarks.TimeSuite.time_keys         11.2±0.01μs
[ 45.00%] ··· benchmarks.TimeSuite.time_range        32.9±0.01μs
[ 50.00%] ··· benchmarks.TimeSuite.time_xrange       30.3±0.01μs
- [ 50.00%] ·· Building for virtualenv-py3.6..
- [ 50.00%] ·· Benchmarking virtualenv-py3.6
+ [ 50.00%] ·· Building for virtualenv-py3.12..
+ [ 50.00%] ·· Benchmarking virtualenv-py3.12
[ 60.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
[ 80.00%] ··· benchmarks.MemSuite.mem_list                 2.11k
[ 85.00%] ··· benchmarks.TimeSuite.time_iterkeys          failed
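
The benchmark file producing these names is not part of this diff; a
``benchmarks/benchmarks.py`` along the lines of the asv quickstart
would look roughly like the sketch below. ``dict.iterkeys()`` and
``xrange()`` exist only on Python 2, which is why such benchmarks can
report ``failed`` when run in a Python 3 environment::

    class TimeSuite:
        """Time several ways of iterating over a dictionary."""

        def setup(self):
            # setup() runs before each measurement and is excluded
            # from the reported timings.
            self.d = {}
            for x in range(500):
                self.d[x] = None

        def time_keys(self):
            for key in self.d.keys():
                pass

        def time_iterkeys(self):
            # dict.iterkeys() only exists on Python 2.
            for key in self.d.iterkeys():
                pass

        def time_range(self):
            d = self.d
            for key in range(500):
                d[key]

        def time_xrange(self):
            # xrange() only exists on Python 2.
            d = self.d
            for key in xrange(500):
                d[key]


    class MemSuite:
        def mem_list(self):
            # mem_* benchmarks report the size of the returned object.
            return [0] * 256
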
@@ -337,11 +337,11 @@ results from previous runs on the command line::
$ asv show main
Commit: 4238c44d <main>

- benchmarks.MemSuite.mem_list [mymachine/virtualenv-py2.7]
+ benchmarks.MemSuite.mem_list [mymachine/virtualenv-py3.7]
  2.42k
  started: 2018-08-19 18:46:47, duration: 1.00s

- benchmarks.TimeSuite.time_iterkeys [mymachine/virtualenv-py2.7]
+ benchmarks.TimeSuite.time_iterkeys [mymachine/virtualenv-py3.7]
  11.1±0.06μs
  started: 2018-08-19 18:46:47, duration: 1.00s
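
When ``asv show`` is run without a commit argument it lists the commits
for which results have been saved; naming a commit, as with ``asv show
main`` above, prints the saved results for that commit.
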
@@ -410,9 +410,9 @@ The ``asv rm`` command will prompt before performing any operations.
Passing the ``-y`` option will skip the prompt.

Here is a more complex example, to remove all of the benchmarks on
- Python 2.7 and the machine named ``giraffe``::
+ Python 3.7 and the machine named ``giraffe``::

-    asv rm python=2.7 machine=giraffe
+    asv rm python=3.7 machine=giraffe
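
Because ``-y`` skips the confirmation prompt, the same removal can be
run non-interactively by combining the options shown above::

    asv rm -y python=3.7 machine=giraffe
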
Finding a commit that produces a large regression
@@ -504,9 +504,9 @@ simple table summary of profiling results is displayed::
     ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
          1    0.000    0.000    6.844    6.844  asv/benchmark.py:171(method_caller)
          1    0.000    0.000    6.844    6.844  asv/benchmark.py:197(run)
-         1    0.000    0.000    6.844    6.844  /usr/lib64/python2.7/timeit.py:201(repeat)
-         3    0.000    0.000    6.844    2.281  /usr/lib64/python2.7/timeit.py:178(timeit)
-         3    0.104    0.035    6.844    2.281  /usr/lib64/python2.7/timeit.py:96(inner)
+         1    0.000    0.000    6.844    6.844  /usr/lib64/python3.7/timeit.py:201(repeat)
+         3    0.000    0.000    6.844    2.281  /usr/lib64/python3.7/timeit.py:178(timeit)
+         3    0.104    0.035    6.844    2.281  /usr/lib64/python3.7/timeit.py:96(inner)
     300000    0.398    0.000    6.740    0.000  benchmarks/time_units.py:20(time_very_simple_unit_parse)
     300000    1.550    0.000    6.342    0.000  astropy/units/core.py:1673(__call__)
     300000    0.495    0.000    2.416    0.000  astropy/units/format/generic.py:361(parse)
@@ -516,7 +516,7 @@ simple table summary of profiling results is displayed::
    3000002    0.735    0.000    0.735    0.000  {isinstance}
     300000    0.403    0.000    0.403    0.000  {method 'decode' of 'str' objects}
     300000    0.216    0.000    0.216    0.000  astropy/units/format/generic.py:32(__init__)
-    300000    0.152    0.000    0.188    0.000  /usr/lib64/python2.7/inspect.py:59(isclass)
+    300000    0.152    0.000    0.188    0.000  /usr/lib64/python3.7/inspect.py:59(isclass)
     900000    0.170    0.000    0.170    0.000  {method 'lower' of 'unicode' objects}
     300000    0.133    0.000    0.133    0.000  {method 'count' of 'unicode' objects}
     300000    0.078    0.000    0.078    0.000  astropy/units/core.py:272(get_current_unit_registry)
@@ -525,13 +525,13 @@ simple table summary of profiling results is displayed::
     300000    0.038    0.000    0.038    0.000  {method 'strip' of 'str' objects}
     300003    0.037    0.000    0.037    0.000  {globals}
     300000    0.033    0.000    0.033    0.000  {len}
-         3    0.000    0.000    0.000    0.000  /usr/lib64/python2.7/timeit.py:143(setup)
-         1    0.000    0.000    0.000    0.000  /usr/lib64/python2.7/timeit.py:121(__init__)
+         3    0.000    0.000    0.000    0.000  /usr/lib64/python3.7/timeit.py:143(setup)
+         1    0.000    0.000    0.000    0.000  /usr/lib64/python3.7/timeit.py:121(__init__)
          6    0.000    0.000    0.000    0.000  {time.time}
          1    0.000    0.000    0.000    0.000  {min}
          1    0.000    0.000    0.000    0.000  {range}
          1    0.000    0.000    0.000    0.000  {hasattr}
-         1    0.000    0.000    0.000    0.000  /usr/lib64/python2.7/timeit.py:94(_template_func)
+         1    0.000    0.000    0.000    0.000  /usr/lib64/python3.7/timeit.py:94(_template_func)
          3    0.000    0.000    0.000    0.000  {gc.enable}
          3    0.000    0.000    0.000    0.000  {method 'append' of 'list' objects}
          3    0.000    0.000    0.000    0.000  {gc.disable}
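
A table like this is produced by the ``profile`` command; for the
benchmark visible in the rows above, the invocation would be roughly
the sketch below (the benchmark name is inferred from the
``filename:lineno(function)`` column)::

    asv profile time_units.time_very_simple_unit_parse
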
@@ -590,16 +590,16 @@ revisions of the project. You can do so with the ``compare`` command::
       before           after         ratio
     [3bfda9c6]       [bf719488]
     <v0.1>           <v0.2>
-        40.4m            40.4m     1.00  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py2.7-numpy]
-       failed            35.2m      n/a  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.6-numpy]
-  11.5±0.08μs         11.0±0μs     0.96  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py2.7-numpy]
-       failed           failed      n/a  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.6-numpy]
-     11.5±1μs      11.2±0.02μs     0.97  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py2.7-numpy]
-       failed      8.40±0.02μs      n/a  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.6-numpy]
-  34.6±0.09μs      32.9±0.01μs     0.95  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py2.7-numpy]
-       failed      35.6±0.05μs      n/a  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.6-numpy]
-   31.6±0.1μs      30.2±0.02μs     0.95  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py2.7-numpy]
-       failed           failed      n/a  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.6-numpy]
+        40.4m            40.4m     1.00  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.7-numpy]
+       failed            35.2m      n/a  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.12-numpy]
+  11.5±0.08μs         11.0±0μs     0.96  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.7-numpy]
+       failed           failed      n/a  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.12-numpy]
+     11.5±1μs      11.2±0.02μs     0.97  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.7-numpy]
+       failed      8.40±0.02μs      n/a  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.12-numpy]
+  34.6±0.09μs      32.9±0.01μs     0.95  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.7-numpy]
+       failed      35.6±0.05μs      n/a  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.12-numpy]
+   31.6±0.1μs      30.2±0.02μs     0.95  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.7-numpy]
+       failed           failed      n/a  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.12-numpy]

This will show the times for each benchmark for the first and second
revision, and the ratio of the second to the first. In addition, the