
Commit 0a6ea6e

Merge pull request #1367 from dstansby/conf-py-ver

DOC: Remove mentions of Python 2.7

2 parents 37bb44b + 0f387d9

File tree

5 files changed: +52 -52 lines changed

asv.conf.json (+4 -4)

@@ -43,7 +43,7 @@
     // The Pythons you'd like to test against. If not provided, defaults
     // to the current version of Python used to run `asv`.
-    "pythons": ["2.7"],
+    "pythons": ["3.12"],

     // The matrix of dependencies to test. Each key is the name of a
     // package (in PyPI) and the values are version numbers. An empty
@@ -86,10 +86,10 @@
     // ],
     //
     // "include": [
-    //     // additional env for python2.7
-    //     {"python": "2.7", "numpy": "1.8"},
+    //     // additional env for python3.12
+    //     {"python": "3.12", "numpy": "1.26"},
     //     // additional env if run on windows+conda
-    //     {"platform": "win32", "environment_type": "conda", "python": "2.7", "libpython": ""},
+    //     {"platform": "win32", "environment_type": "conda", "python": "3.12", "libpython": ""},
     //     ],

     // The directory (relative to the current directory) that benchmarks are
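The ``"pythons"`` key above is a plain JSON list of version strings, but asv.conf.json also allows ``//`` comment lines, which ``json.loads`` alone rejects. A minimal sketch of reading such a file (``load_asv_conf`` is a hypothetical helper for illustration, not asv's actual loader; it only handles whole-line comments):

```python
import json

def load_asv_conf(text):
    # asv.conf.json permits // comment lines; plain JSON does not,
    # so drop any line whose first non-space characters are "//".
    kept = [ln for ln in text.splitlines()
            if not ln.lstrip().startswith("//")]
    return json.loads("\n".join(kept))

conf = load_asv_conf("""{
    // The Pythons you'd like to test against.
    "pythons": ["3.12"]
}""")
print(conf["pythons"])  # ['3.12']
```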

asv/commands/common_args.py (+1 -1)

@@ -213,7 +213,7 @@ def __call__(self, parser, namespace, values, option_string=None):
 def add_environment(parser, default_same=False):
     help = """Specify the environment and Python versions for running the
     benchmarks. String of the format 'environment_type:python_version',
-    for example 'conda:2.7'. If the Python version is not specified,
+    for example 'conda:3.12'. If the Python version is not specified,
     all those listed in the configuration file are run. The special
     environment type 'existing:/path/to/python' runs the benchmarks
     using the given Python interpreter; if the path is omitted,
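The help text above describes specs like ``conda:3.12`` and ``existing:/path/to/python``. Splitting on only the first colon, so that paths survive intact, can be sketched as follows (``parse_env_spec`` is a hypothetical helper, not asv's own parser):

```python
def parse_env_spec(spec):
    # Split 'environment_type:python_version' on the first ':' only,
    # so 'existing:/path/to/python' keeps the full interpreter path.
    env_type, sep, python = spec.partition(":")
    return env_type, (python if sep else None)

print(parse_env_spec("conda:3.12"))    # ('conda', '3.12')
print(parse_env_spec("virtualenv"))    # ('virtualenv', None)
print(parse_env_spec("existing:/usr/bin/python3"))
```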

asv/template/asv.conf.json (+4 -4)

@@ -65,7 +65,7 @@

     // The Pythons you'd like to test against. If not provided, defaults
     // to the current version of Python used to run `asv`.
-    // "pythons": ["2.7", "3.8"],
+    // "pythons": ["3.8", "3.12"],

     // The list of conda channel names to be searched for benchmark
     // dependency packages in the specified order
@@ -138,10 +138,10 @@
     // ],
     //
     // "include": [
-    //     // additional env for python2.7
-    //     {"python": "2.7", "req": {"numpy": "1.8"}, "env_nobuild": {"FOO": "123"}},
+    //     // additional env for python3.12
+    //     {"python": "3.12", "req": {"numpy": "1.26"}, "env_nobuild": {"FOO": "123"}},
     //     // additional env if run on windows+conda
-    //     {"platform": "win32", "environment_type": "conda", "python": "2.7", "req": {"libpython": ""}},
+    //     {"platform": "win32", "environment_type": "conda", "python": "3.12", "req": {"libpython": ""}},
     //     ],

     // The directory (relative to the current directory) that benchmarks are

docs/source/asv.conf.json.rst (+12 -12)
@@ -226,7 +226,7 @@ If provided, it must be a dictionary, containing some of the keys
226226

227227
"matrix": {
228228
"req": {
229-
"numpy": ["1.7", "1.8"],
229+
"numpy": ["1.25", "1.26"],
230230
"Cython": []
231231
"six": ["", null]
232232
},
@@ -249,7 +249,7 @@ version and not installed at all::
249249

250250
"matrix": {
251251
"req": {
252-
"numpy": ["1.7", "1.8"],
252+
"numpy": ["1.25", "1.26"],
253253
"Cython": []
254254
"six": ["", null],
255255
}
@@ -351,14 +351,14 @@ For example::
351351
"pythons": ["3.8", "3.9"],
352352
"matrix": {
353353
"req": {
354-
"numpy": ["1.7", "1.8"],
354+
"numpy": ["1.25", "1.26"],
355355
"Cython": ["", null],
356356
"colorama": ["", null]
357357
},
358358
"env": {"FOO": ["1", "2"]},
359359
},
360360
"exclude": [
361-
{"python": "3.8", "req": {"numpy": "1.7"}},
361+
{"python": "3.8", "req": {"numpy": "1.25"}},
362362
{"sys_platform": "(?!win32).*", "req": {"colorama": ""}},
363363
{"sys_platform": "win32", "req": {"colorama": null}},
364364
{"env": {"FOO": "1"}},
@@ -368,12 +368,12 @@ This will generate all combinations of Python version and items in the
368368
matrix, except those with Python 3.8 and Numpy 3.9. In other words,
369369
the combinations::
370370

371-
python==3.8 numpy==1.8 Cython==latest (colorama==latest) FOO=2
372-
python==3.8 numpy==1.8 (colorama==latest) FOO=2
373-
python==3.9 numpy==1.7 Cython==latest (colorama==latest) FOO=2
374-
python==3.9 numpy==1.7 (colorama==latest) FOO=2
375-
python==3.9 numpy==1.8 Cython==latest (colorama==latest) FOO=2
376-
python==3.9 numpy==1.8 (colorama==latest) FOO=2
371+
python==3.8 numpy==1.26 Cython==latest (colorama==latest) FOO=2
372+
python==3.8 numpy==1.26 (colorama==latest) FOO=2
373+
python==3.9 numpy==1.25 Cython==latest (colorama==latest) FOO=2
374+
python==3.9 numpy==1.25 (colorama==latest) FOO=2
375+
python==3.9 numpy==1.26 Cython==latest (colorama==latest) FOO=2
376+
python==3.9 numpy==1.26 (colorama==latest) FOO=2
377377

378378
The ``colorama`` package will be installed only if the current
379379
platform is Windows.
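The combination listing above is a cross product of the matrix minus the exclude rules. A simplified sketch of that expansion (exact string matching over two keys only; real asv matches exclude values as regular expressions and also handles ``env``, ``sys_platform``, and the other keys):

```python
import itertools

pythons = ["3.8", "3.9"]
numpy_versions = ["1.25", "1.26"]
exclude = [{"python": "3.8", "numpy": "1.25"}]

def is_excluded(combo):
    # A rule excludes a combo when every key/value pair in the rule matches.
    return any(all(combo.get(k) == v for k, v in rule.items())
               for rule in exclude)

combos = [{"python": p, "numpy": n}
          for p, n in itertools.product(pythons, numpy_versions)
          if not is_excluded({"python": p, "numpy": n})]
print(combos)
# [{'python': '3.8', 'numpy': '1.26'}, {'python': '3.9', 'numpy': '1.25'},
#  {'python': '3.9', 'numpy': '1.26'}]
```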
@@ -402,9 +402,9 @@ The exclude rules are not applied to includes.
402402
For example::
403403

404404
"include": [
405-
{"python": "3.9", "req": {"numpy": "1.8.2"}, "env": {"FOO": "true"}},
405+
{"python": "3.9", "req": {"numpy": "1.26"}, "env": {"FOO": "true"}},
406406
{"platform": "win32", "environment_type": "conda",
407-
"req": {"python": "2.7", "libpython": ""}}
407+
"req": {"python": "3.12", "libpython": ""}}
408408
]
409409

410410
This corresponds to two additional environments. One runs on Python 3.9

docs/source/using.rst (+31 -31)
@@ -190,10 +190,10 @@ for you, but it expects to find the Python versions specified
190190
in the ``asv.conf.json`` file available on the ``PATH``. For example,
191191
if the ``asv.conf.json`` file has::
192192

193-
"pythons": ["2.7", "3.6"]
193+
"pythons": ["3.7", "3.12"]
194194

195-
then it will use the executables named ``python2.7`` and
196-
``python3.6`` on the path. There are many ways to get multiple
195+
then it will use the executables named ``python3.7`` and
196+
``python3.12`` on the path. There are many ways to get multiple
197197
versions of Python installed -- your package manager, ``apt-get``,
198198
``yum``, ``MacPorts`` or ``homebrew`` probably has them, or you
199199
can also use `pyenv <https://github.com/yyuu/pyenv>`__.
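Since asv looks up executables named ``pythonX.Y`` on the ``PATH``, a quick pre-flight check can be sketched with the standard library (``missing_pythons`` is a hypothetical helper, not part of asv):

```python
import shutil

def missing_pythons(versions):
    # Report the versions whose pythonX.Y executable is absent from PATH.
    return [v for v in versions if shutil.which(f"python{v}") is None]

# e.g. with "pythons": ["3.7", "3.12"] in asv.conf.json:
print(missing_pythons(["3.7", "3.12"]))
```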
@@ -215,21 +215,21 @@ Finally, the benchmarks are run::
215215
· Fetching recent changes
216216
· Creating environments......
217217
· Discovering benchmarks
218-
·· Uninstalling from virtualenv-py2.7
219-
·· Building 4238c44d <main> for virtualenv-py2.7
220-
·· Installing into virtualenv-py2.7.
218+
·· Uninstalling from virtualenv-py3.7
219+
·· Building 4238c44d <main> for virtualenv-py3.7
220+
·· Installing into virtualenv-py3.7.
221221
· Running 10 total benchmarks (1 commits * 2 environments * 5 benchmarks)
222222
[ 0.00%] · For project commit 4238c44d <main>:
223-
[ 0.00%] ·· Building for virtualenv-py2.7.
224-
[ 0.00%] ·· Benchmarking virtualenv-py2.7
223+
[ 0.00%] ·· Building for virtualenv-py3.7.
224+
[ 0.00%] ·· Benchmarking virtualenv-py3.7
225225
[ 10.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
226226
[ 30.00%] ··· benchmarks.MemSuite.mem_list 2.42k
227227
[ 35.00%] ··· benchmarks.TimeSuite.time_iterkeys 11.1±0.01μs
228228
[ 40.00%] ··· benchmarks.TimeSuite.time_keys 11.2±0.01μs
229229
[ 45.00%] ··· benchmarks.TimeSuite.time_range 32.9±0.01μs
230230
[ 50.00%] ··· benchmarks.TimeSuite.time_xrange 30.3±0.01μs
231-
[ 50.00%] ·· Building for virtualenv-py3.6..
232-
[ 50.00%] ·· Benchmarking virtualenv-py3.6
231+
[ 50.00%] ·· Building for virtualenv-py3.12..
232+
[ 50.00%] ·· Benchmarking virtualenv-py3.12
233233
[ 60.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
234234
[ 80.00%] ··· benchmarks.MemSuite.mem_list 2.11k
235235
[ 85.00%] ··· benchmarks.TimeSuite.time_iterkeys failed
@@ -333,11 +333,11 @@ results from previous runs on the command line::
333333
$ asv show main
334334
Commit: 4238c44d <main>
335335

336-
benchmarks.MemSuite.mem_list [mymachine/virtualenv-py2.7]
336+
benchmarks.MemSuite.mem_list [mymachine/virtualenv-py3.7]
337337
2.42k
338338
started: 2018-08-19 18:46:47, duration: 1.00s
339339

340-
benchmarks.TimeSuite.time_iterkeys [mymachine/virtualenv-py2.7]
340+
benchmarks.TimeSuite.time_iterkeys [mymachine/virtualenv-py3.7]
341341
11.1±0.06μs
342342
started: 2018-08-19 18:46:47, duration: 1.00s
343343

@@ -406,9 +406,9 @@ The ``asv rm`` command will prompt before performing any operations.
406406
Passing the ``-y`` option will skip the prompt.
407407

408408
Here is a more complex example, to remove all of the benchmarks on
409-
Python 2.7 and the machine named ``giraffe``::
409+
Python 3.7 and the machine named ``giraffe``::
410410

411-
asv rm python=2.7 machine=giraffe
411+
asv rm python=3.7 machine=giraffe
412412

413413

414414
Finding a commit that produces a large regression
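The ``asv rm python=3.7 machine=giraffe`` invocation in the hunk above selects results whose attributes match every ``key=value`` pair. The filtering idea, in a simplified sketch over plain dicts (illustrative only, not asv's actual result store or matching rules):

```python
def matches(result, filters):
    # Keep a result only if every key=value filter agrees with it.
    return all(result.get(key) == value for key, value in filters.items())

results = [
    {"python": "3.7", "machine": "giraffe"},
    {"python": "3.12", "machine": "giraffe"},
    {"python": "3.7", "machine": "amulet"},
]
filters = {"python": "3.7", "machine": "giraffe"}
to_remove = [r for r in results if matches(r, filters)]
print(to_remove)  # [{'python': '3.7', 'machine': 'giraffe'}]
```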
@@ -500,9 +500,9 @@ simple table summary of profiling results is displayed::
500500
ncalls tottime percall cumtime percall filename:lineno(function)
501501
1 0.000 0.000 6.844 6.844 asv/benchmark.py:171(method_caller)
502502
1 0.000 0.000 6.844 6.844 asv/benchmark.py:197(run)
503-
1 0.000 0.000 6.844 6.844 /usr/lib64/python2.7/timeit.py:201(repeat)
504-
3 0.000 0.000 6.844 2.281 /usr/lib64/python2.7/timeit.py:178(timeit)
505-
3 0.104 0.035 6.844 2.281 /usr/lib64/python2.7/timeit.py:96(inner)
503+
1 0.000 0.000 6.844 6.844 /usr/lib64/python3.7/timeit.py:201(repeat)
504+
3 0.000 0.000 6.844 2.281 /usr/lib64/python3.7/timeit.py:178(timeit)
505+
3 0.104 0.035 6.844 2.281 /usr/lib64/python3.7/timeit.py:96(inner)
506506
300000 0.398 0.000 6.740 0.000 benchmarks/time_units.py:20(time_very_simple_unit_parse)
507507
300000 1.550 0.000 6.342 0.000 astropy/units/core.py:1673(__call__)
508508
300000 0.495 0.000 2.416 0.000 astropy/units/format/generic.py:361(parse)
@@ -512,7 +512,7 @@ simple table summary of profiling results is displayed::
512512
3000002 0.735 0.000 0.735 0.000 {isinstance}
513513
300000 0.403 0.000 0.403 0.000 {method 'decode' of 'str' objects}
514514
300000 0.216 0.000 0.216 0.000 astropy/units/format/generic.py:32(__init__)
515-
300000 0.152 0.000 0.188 0.000 /usr/lib64/python2.7/inspect.py:59(isclass)
515+
300000 0.152 0.000 0.188 0.000 /usr/lib64/python3.7/inspect.py:59(isclass)
516516
900000 0.170 0.000 0.170 0.000 {method 'lower' of 'unicode' objects}
517517
300000 0.133 0.000 0.133 0.000 {method 'count' of 'unicode' objects}
518518
300000 0.078 0.000 0.078 0.000 astropy/units/core.py:272(get_current_unit_registry)
@@ -521,13 +521,13 @@ simple table summary of profiling results is displayed::
521521
300000 0.038 0.000 0.038 0.000 {method 'strip' of 'str' objects}
522522
300003 0.037 0.000 0.037 0.000 {globals}
523523
300000 0.033 0.000 0.033 0.000 {len}
524-
3 0.000 0.000 0.000 0.000 /usr/lib64/python2.7/timeit.py:143(setup)
525-
1 0.000 0.000 0.000 0.000 /usr/lib64/python2.7/timeit.py:121(__init__)
524+
3 0.000 0.000 0.000 0.000 /usr/lib64/python3.7/timeit.py:143(setup)
525+
1 0.000 0.000 0.000 0.000 /usr/lib64/python3.7/timeit.py:121(__init__)
526526
6 0.000 0.000 0.000 0.000 {time.time}
527527
1 0.000 0.000 0.000 0.000 {min}
528528
1 0.000 0.000 0.000 0.000 {range}
529529
1 0.000 0.000 0.000 0.000 {hasattr}
530-
1 0.000 0.000 0.000 0.000 /usr/lib64/python2.7/timeit.py:94(_template_func)
530+
1 0.000 0.000 0.000 0.000 /usr/lib64/python3.7/timeit.py:94(_template_func)
531531
3 0.000 0.000 0.000 0.000 {gc.enable}
532532
3 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
533533
3 0.000 0.000 0.000 0.000 {gc.disable}
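The table in the hunks above is standard ``cProfile`` output as rendered by ``asv profile``. The same kind of table can be produced directly with the standard library; a minimal sketch with a toy workload standing in for a benchmark body:

```python
import cProfile
import io
import pstats

def workload():
    # Toy function standing in for a benchmark body.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Render the top entries by cumulative time, like the table above.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```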
@@ -586,16 +586,16 @@ revisions of the project. You can do so with the ``compare`` command::
586586
before after ratio
587587
[3bfda9c6] [bf719488]
588588
<v0.1> <v0.2>
589-
40.4m 40.4m 1.00 benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py2.7-numpy]
590-
failed 35.2m n/a benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.6-numpy]
591-
11.5±0.08μs 11.0±0μs 0.96 benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py2.7-numpy]
592-
failed failed n/a benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.6-numpy]
593-
11.5±1μs 11.2±0.02μs 0.97 benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py2.7-numpy]
594-
failed 8.40±0.02μs n/a benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.6-numpy]
595-
34.6±0.09μs 32.9±0.01μs 0.95 benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py2.7-numpy]
596-
failed 35.6±0.05μs n/a benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.6-numpy]
597-
31.6±0.1μs 30.2±0.02μs 0.95 benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py2.7-numpy]
598-
failed failed n/a benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.6-numpy]
589+
40.4m 40.4m 1.00 benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.7-numpy]
590+
failed 35.2m n/a benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.12-numpy]
591+
11.5±0.08μs 11.0±0μs 0.96 benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.7-numpy]
592+
failed failed n/a benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.12-numpy]
593+
11.5±1μs 11.2±0.02μs 0.97 benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.7-numpy]
594+
failed 8.40±0.02μs n/a benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.12-numpy]
595+
34.6±0.09μs 32.9±0.01μs 0.95 benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.7-numpy]
596+
failed 35.6±0.05μs n/a benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.12-numpy]
597+
31.6±0.1μs 30.2±0.02μs 0.95 benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.7-numpy]
598+
failed failed n/a benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.12-numpy]
599599

600600
This will show the times for each benchmark for the first and second
601601
revision, and the ratio of the second to the first. In addition, the
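The ``ratio`` column in the ``compare`` table above is the second revision's time over the first's, shown to two decimal places. A quick check against the table's values (assuming simple round-to-two-decimals display):

```python
def compare_ratio(before_seconds, after_seconds):
    # asv compare's ratio column: second revision over the first.
    return round(after_seconds / before_seconds, 2)

# time_iterkeys: 11.5 us before, 11.0 us after
print(compare_ratio(11.5e-6, 11.0e-6))  # 0.96
# time_range: 34.6 us before, 32.9 us after
print(compare_ratio(34.6e-6, 32.9e-6))  # 0.95
```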
