@@ -3,8 +3,8 @@ NumExpr: Fast numerical expression evaluator for NumPy
======================================================

:Author: David M. Cooke, Francesc Alted, and others.
- :Maintainer: Robert A. McLeod
- :Contact: robbmcleod@gmail.com
+ :Maintainer: Francesc Alted
+ :Contact: faltet@gmail.com
:URL: https://github.com/pydata/numexpr
:Documentation: http://numexpr.readthedocs.io/en/latest/
:Travis CI: |travis|
@@ -24,20 +24,6 @@ NumExpr: Fast numerical expression evaluator for NumPy
.. |version| image:: https://img.shields.io/pypi/v/numexpr.png
    :target: https://pypi.python.org/pypi/numexpr

- IMPORTANT NOTE: NumExpr is looking for maintainers!
- ---------------------------------------------------
-
- After 5 years as a solo maintainer (and performing most excellent work), Robert McLeod
- is asking for a well-deserved break. So the NumExpr project is looking for a new
- maintainer for a package that is used in pandas, PyTables and many other packages.
- If you have benefited from NumExpr's capabilities in the past and are willing to contribute
- back to the community, we would be happy to hear from you!
-
- We are looking for someone who is knowledgeable about compiling extensions and who is
- ready to spend some cycles on making releases (2 or 3 a year, maybe even less!).
- Interested? Just open a new ticket here and we will help you with onboarding.
-
- Thank you!

What is NumExpr?
----------------
@@ -68,19 +54,19 @@ an integrated computing virtual machine. The array operands are split
into small chunks that easily fit in the cache of the CPU and passed
to the virtual machine. The virtual machine then applies the
operations on each chunk. It's worth noting that all temporaries and
- constants in the expression are also chunked. Chunks are distributed among
- the available cores of the CPU, resulting in highly parallelized code
+ constants in the expression are also chunked. Chunks are distributed among
+ the available cores of the CPU, resulting in highly parallelized code
execution.

The result is that NumExpr can get the most out of your machine's computing
capabilities for array-wise computations. Common speed-ups with regard
to NumPy are usually between 0.95x (for very simple expressions like
- :code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 2.5*b'`),
- although much higher speed-ups can be achieved for some functions and complex
+ :code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 2.5*b'`),
+ although much higher speed-ups can be achieved for some functions and complex
math operations (up to 15x in some cases).

- NumExpr performs best on matrices that are too large to fit in L1 CPU cache.
- In order to get a better idea of the different speed-ups that can be achieved
+ NumExpr performs best on matrices that are too large to fit in L1 CPU cache.
+ In order to get a better idea of the different speed-ups that can be achieved
on your platform, run the provided benchmarks.
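
For example, an expression like the ones above can be evaluated as follows (a minimal
sketch; actual speed-ups depend on the array sizes and on your hardware)::

    import numpy as np
    import numexpr as ne

    a = np.random.rand(10**7)
    b = np.random.rand(10**7)

    # The expression is compiled once and then evaluated chunk by chunk,
    # spreading the chunks over all available cores.
    result = ne.evaluate('a*b - 4.1*a > 2.5*b')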

Installation
@@ -89,32 +75,32 @@ Installation
From wheels
^^^^^^^^^^^

- NumExpr is available for install via `pip` for a wide range of platforms and
- Python versions (which may be browsed at: https://pypi.org/project/numexpr/#files).
+ NumExpr is available for install via `pip` for a wide range of platforms and
+ Python versions (which may be browsed at: https://pypi.org/project/numexpr/#files).
Installation can be performed as::

    pip install numexpr

- If you are using the Anaconda or Miniconda distribution of Python you may prefer
+ If you are using the Anaconda or Miniconda distribution of Python you may prefer
to use the `conda` package manager in this case::

    conda install numexpr

From Source
^^^^^^^^^^^

- On most \*nix systems your compilers will already be present. However, if you
+ On most \*nix systems your compilers will already be present. However, if you
are using a virtual environment with a substantially newer version of Python than
your system Python, you may be prompted to install a new version of `gcc` or `clang`.

- For Windows, you will need to install the Microsoft Visual C++ Build Tools
- (which are free) first. The version depends on which version of Python you have
+ For Windows, you will need to install the Microsoft Visual C++ Build Tools
+ (which are free) first. The version depends on which version of Python you have
installed:

https://wiki.python.org/moin/WindowsCompilers

- For Python 3.6+, simply installing the latest version of MSVC build tools should
- be sufficient. Note that wheels found via pip do not include MKL support. Wheels
+ For Python 3.6+, simply installing the latest version of MSVC build tools should
+ be sufficient. Note that wheels found via pip do not include MKL support. Wheels
available via `conda` will have MKL, if the MKL backend is used for NumPy.

See `requirements.txt` for the required version of NumPy.
@@ -132,19 +118,19 @@ Do not test NumExpr in the source directory or you will generate import errors.
Enable Intel® MKL support
^^^^^^^^^^^^^^^^^^^^^^^^^

- NumExpr includes support for Intel's MKL library. This may provide better
- performance on Intel architectures, mainly when evaluating transcendental
- functions (trigonometric, exponential, ...).
+ NumExpr includes support for Intel's MKL library. This may provide better
+ performance on Intel architectures, mainly when evaluating transcendental
+ functions (trigonometric, exponential, ...).

- If you have Intel's MKL, copy the `site.cfg.example` that comes with the
- distribution to `site.cfg` and edit the latter file to provide correct paths to
- the MKL libraries in your system. After doing this, you can proceed with the
+ If you have Intel's MKL, copy the `site.cfg.example` that comes with the
+ distribution to `site.cfg` and edit the latter file to provide correct paths to
+ the MKL libraries in your system. After doing this, you can proceed with the
usual building instructions listed above.

- Pay attention to the messages during the build process in order to know
- whether MKL has been detected or not. Finally, you can check the speed-ups on
- your machine by running the `bench/vml_timing.py` script (you can play with
- different parameters to the `set_vml_accuracy_mode()` and `set_vml_num_threads()`
+ Pay attention to the messages during the build process in order to know
+ whether MKL has been detected or not. Finally, you can check the speed-ups on
+ your machine by running the `bench/vml_timing.py` script (you can play with
+ different parameters to the `set_vml_accuracy_mode()` and `set_vml_num_threads()`
functions in the script to see how they affect performance).
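
For instance, the VML-related knobs exercised by that script can be adjusted along these
lines (a rough sketch; these calls only take effect when NumExpr was built with MKL/VML)::

    import numexpr as ne

    if ne.use_vml:                        # True only for MKL/VML-enabled builds
        ne.set_vml_accuracy_mode('fast')  # accuracy/speed trade-off: 'high', 'low' or 'fast'
        ne.set_vml_num_threads(4)         # number of threads used internally by VML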

Usage