Releases: Blosc/python-blosc2
Release 3.3.1
Changes from 3.3.0 to 3.3.1
- In our effort to better adapt to the array API
  (https://data-apis.org/array-api/latest/), we have introduced the
  permute_dims() and matrix_transpose() functions, and the .T property.
  These replace the previous transpose() function, which is now deprecated.
  See PR #384. Thanks to Ricardo Sales Piquer (@ricardosp4).
- Constructors like arange(), linspace() and fromiter() now use far less
  memory when creating large arrays. As an example, a 5 TB array of 8-byte
  floats now uses less than 200 MB of memory instead of 170 GB previously.
  See PR #387.
- Now, when opening a lazy expression with blosc2.open() that has a missing
  operand, the open still works, but the dtype and shape attributes are None.
  This is useful for lazy expressions that have lost some operands but that
  you still want to open for inspection. See PR #385.
- Added an example of getting a slice out of a C2Array.
Release 3.3.0
Changes from 3.2.1 to 3.3.0
- New blosc2.transpose() function for transposing 2D NDArray instances
  natively. See PR #375 and docs at
  https://www.blosc.org/python-blosc2/reference/autofiles/operations_with_arrays/blosc2.transpose.html#blosc2.transpose
  See also our new blog post about this:
  https://www.blosc.org/posts/transpose-compressed-matrices/
  Thanks to Ricardo Sales Piquer (@ricardosp4) for the implementation.
- New fast path in NDArray.slice() for getting slices that are aligned with
  the underlying chunks. This is a common operation when working with
  NDArray instances, and it is now up to 40x faster in our benchmarks
  (see PR #380).
- The NDArray object returned by NDArray.slice() now defaults to the
  original codec/clevel/filters. The previous behavior was to use the
  default codec/clevel/filters. See PR #378. Thanks to Luke Shaw
  (@lshaw8317).
- Several English edits in the documentation. Thanks to Luke Shaw
  (@lshaw8317) for his help in this area.
Release 3.2.1
Changes from 3.2.0 to 3.2.1
- The array containers now use the __array_interface__ protocol to expose
  the data in the array. This allows for better interoperability with other
  libraries that support the __array_interface__ protocol, like NumPy, CuPy,
  etc. Now, the range of functions that can be used within the blosc2.jit
  decorator is much larger, and essentially all NumPy functions should work
  now.
  See examples at: https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/jit-numpy-funcs.py
  See benchmarks at: https://github.com/Blosc/python-blosc2/blob/main/bench/ndarray/jit-numpy-funcs.py
- The performance of constructors like arange(), linspace() and fromiter()
  has been improved. Now, they can be up to 3x faster, especially with large
  arrays.
- C-Blosc2 updated to 2.17.1. This fixes various UB as well as compiler
  warnings.
Release 3.2.0
Changes from 3.1.1 to 3.2.0
- Structured arrays can now be larger than 255 bytes. This was a limitation
  in previous versions, but now it is gone (the new limit is ~512 MB, which
  I hope will be enough for some time).
- New blosc2.matmul() function for computing matrix multiplication on
  NDArray instances. This allows for efficient computations on compressed
  data that can be in-memory, on-disk or on the network. See here for more
  information.
- Support for building WASM32 wheels. This is a new feature that allows
  building wheels for WebAssembly 32-bit platforms. This is useful for
  running Python code in the browser.
- Tested support for NumPy<2 (at least the 1.26 series). Now, the library
  should work with NumPy 1.26 and up.
- C-Blosc2 updated to 2.17.0.
- httpx has been replaced by the requests library for the remote proxy.
  This was necessary to avoid depending on httpx, which is not supported by
  Pyodide.
Release 3.1.1 (hot fix)
Changes from 3.1.0 to 3.1.1
- Quick release to fix an issue with the version number in the package (it
  was reporting 3.0.0 instead of 3.1.0).
Release 3.1.0
Changes from 3.0.0 to 3.1.0
Improvements
- Optimizations for the compute engine. Now, it is faster and uses less
  memory. In particular, careful attention has been paid to memory handling,
  as this is the main bottleneck for the compute engine in many instances.
- Improved detection of CPU cache sizes for Linux and macOS. In particular,
  support for multi-CCX (AMD EPYC) and multi-socket systems has been
  implemented. Now, the library should be able to detect the cache sizes for
  most CPUs out there (especially on Linux).
- Optimization of NDArray slicing when the slice is a single chunk. This is
  a common operation when working with NDArray instances, and now it is
  faster.
New API functions and decorators
- New blosc2.evaluate() function for evaluating expressions on NDArray/NumPy
  instances. This is a drop-in replacement for numexpr.evaluate(), but with
  the following improvements:
  - More functionality than numexpr (e.g. reductions).
  - Follows the casting rules of NumPy more closely.
  - Can use both NumPy arrays and Blosc2 NDArrays in the same expression.
  See here for more information.
- New blosc2.jit decorator for allowing NumPy expressions to be computed
  using the Blosc2 compute engine. This is a powerful feature that allows
  for efficient computations on compressed data, and supports advanced
  features like reductions, filters and broadcasting. See here for more
  information.
- Support for out= in blosc2.mean(), blosc2.std() and blosc2.var()
  reductions (besides blosc2.sum() and blosc2.prod()).
Others
Python-Blosc2 3.0.0 (final)
Changes from 3.0.0-rc.3 to 3.0.0
- A persistent cache for cpuinfo (stored in $HOME/.blosc2-cpuinfo.json) is
  now used to avoid repeated calls to the cpuinfo library. This accelerates
  the startup time of the library considerably (up to 5x on my box).
- We should be creating conda packages now. Thanks to @hmaarrfk for his
  assistance in this area.
Release 3.0.0 rc3
Changes from 3.0.0-rc.2 to 3.0.0-rc.3
- Now you can get and set the whole values of VLMeta instances with the
  vlmeta[:] syntax. The get part is actually syntactic sugar for
  vlmeta.getall().
- blosc2.copy() now honors the cparams= parameter.
- Now, compiling the package with the USE_SYSTEM_BLOSC2 envvar set to 1 will
  use the system-wide Blosc2 library. This is useful for creating packages
  that do not want to bundle the Blosc2 library (e.g. conda).
- Several changes in the build process to enable conda-forge packaging.
- Now, blosc2.pack_tensor() can pack empty tensors/arrays. Fixes #290.
Release 3.0.0 rc2
Changes from 3.0.0-rc.1 to 3.0.0-rc.2
- Improved docs, tutorials and examples. Have a look at our new docs at:
  https://www.blosc.org/python-blosc2.
- blosc2.save() is using contiguous=True by default now.
- vlmeta[:] is syntactic sugar for vlmeta.getall() now.
- Added the NDArray.meta property as a proxy to NDArray.schunk.vlmeta.
- Reductions over single fields in structured NDArrays are now supported.
  For example, given an array sarr with fields 'a', 'b' and 'c',
  sarr["a"]["b >= c"].std() returns the standard deviation of the values in
  field 'a' for the rows where the values in field 'b' are greater than or
  equal to the values in field 'c' (b >= c above).
- As per discussion #337, the default of cparams.splitmode is now
  AUTO_SPLIT. See #338 though.
Release 3.0.0 rc1
Changes from 3.0.0-beta.4 to 3.0.0-rc.1
General improvements
- New ufunc support for NDArray instances. Now, you can use NumPy ufuncs on
  NDArray instances, and mix them with other NumPy arrays. This is a
  powerful feature that allows for more interoperability with NumPy.
- Enhanced dtype inference, so that it now mimics NumPy more closely than
  the numexpr one. Although perfect adherence to NumPy casting conventions
  is not there yet, this is a big step forward towards better compatibility
  with NumPy.
- Fixed the dtype for sum and prod reductions. Now, the dtype of the result
  of a sum or prod reduction is the same as that of the input array, unless
  the dtype is not supported by the reduction, in which case it is promoted
  to a supported one. It is more NumPy-like now.
- Many improvements in the computation of UDFs (User Defined Functions).
  Now, the lazy UDF computation is much more robust and efficient.
- Support for reductions inside queries on structured NDArrays. For
  example, given an array sarr with fields 'a', 'b' and 'c',
  farr = sarr["b >= c"].sum("a").compute() puts in farr the sum of the
  values in field 'a' for the rows where the values in field 'b' are
  greater than or equal to the values in field 'c' (b >= c above).
- Implemented combined data filtering and sorting in structured NDArrays.
  For example, given an array sarr with fields 'a', 'b' and 'c',
  farr = sarr["b >= c"].indices(order="c").compute() puts in farr the
  indices of the rows where the values in field 'b' are greater than or
  equal to the values in field 'c' (b >= c above), ordered by column 'c'.
- Reductions can be stored in persistent lazy expressions. Now, if you have
  a lazy expression that contains a reduction, the result of the reduction
  is preserved in the expression, so that you can reuse it later on. See
  https://www.blosc.org/posts/persistent-reductions/ for more information.
- Many improvements in ruff linting and code style. Thanks to
  @DimitriPapadopoulos for the excellent work in this area.
API changes
LazyArray.eval() has been renamed to LazyArray.compute(). This avoids
confusion with the eval() function in Python, and it is more in line with
the Dask API.
This is the main change in the API that is not backward compatible with the
previous beta. If you have code that still uses LazyArray.eval(), you should
change it to LazyArray.compute(). Starting from this release, the API will
be stable and backward compatibility will be maintained.
New API calls
- New reshape() function and NDArray.reshape() method allow efficient
  reshaping between NDArrays that follow the C order. Only 1-dim -> n-dim
  is currently supported though.
- New NDArray.__iter__() iterator following NumPy conventions.
- Now, NDArray.__getitem__() supports (n-dim) bool arrays or sequences of
  integers as indices (only 1-dim for now). This follows NumPy conventions.
- A new NDField.__setitem__() has been added to allow setting values in a
  structured NDArray.
- struct_ndarr['field'] now works as in NumPy, that is, it returns an array
  with the values in 'field' in the structured NDArray.
- Several new constructors are available for creating NDArray instances,
  like arange(), linspace() and fromiter(). These constructors leverage the
  internal lazyudf() function and make it easier to create NDArray
  instances from scratch. See e.g.
  https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/arange-constructor.py
  for an example.
- Structured LazyArrays received a new .indices() method that returns the
  indices of the elements that fulfill a condition. When combined with the
  new support for lists of indices as keys for NDArray.__getitem__(), this
  is useful for creating indexes for data. See
  https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/filter_sort_fields.py
  for an example.
- LazyArrays received a new .sort() method that sorts the elements in the
  array. For example, given an array sarr with fields 'a', 'b' and 'c',
  farr = sarr["b >= c"].sort("c").compute() puts in farr the rows where the
  values in field 'b' are greater than or equal to the values in field 'c'
  (b >= c above), ordered by column 'c'.
- New expr_operands() function for extracting operands from a string
  expression.
- New validate_expr() function for validating a string expression.
- New CParams, DParams and Storage dataclasses for better handling of
  parameters in the library. Now, you can use these dataclasses to pass
  parameters to the library and get better error handling. Thanks to
  @martaiborra for the excellent implementation and @omaech for revamping
  the docs and examples to use them. See e.g.
  https://www.blosc.org/python-blosc2/getting_started/tutorials/02.lazyarray-expressions.html.
Documentation improvements
- Much improved documentation on how to efficiently compute with compressed
  NDArray data. Documentation updates highlight these features and improve
  usability for new users. Thanks to @omaech and @martaiborra for their
  excellent work on the documentation and examples, and to @numfocus for
  their support in making this possible! See
  https://www.blosc.org/python-blosc2/getting_started/tutorials/04.reductions.html
  for an example.
- New remote proxy tutorial. This tutorial shows how to use the Proxy class
  to access remote arrays, while providing caching:
  https://www.blosc.org/python-blosc2/getting_started/tutorials/06.remote_proxy.html
  Thanks to @omaech for her work on this tutorial.
- New tutorial on "Mastering Persistent, Dynamic Reductions and Lazy
  Expressions". See https://www.blosc.org/posts/persistent-reductions/