Releases: Blosc/python-blosc2
Release 3.5.1
Changes from 3.5.0 to 3.5.1
- Reduced memory usage when computing slices of lazy expressions.
  This is a significant improvement for large arrays (up to 20x less).
  Also, we have added a fast path for slices that are small and fit in
  memory, which can be up to 20x faster than the previous implementation.
  See PR #430.
- `blosc2.concatenate()` has been renamed to `blosc2.concat()`.
  This is in line with the Array API. The old name is still available
  for backward compatibility, but it will be removed in a future release.
- Improved mode handling when concatenating to disk. This is useful for
  concatenating arrays that are stored on disk, and allows specifying
  the mode to use when concatenating. See PR #428.
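Since `blosc2.concat()` follows the Array API naming, its join semantics match NumPy concatenation. A minimal NumPy-only sketch of those semantics (not blosc2-specific code):

```python
import numpy as np

# concat joins arrays along an existing axis; all other axes must match.
a = np.arange(6).reshape(2, 3)
b = np.arange(6, 12).reshape(2, 3)

rows = np.concatenate([a, b], axis=0)  # shape (4, 3)
cols = np.concatenate([a, b], axis=1)  # shape (2, 6)
```

With blosc2 NDArrays the call shape is analogous, with the result being a compressed NDArray.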
Release 3.5.0
Changes from 3.4.0 to 3.5.0
- New `blosc2.stack()` function for stacking multiple arrays along a new
  axis. Useful for creating multi-dimensional arrays from multiple 1D
  arrays. See PR #427. Thanks to Luke Shaw for the implementation!
  Blog: https://www.blosc.org/posts/blosc2-new-concatenate/#stacking-arrays
- New `blosc2.expand_dims()` function for expanding the dimensions of an
  array. This is useful for adding a new axis to an array, similar to
  NumPy's `np.expand_dims()`. See PR #427. Thanks to Luke Shaw for the
  implementation!
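The release notes say both functions mirror their NumPy counterparts, so the semantics can be sketched with plain NumPy (illustration only, not blosc2 code):

```python
import numpy as np

# stack joins arrays along a *new* axis, unlike concatenation,
# which joins along an existing one.
a = np.arange(3)
b = np.arange(3, 6)
s = np.stack([a, b])           # shape (2, 3): new leading axis

# expand_dims inserts a length-1 axis at the given position.
e = np.expand_dims(a, axis=0)  # shape (1, 3)
```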
v3.4.0
Summary
This release adds significant new functionality in the form of `concatenate`. We support general concatenation of ndarrays, and offer an optimised path with significant speedups for the case of concatenating arrays with compatible chunk and block shapes. In addition, there are bug fixes and more functionality for slicing of lazy expressions, and the possibility to JIT-compile user-defined functions which operate on pandas objects using the blosc2 engine.
What's Changed
- Enable slice lazy by @lshaw8317 in #417
- Add support for new pandas UDF engine by @datapythonista in #418
- Make behaviour of compute consistent for slicing by @lshaw8317 in #419
- Update pre-commit hooks by @pre-commit-ci in #422
- Concatenate by @FrancescAlted in #423
Full Changelog: v3.3.4...v3.4.0
Blosc2 v3.3.4
This is a bugfix release, with some minor optimizations. We further improved the
correct chaining of string lazy expressions (to allow operands with more
diverse data types). In addition, both indexing and where expressions are now
supported within string lazy expressions. Finally, casting rules have
been improved to be more consistent with NumPy. In summary:
- Expand possibilities for chaining string-based lazy expressions to
  incorporate data types which do not have a shape attribute, e.g. int,
  float etc. See #406 and PR #411.
- Enable slicing within string-based lazy expressions. See PR #414.
- Improved casting for string-based lazy expressions.
- Documentation improvements, see PR #410.
- Compatibility fixes for working with `h5py` files.
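Since the improved casting is stated to be more consistent with NumPy, NumPy's own promotion rules are the reference behavior. A small NumPy-only illustration:

```python
import numpy as np

# NumPy dtype promotion: small ints fit into float32, but int32 values
# need float64 to be represented safely.
t1 = np.result_type(np.int8, np.float32)   # float32
t2 = np.result_type(np.int32, np.float32)  # float64
```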
Release 3.3.3
Changes from 3.3.2 to 3.3.3
- Expand possibilities for chaining string-based lazy expressions to
  include main operand types (LazyExpr and NDArray). Still have to
  incorporate other data types (which do not have a shape attribute,
  e.g. int, float etc.). See #406.
- Fix indexing for lazy expressions, and allow use of None in getitem.
  See PR #402.
- Fix incorrect appending of dim to computed reductions. See PR #404.
- Fix `blosc2.linspace()` for incompatible num/shape. See PR #408.
- Add support for NumPy dtypes that are n-dimensional
  (e.g. `np.dtype(("<i4,>f4", (10,)))`).
- New MAX_DIM constant for the maximum number of dimensions supported.
  This is useful for checking if a given array is too large to be handled.
- More refinements on guessing cache sizes for Linux.
- Update to C-Blosc2 2.17.2.dev. Now, we are forcing the flush of modified
  pages only in write mode for mmap files. This fixes mmap issues on
  Windows. Thanks to @JanSellner for the implementation.
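For readers unfamiliar with n-dimensional ("sub-array") dtypes, here is a NumPy sketch using a simpler single-type variant of the dtype shown above:

```python
import numpy as np

# A sub-array dtype: each element of the array is itself a vector
# of 10 int32 values.
dt = np.dtype(("<i4", (10,)))

# When used to create an array, the sub-array shape is appended
# to the array shape, and the dtype collapses to the base type.
arr = np.zeros(3, dtype=dt)   # shape (3, 10), dtype int32
```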
Release 3.3.2
Changes from 3.3.1 to 3.3.2
- Fixed a bug in the determination of chunk shape for the `NDArray`
  constructor. This was causing problems when creating `NDArray`
  instances with a CPU that was reporting an L3 cache size close to
  (or exceeding) 2 GB. See PR #392.
- Fixed a bug preventing the correct chaining of string lazy expressions
  for logical operators (`&`, `|`, `^`...). See PR #391.
- More performance optimization for `blosc2.permute_dims`. Thanks to
  Ricardo Sales Piquer (@ricardosp4) for the implementation.
- Now, storage defaults (`blosc2.storage_dflts`) are honored, even if no
  `storage=` param is used in constructors.
- We are distributing Python 3.10 wheels now.
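For reference, Array API `permute_dims` semantics (which `blosc2.permute_dims` follows) correspond to NumPy's transpose with an explicit axes argument. A NumPy-only sketch:

```python
import numpy as np

# Axis i of the result is axis axes[i] of the input.
a = np.zeros((2, 3, 4))
p = np.transpose(a, axes=(2, 0, 1))   # shape (4, 2, 3)
```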
Release 3.3.1
Changes from 3.3.0 to 3.3.1
- In our effort to better adapt to the array API
  (https://data-apis.org/array-api/latest/), we have introduced
  permute_dims() and matrix_transpose() functions, and the .T property.
  This replaces the previous transpose() function, which is now
  deprecated. See PR #384. Thanks to Ricardo Sales Piquer (@ricardosp4).
- Constructors like `arange()`, `linspace()` and `fromiter()` now use
  far less memory when creating large arrays. As an example, a 5 TB
  array of 8-byte floats now uses less than 200 MB of memory instead of
  170 GB previously. See PR #387.
- Now, when opening a lazy expression with `blosc2.open()`, and there is
  a missing operand, the open still works, but the dtype and shape
  attributes are None. This is useful for lazy expressions that have
  lost some operands, but you still want to open them for inspection.
  See PR #385.
- Added an example of getting a slice out of a C2Array.
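The `fromiter()` constructor pattern that benefits from the lower memory usage can be sketched with NumPy's equivalent (illustration of the call pattern only):

```python
import numpy as np

# fromiter consumes a lazy generator, so values can be produced on the
# fly instead of materializing a full Python list first.
squares = np.fromiter((i * i for i in range(5)), dtype=np.int64)
```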
Release 3.3.0
Changes from 3.2.1 to 3.3.0
- New `blosc2.transpose()` function for transposing 2D NDArray instances
  natively. See PR #375 and docs at
  https://www.blosc.org/python-blosc2/reference/autofiles/operations_with_arrays/blosc2.transpose.html#blosc2.transpose
  See also our new blog about this:
  https://www.blosc.org/posts/transpose-compressed-matrices/
  Thanks to Ricardo Sales Piquer (@ricardosp4) for the implementation.
- New fast path for `NDArray.slice()` for getting slices that are
  aligned with underlying chunks. This is a common operation when
  working with NDArray instances, and now it is up to 40x faster in our
  benchmarks (see PR #380).
- The `NDArray` object returned by `NDArray.slice()` now defaults to the
  original codec/clevel/filters. The previous behavior was to use the
  default codec/clevel/filters. See PR #378. Thanks to Luke Shaw
  (@lshaw8317).
- Several English edits in the documentation. Thanks to Luke Shaw
  (@lshaw8317) for his help in this area.
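The 2D transpose semantics are the standard matrix ones, sketched here with NumPy (not blosc2 code):

```python
import numpy as np

# Transposing a 2D array swaps its axes: an (m, n) matrix becomes
# (n, m), with element [i, j] moving to [j, i].
m = np.arange(6).reshape(2, 3)
t = m.T                        # shape (3, 2)
```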
Release 3.2.1
Changes from 3.2.0 to 3.2.1
- The array containers are now using the `__array_interface__` protocol
  to expose the data in the array. This allows for better
  interoperability with other libraries that support the
  `__array_interface__` protocol, like NumPy, CuPy, etc. Now, the range
  of functions that can be used within the `blosc2.jit` decorator is way
  larger, and essentially all NumPy functions should work now.
  See examples at: https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/jit-numpy-funcs.py
  See benchmarks at: https://github.com/Blosc/python-blosc2/blob/main/bench/ndarray/jit-numpy-funcs.py
- The performance of constructors like `arange()`, `linspace()` and
  `fromiter()` has been improved. Now, they can be up to 3x faster,
  especially with large arrays.
- C-Blosc2 updated to 2.17.1. This fixes various UB as well as compiler
  warnings.
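To illustrate what the `__array_interface__` protocol buys, here is a minimal sketch with a hypothetical `Holder` class standing in for any container (such as the blosc2 ones) that exposes the protocol:

```python
import numpy as np

class Holder:
    """Hypothetical container exposing __array_interface__."""
    def __init__(self, arr):
        self._arr = arr  # keep a reference so the buffer stays alive
        # Any object with this attribute can be consumed by NumPy.
        self.__array_interface__ = arr.__array_interface__

# NumPy builds an array directly from the exposed buffer, no copy needed.
view = np.asarray(Holder(np.arange(4)))
```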
Release 3.2.0
Changes from 3.1.1 to 3.2.0
- Structured arrays can be larger than 255 bytes now. This was a
  limitation in previous versions, but now it is gone (the new limit is
  ~512 MB, which I hope will be enough for some time).
- New `blosc2.matmul()` function for computing matrix multiplication on
  NDArray instances. This allows for efficient computations on
  compressed data that can be in-memory, on-disk and on the network.
  See here for more information.
- Support for building WASM32 wheels. This is a new feature that allows
  building wheels for WebAssembly 32-bit platforms. This is useful for
  running Python code in the browser.
- Tested support for NumPy<2 (at least the 1.26 series). Now, the
  library should work with NumPy 1.26 and up.
- C-Blosc2 updated to 2.17.0.
- `httpx` has been replaced by the `requests` library for the remote
  proxy. This was necessary because `httpx` is not supported by Pyodide.
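Assuming `blosc2.matmul()` follows the standard matrix-multiplication shape rules, those rules can be sketched with NumPy (illustration only):

```python
import numpy as np

# Matrix multiplication shape rule: (m, k) @ (k, n) -> (m, n).
a = np.ones((2, 3))
b = np.ones((3, 4))
c = np.matmul(a, b)    # shape (2, 4); each entry is 3.0 here
```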