
Commit f77bf79

More cohorts speedups (#290)
* Small speedup to cohorts again

```
| Before [19db5b3] <v0.8.2> | After [9d3285e2] <speedup-cohorts> | Ratio | Benchmark (Parameter) |
|---------------------------|------------------------------------|-------|-----------------------|
| 8.70±0.3ms  | 7.23±0.4ms  | 0.83 | cohorts.ERA5MonthHourRechunked.time_find_group_cohorts  |
| 8.34±0.08ms | 6.80±0.07ms | 0.81 | cohorts.ERA5MonthHour.time_find_group_cohorts           |
| 1.13±0.02ms | 609±20μs    | 0.54 | cohorts.PerfectMonthlyRechunked.time_find_group_cohorts |
| 1.12±0.02ms | 592±4μs     | 0.53 | cohorts.PerfectMonthly.time_find_group_cohorts          |
| 3.46±0.03ms | 1.43±0.01ms | 0.41 | cohorts.ERA5Google.time_find_group_cohorts              |
```

* Check set membership instead of tuples

```
| Before [19db5b3] <v0.8.2> | After [bc7fee3e] <speedup-cohorts~1> | Ratio | Benchmark (Parameter) |
|---------------------------|--------------------------------------|-------|-----------------------|
| 221±5ms     | 196±0.6ms   | 0.89 | cohorts.NWMMidwest.time_find_group_cohorts              |
| 7.47±0.04ms | 6.25±0.04ms | 0.84 | cohorts.ERA5MonthHour.time_find_group_cohorts           |
| 7.82±0.1ms  | 6.36±0.03ms | 0.81 | cohorts.ERA5MonthHourRechunked.time_find_group_cohorts  |
| 1.02±0ms    | 549±2μs     | 0.54 | cohorts.PerfectMonthly.time_find_group_cohorts          |
| 1.02±0ms    | 550±1μs     | 0.54 | cohorts.PerfectMonthlyRechunked.time_find_group_cohorts |
| 3.25±0.01ms | 1.31±0ms    | 0.4  | cohorts.ERA5Google.time_find_group_cohorts              |
| 64.9±0.3ms  | 21.9±0.09ms | 0.34 | cohorts.ERA5DayOfYearRechunked.time_find_group_cohorts  |
```

* Another attempt. Better for large labels...

```
| Before [19db5b3] <v0.8.2> | After [a2036e1b] <speedup-cohorts> | Ratio | Benchmark (Parameter) |
|---------------------------|------------------------------------|-------|-----------------------|
| 3.21±0.01ms | 7.91±0.06ms | 2.46 | cohorts.ERA5Google.time_find_group_cohorts              |
| 1.01±0ms    | 1.93±0.04ms | 1.9  | cohorts.PerfectMonthly.time_find_group_cohorts          |
| 1.02±0ms    | 1.90±0.01ms | 1.87 | cohorts.PerfectMonthlyRechunked.time_find_group_cohorts |
| 7.84±0.06ms | 12.2±0.6ms  | 1.55 | cohorts.ERA5MonthHourRechunked.time_find_group_cohorts  |
| 7.55±0.03ms | 10.7±0.07ms | 1.42 | cohorts.ERA5MonthHour.time_find_group_cohorts           |
| 225±10ms    | 78.6±1ms    | 0.35 | cohorts.NWMMidwest.time_find_group_cohorts              |
```

* Revert "Another attempt. Better for large labels..."

  This reverts commit e2c67ff.

* [revert]

* bitmask approach

```
| Before [19db5b3] <v0.8.2> | After [bb71bc4d] <speedup-cohorts> | Ratio | Benchmark (Parameter) |
|---------------------------|------------------------------------|-------|-----------------------|
| 24.8±0.07ms | 19.9±0.2ms  | 0.8  | cohorts.ERA5DayOfYear.time_find_group_cohorts           |
| 3.23±0.01ms | 1.24±0.01ms | 0.38 | cohorts.ERA5Google.time_find_group_cohorts              |
| 1.01±0ms    | 297±0.5μs   | 0.29 | cohorts.PerfectMonthly.time_find_group_cohorts          |
| 1.02±0ms    | 298±0.5μs   | 0.29 | cohorts.PerfectMonthlyRechunked.time_find_group_cohorts |
| 64.9±0.2ms  | 16.5±0.3ms  | 0.25 | cohorts.ERA5DayOfYearRechunked.time_find_group_cohorts  |
| 7.66±0.02ms | 1.83±0.01ms | 0.24 | cohorts.ERA5MonthHourRechunked.time_find_group_cohorts  |
| 217±3ms     | 52.9±2ms    | 0.24 | cohorts.NWMMidwest.time_find_group_cohorts              |
| 7.55±0.02ms | 1.70±0ms    | 0.23 | cohorts.ERA5MonthHour.time_find_group_cohorts           |
```

* Change order of tokenize

  Small incremental change

```
| Before [b1fd3be] <speedup-cohorts~1> | After [056ce4d0] <speedup-cohorts> | Ratio | Benchmark (Parameter) |
|--------------------------------------|------------------------------------|-------|-----------------------|
| 170±1ms | 143±1ms | 0.84 | cohorts.NWMMidwest.time_graph_construct |
```

* Another set optimization

```
| Before [d4f3b80] <speedup-cohorts~1> | After [7583969e] <speedup-cohorts> | Ratio | Benchmark (Parameter) |
|--------------------------------------|------------------------------------|-------|-----------------------|
| 16.3±0.2ms  | 6.27±0.04ms | 0.38 | cohorts.ERA5DayOfYearRechunked.time_find_group_cohorts |
| 20.0±0.02ms | 7.45±0.01ms | 0.37 | cohorts.ERA5DayOfYear.time_find_group_cohorts          |
```

* switch to containment

* [revert] Revert "switch to containment"

  This reverts commit e082cbd.

* Sparse array bitmask

```
| Change | Before [97ce15f] <speedup-cohorts~1> | After [1b79831] <speedup-cohorts> | Ratio | Benchmark (Parameter) |
|--------|--------------------------------------|-----------------------------------|-------|-----------------------|
| +      | 233±0.5μs   | 519±3μs     | 2.23 | cohorts.PerfectMonthly.time_find_group_cohorts          |
| +      | 232±0.8μs   | 518±1μs     | 2.23 | cohorts.PerfectMonthlyRechunked.time_find_group_cohorts |
| +      | 1.01±0.01ms | 2.14±0.05ms | 2.13 | cohorts.ERA5Google.time_find_group_cohorts              |
| +      | 1.48±0.01ms | 2.27±0.01ms | 1.53 | cohorts.ERA5MonthHourRechunked.time_find_group_cohorts  |
| +      | 1.39±0ms    | 2.11±0ms    | 1.52 | cohorts.ERA5MonthHour.time_find_group_cohorts           |
| +      | 2.66±0.01ms | 2.99±0.08ms | 1.12 | cohorts.PerfectMonthly.time_graph_construct             |
| -      | 22.5±0.06ms | 17.4±0.3ms  | 0.77 | cohorts.NWMMidwest.time_find_group_cohorts              |
```

* speed up map_blocks a little. Tokenizing **kwargs is slower than tokenizing *args

* Visualize chunks rather than cohort labels

* Add scipy to minimal requirements

* Add back memoize

* Minor comments
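
The two set-related commits above ("Check set membership instead of tuples", "Another set optimization") amount to building each `set(...)` once and tracking already-merged keys in a set instead of a list. A simplified before/after sketch of that idea (illustrative only; the real `find_group_cohorts` also sorts cohorts by size and deep-copies values):

```python
# Illustrative sketch only: `cohorts` maps tuples of chunk indices -> lists of labels,
# roughly as produced inside find_group_cohorts. Not the actual flox implementation.

def merge_cohorts_slow(cohorts):
    items = tuple(cohorts.items())
    merged, merged_keys = {}, []
    for idx, (k1, v1) in enumerate(items):
        if k1 in merged_keys:  # O(n) scan of a list
            continue
        merged[k1] = list(v1)
        for k2, v2 in items[idx + 1 :]:
            if k2 in merged_keys:
                continue
            if set(k2).issubset(set(k1)):  # sets rebuilt on every inner iteration
                merged[k1].extend(v2)
                merged_keys.append(k2)
    return merged


def merge_cohorts_fast(cohorts):
    # Build each set exactly once; track merged keys in a set for O(1) lookups.
    items = tuple((k, set(k), v) for k, v in cohorts.items() if k)
    merged, merged_keys = {}, set()
    for idx, (k1, set_k1, v1) in enumerate(items):
        if k1 in merged_keys:
            continue
        merged[k1] = list(v1)
        for k2, set_k2, v2 in items[idx + 1 :]:
            if k2 not in merged_keys and set_k2.issubset(set_k1):
                merged[k1].extend(v2)
                merged_keys.add(k2)
    return merged


# e.g. merge_cohorts_fast({(0, 1, 2): ["jan", "feb"], (0, 1): ["mar"]})
# -> {(0, 1, 2): ["jan", "feb", "mar"]}
```
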
1 parent 15324a7 · commit f77bf79

12 files changed: +124 −80 lines

asv_bench/benchmarks/cohorts.py (+7 −10)

```diff
@@ -1,10 +1,8 @@
 import dask
 import numpy as np
 import pandas as pd
-import xarray as xr

 import flox
-from flox.xarray import xarray_reduce


 class Cohorts:
@@ -129,11 +127,10 @@ def setup(self, *args, **kwargs):
         super().rechunk()


-def time_cohorts_era5_single():
-    TIME = 900  # 92044 in Google ARCO ERA5
-    da = xr.DataArray(
-        dask.array.ones((TIME, 721, 1440), chunks=(1, -1, -1)),
-        dims=("time", "lat", "lon"),
-        coords=dict(time=pd.date_range("1959-01-01", freq="6H", periods=TIME)),
-    )
-    xarray_reduce(da, da.time.dt.day, method="cohorts", func="any")
+class ERA5Google(Cohorts):
+    def setup(self, *args, **kwargs):
+        TIME = 900  # 92044 in Google ARCO ERA5
+        self.time = pd.Series(pd.date_range("1959-01-01", freq="6H", periods=TIME))
+        self.axis = (2,)
+        self.array = dask.array.ones((721, 1440, TIME), chunks=(-1, -1, 1))
+        self.by = self.time.dt.day.values
```
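
For context, asv calls `setup()` and then times each `time_*` method. The `Cohorts` base class is outside this diff, so the benchmark body below is an assumption based on the attributes assigned in `setup`; treat it as a sketch rather than the actual benchmark code:

```python
import dask.array
import pandas as pd

import flox.core


class ERA5GoogleSketch:
    # Hypothetical standalone version of the ERA5Google benchmark above.
    def setup(self, *args, **kwargs):
        TIME = 900  # 92044 in Google ARCO ERA5
        self.time = pd.Series(pd.date_range("1959-01-01", freq="6H", periods=TIME))
        self.axis = (2,)
        self.array = dask.array.ones((721, 1440, TIME), chunks=(-1, -1, 1))
        self.by = self.time.dt.day.values

    def time_find_group_cohorts(self):
        # assumed to mirror what the Cohorts base class times
        flox.core.find_group_cohorts(self.by, [self.array.chunks[ax] for ax in self.axis])
```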

ci/benchmark.yml (+1)

```diff
@@ -13,3 +13,4 @@ dependencies:
   - numpy_groupies>=0.9.19
   - numbagg>=0.3
   - wheel
+  - scipy
```

ci/docs.yml (+1)

```diff
@@ -6,6 +6,7 @@ dependencies:
   - pip
   - xarray
   - numpy>=1.22
+  - scipy
   - numpydoc
   - numpy_groupies>=0.9.19
   - toolz
```

ci/environment.yml (+1 −1)

```diff
@@ -9,6 +9,7 @@ dependencies:
   - netcdf4
   - pandas
   - numpy>=1.22
+  - scipy
   - lxml # for mypy coverage report
   - matplotlib
   - pip
@@ -24,4 +25,3 @@ dependencies:
   - toolz
   - numba
   - numbagg>=0.3
-  - scipy
```

ci/minimal-requirements.yml (+1)

```diff
@@ -10,6 +10,7 @@ dependencies:
   - pytest-pretty
   - pytest-xdist
   - numpy==1.22
+  - scipy
   - numpy_groupies==0.9.19
   - pandas
   - pooch
```

ci/no-dask.yml (+1)

```diff
@@ -6,6 +6,7 @@ dependencies:
   - netcdf4
   - pandas
   - numpy>=1.22
+  - scipy
   - pip
   - pytest
   - pytest-cov
```

ci/no-numba.yml (+1 −1)

```diff
@@ -9,6 +9,7 @@ dependencies:
   - netcdf4
   - pandas
   - numpy>=1.22
+  - scipy
   - lxml # for mypy coverage report
   - matplotlib
   - pip
@@ -21,4 +22,3 @@ dependencies:
   - numpy_groupies>=0.9.19
   - pooch
   - toolz
-  - scipy
```

ci/no-xarray.yml (+1)

```diff
@@ -6,6 +6,7 @@ dependencies:
   - netcdf4
   - pandas
   - numpy>=1.22
+  - scipy
   - pip
   - pytest
   - pytest-cov
```

flox/core.py (+77 −38)

```diff
@@ -9,6 +9,7 @@
 from collections import namedtuple
 from collections.abc import Sequence
 from functools import partial, reduce
+from itertools import product
 from numbers import Integral
 from typing import (
     TYPE_CHECKING,
@@ -23,6 +24,7 @@
 import numpy_groupies as npg
 import pandas as pd
 import toolz as tlz
+from scipy.sparse import csc_array

 from . import xrdtypes
 from .aggregate_flox import _prepare_for_flox
@@ -203,6 +205,16 @@ def _unique(a: np.ndarray) -> np.ndarray:
     return np.sort(pd.unique(a.reshape(-1)))


+def slices_from_chunks(chunks):
+    """slightly modified from dask.array.core.slices_from_chunks to be lazy"""
+    cumdims = [tlz.accumulate(operator.add, bds, 0) for bds in chunks]
+    slices = (
+        (slice(s, s + dim) for s, dim in zip(starts, shapes))
+        for starts, shapes in zip(cumdims, chunks)
+    )
+    return product(*slices)
+
+
 @memoize
 def find_group_cohorts(labels, chunks, merge: bool = True) -> dict:
     """
@@ -215,9 +227,10 @@ def find_group_cohorts(labels, chunks, merge: bool = True) -> dict:
     Parameters
     ----------
     labels : np.ndarray
-        mD Array of group labels
+        mD Array of integer group codes, factorized so that -1
+        represents NaNs.
     chunks : tuple
-        nD array that is being reduced
+        chunks of the array being reduced
     merge : bool, optional
         Attempt to merge cohorts when one cohort's chunks are a subset
         of another cohort's chunks.
@@ -227,33 +240,59 @@ def find_group_cohorts(labels, chunks, merge: bool = True) -> dict:
     cohorts: dict_values
         Iterable of cohorts
     """
-    import dask
-
     # To do this, we must have values in memory so casting to numpy should be safe
     labels = np.asarray(labels)

-    # Build an array with the shape of labels, but where every element is the "chunk number"
-    # 1. First subset the array appropriately
-    axis = range(-labels.ndim, 0)
-    # Easier to create a dask array and use the .blocks property
-    array = dask.array.empty(tuple(sum(c) for c in chunks), chunks=chunks)
-    labels = np.broadcast_to(labels, array.shape[-labels.ndim :])
-
-    # Iterate over each block and create a new block of same shape with "chunk number"
-    shape = tuple(array.blocks.shape[ax] for ax in axis)
-    # Use a numpy object array to enable assignment in the loop
-    # TODO: is it possible to just use a nested list?
-    # That is what we need for `np.block`
-    blocks = np.empty(shape, dtype=object)
-    array_chunks = tuple(np.array(c) for c in array.chunks)
-    for idx, blockindex in enumerate(np.ndindex(array.numblocks)):
-        chunkshape = tuple(c[i] for c, i in zip(array_chunks, blockindex))
-        blocks[blockindex] = np.full(chunkshape, idx)
-    which_chunk = np.block(blocks.tolist()).reshape(-1)
-
-    raveled = labels.reshape(-1)
-    # these are chunks where a label is present
-    label_chunks = pd.Series(which_chunk).groupby(raveled).unique()
+    shape = tuple(sum(c) for c in chunks)
+    nchunks = math.prod(len(c) for c in chunks)
+
+    # assumes that `labels` are factorized
+    nlabels = labels.max() + 1
+
+    labels = np.broadcast_to(labels, shape[-labels.ndim :])
+
+    rows = []
+    cols = []
+    # Add one to handle the -1 sentinel value
+    label_is_present = np.zeros((nlabels + 1,), dtype=bool)
+    ilabels = np.arange(nlabels)
+    for idx, region in enumerate(slices_from_chunks(chunks)):
+        # This is a quite fast way to find unique integers, when we know how many there are
+        # inspired by a similar idea in numpy_groupies for first, last
+        # instead of explicitly finding uniques, repeatedly write True to the same location
+        subset = labels[region]
+        # The reshape is not strictly necessary but is about 100ms faster on a test problem.
+        label_is_present[subset.reshape(-1)] = True
+        # skip the -1 sentinel by slicing
+        uniques = ilabels[label_is_present[:-1]]
+        rows.append([idx] * len(uniques))
+        cols.append(uniques)
+        label_is_present[:] = False
+    rows_array = np.concatenate(rows)
+    cols_array = np.concatenate(cols)
+    data = np.broadcast_to(np.array(1, dtype=np.uint8), rows_array.shape)
+    bitmask = csc_array((data, (rows_array, cols_array)), dtype=bool, shape=(nchunks, nlabels))
+    label_chunks = {
+        lab: bitmask.indices[slice(bitmask.indptr[lab], bitmask.indptr[lab + 1])]
+        for lab in range(nlabels)
+    }
+
+    ## numpy bitmask approach, faster than finding uniques, but lots of memory
+    # bitmask = np.zeros((nchunks, nlabels), dtype=bool)
+    # for idx, region in enumerate(slices_from_chunks(chunks)):
+    #     bitmask[idx, labels[region]] = True
+    # bitmask = bitmask[:, :-1]
+    # chunk = np.arange(nchunks)  # [:, np.newaxis] * bitmask
+    # label_chunks = {lab: chunk[bitmask[:, lab]] for lab in range(nlabels - 1)}
+
+    ## Pandas GroupBy approach, quite slow!
+    # which_chunk = np.empty(shape, dtype=np.int64)
+    # for idx, region in enumerate(slices_from_chunks(chunks)):
+    #     which_chunk[region] = idx
+    # which_chunk = which_chunk.reshape(-1)
+    # raveled = labels.reshape(-1)
+    # # these are chunks where a label is present
+    # label_chunks = pd.Series(which_chunk).groupby(raveled).unique()

     # These invert the label_chunks mapping so we know which labels occur together.
     def invert(x) -> tuple[np.ndarray, ...]:
@@ -264,33 +303,31 @@ def invert(x) -> tuple[np.ndarray, ...]:

     # If our dataset has chunksize one along the axis,
     # then no merging is possible.
-    single_chunks = all((ac == 1).all() for ac in array_chunks)
+    single_chunks = all(all(a == 1 for a in ac) for ac in chunks)

-    if merge and not single_chunks:
+    if not single_chunks and merge:
         # First sort by number of chunks occupied by cohort
         sorted_chunks_cohorts = dict(
             sorted(chunks_cohorts.items(), key=lambda kv: len(kv[0]), reverse=True)
         )

-        items = tuple(sorted_chunks_cohorts.items())
+        items = tuple((k, set(k), v) for k, v in sorted_chunks_cohorts.items() if k)

         merged_cohorts = {}
-        merged_keys = []
+        merged_keys = set()

         # Now we iterate starting with the longest number of chunks,
         # and then merge in cohorts that are present in a subset of those chunks
         # I think this is suboptimal and must fail at some point.
         # But it might work for most cases. There must be a better way...
-        for idx, (k1, v1) in enumerate(items):
+        for idx, (k1, set_k1, v1) in enumerate(items):
             if k1 in merged_keys:
                 continue
             merged_cohorts[k1] = copy.deepcopy(v1)
-            for k2, v2 in items[idx + 1 :]:
-                if k2 in merged_keys:
-                    continue
-                if set(k2).issubset(set(k1)):
+            for k2, set_k2, v2 in items[idx + 1 :]:
+                if k2 not in merged_keys and set_k2.issubset(set_k1):
                     merged_cohorts[k1].extend(v2)
-                    merged_keys.append(k2)
+                    merged_keys.update((k2,))

     # make sure each cohort is sorted after merging
     sorted_merged_cohorts = {k: sorted(v) for k, v in merged_cohorts.items()}
@@ -1373,7 +1410,6 @@ def dask_groupby_agg(

     inds = tuple(range(array.ndim))
     name = f"groupby_{agg.name}"
-    token = dask.base.tokenize(array, by, agg, expected_groups, axis)

     if expected_groups is None and reindex:
         expected_groups = _get_expected_groups(by, sort=sort)
@@ -1394,6 +1430,9 @@ def dask_groupby_agg(
         by = dask.array.from_array(by, chunks=chunks)
     _, (array, by) = dask.array.unify_chunks(array, inds, by, inds[-by.ndim :])

+    # tokenize here since by has already been hashed if its numpy
+    token = dask.base.tokenize(array, by, agg, expected_groups, axis)
+
     # preprocess the array:
     # - for argreductions, this zips the index together with the array block
     # - not necessary for blockwise with argreductions
@@ -1510,7 +1549,7 @@ def dask_groupby_agg(
             index = pd.Index(cohort)
             subset = subset_to_blocks(intermediate, blks, array.blocks.shape[-len(axis) :])
             reindexed = dask.array.map_blocks(
-                reindex_intermediates, subset, agg=agg, unique_groups=index, meta=subset._meta
+                reindex_intermediates, subset, agg, index, meta=subset._meta
             )
             # now that we have reindexed, we can set reindex=True explicitlly
             reduced_.append(
```
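
A tiny standalone illustration of the CSC trick used in the new `label_chunks` construction: for a `scipy.sparse.csc_array`, `indptr[j]:indptr[j + 1]` delimits column `j`'s entries in `indices`, so the chunk numbers containing each label can be read out without densifying the matrix. The numbers below are made up:

```python
import numpy as np
from scipy.sparse import csc_array

# chunk-by-label boolean matrix: entry (i, j) is set when label j occurs in chunk i
rows = np.array([0, 0, 1, 2, 2])  # chunk indices
cols = np.array([0, 1, 1, 0, 2])  # label indices
data = np.ones_like(rows, dtype=np.uint8)
bitmask = csc_array((data, (rows, cols)), dtype=bool, shape=(3, 3))

label_chunks = {
    lab: bitmask.indices[slice(bitmask.indptr[lab], bitmask.indptr[lab + 1])]
    for lab in range(3)
}
# label 0 lives in chunks 0 and 2; label 1 in chunks 0 and 1; label 2 in chunk 2:
# {0: array([0, 2]), 1: array([0, 1]), 2: array([2])}  (integer dtype may vary)
```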

flox/visualize.py (+25 −24)

```diff
@@ -121,44 +121,44 @@ def get_colormap(N):
     ncolors = len(cmap.colors)
     q = N // ncolors
     r = N % ncolors
-    cmap = mpl.colors.ListedColormap(np.concatenate([cmap.colors] * q + [cmap.colors[:r]]))
-    cmap.set_under(color="w")
+    cmap = mpl.colors.ListedColormap(np.concatenate([cmap.colors] * q + [cmap.colors[: r + 1]]))
+    cmap.set_under(color="k")
     return cmap


-def factorize_cohorts(by, cohorts):
-    factorized = np.full(by.shape, -1)
+def factorize_cohorts(chunks, cohorts):
+    chunk_grid = tuple(len(c) for c in chunks)
+    nchunks = np.prod(chunk_grid)
+    factorized = np.full((nchunks,), -1, dtype=np.int64)
     for idx, cohort in enumerate(cohorts):
-        factorized[np.isin(by, cohort)] = idx
-    return factorized
+        factorized[list(cohort)] = idx
+    return factorized.reshape(chunk_grid)


-def visualize_cohorts_2d(by, array):
+def visualize_cohorts_2d(by, chunks):
     assert by.ndim == 2
     print("finding cohorts...")
-    before_merged = find_group_cohorts(
-        by, [array.chunks[ax] for ax in range(-by.ndim, 0)], merge=False
-    ).values()
-    merged = find_group_cohorts(
-        by, [array.chunks[ax] for ax in range(-by.ndim, 0)], merge=True
-    ).values()
+    chunks = [chunks[ax] for ax in range(-by.ndim, 0)]
+    before_merged = find_group_cohorts(by, chunks, merge=False)
+    merged = find_group_cohorts(by, chunks, merge=True)
     print("finished cohorts...")

-    xticks = np.cumsum(array.chunks[-1])
-    yticks = np.cumsum(array.chunks[-2])
+    xticks = np.cumsum(chunks[-1])
+    yticks = np.cumsum(chunks[-2])

-    f, ax = plt.subplots(2, 2, constrained_layout=True, sharex=True, sharey=True)
+    f, ax = plt.subplots(1, 3, constrained_layout=True, sharex=False, sharey=False)
     ax = ax.ravel()
-    ax[1].set_visible(False)
-    ax = ax[[0, 2, 3]]
+    # ax[1].set_visible(False)
+    # ax = ax[[0, 2, 3]]

     ngroups = len(_unique(by))
-    h0 = ax[0].imshow(by, cmap=get_colormap(ngroups))
-    h1 = _visualize_cohorts(by, before_merged, ax=ax[1])
-    h2 = _visualize_cohorts(by, merged, ax=ax[2])
+    h0 = ax[0].imshow(by, vmin=0, cmap=get_colormap(ngroups))
+    h1 = _visualize_cohorts(chunks, before_merged, ax=ax[1])
+    h2 = _visualize_cohorts(chunks, merged, ax=ax[2])

     for axx in ax:
         axx.grid(True, which="both")
+    for axx in ax[:1]:
         axx.set_xticks(xticks)
         axx.set_yticks(yticks)
     for h, axx in zip([h0, h1, h2], ax):
@@ -167,14 +167,15 @@ def visualize_cohorts_2d(by, array):
     ax[0].set_title(f"by: {ngroups} groups")
     ax[1].set_title(f"{len(before_merged)} cohorts")
     ax[2].set_title(f"{len(merged)} merged cohorts")
-    f.set_size_inches((6, 6))
+    f.set_size_inches((12, 6))


-def _visualize_cohorts(by, cohorts, ax=None):
+def _visualize_cohorts(chunks, cohorts, ax=None):
     if ax is None:
         _, ax = plt.subplots(1, 1)

-    ax.imshow(factorize_cohorts(by, cohorts), vmin=0, cmap=get_colormap(len(cohorts)))
+    data = factorize_cohorts(chunks, cohorts)
+    return ax.imshow(data, vmin=0, cmap=get_colormap(len(cohorts)))


 def visualize_groups_2d(labels, y0=0, **kwargs):
```
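
To make the "Visualize chunks rather than cohort labels" change concrete, here is the new `factorize_cohorts` (copied from the diff above) run on a made-up 2×3 chunk grid, where each cohort is an iterable of flat chunk indices as returned by `find_group_cohorts`:

```python
import numpy as np


def factorize_cohorts(chunks, cohorts):
    # paint each cohort's id onto the grid of chunks (not onto the label array)
    chunk_grid = tuple(len(c) for c in chunks)
    nchunks = np.prod(chunk_grid)
    factorized = np.full((nchunks,), -1, dtype=np.int64)
    for idx, cohort in enumerate(cohorts):
        factorized[list(cohort)] = idx
    return factorized.reshape(chunk_grid)


# two chunks along the first axis, three along the second -> a 2x3 chunk grid
chunks = ((10, 10), (5, 5, 5))
cohorts = [(0, 1), (2, 5), (3, 4)]  # flat chunk indices belonging to each cohort
print(factorize_cohorts(chunks, cohorts))
# [[0 0 1]
#  [2 2 1]]
```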

pyproject.toml (+2)

```diff
@@ -41,6 +41,7 @@ requires = [
     "pandas",
     "numpy>=1.22",
     "numpy_groupies>=0.9.19",
+    "scipy",
     "toolz",
     "setuptools>=61.0.0",
     "setuptools_scm[toml]>=7.0",
@@ -101,6 +102,7 @@ known-third-party = [
     "pkg_resources",
     "pytest",
     "setuptools",
+    "scipy",
     "xarray"
 ]

```
