Commit cdab326

fix typos (using codespell) (#6316)
* fix typos (using codespell)
* revert 'split'
1 parent 2ab9f36 commit cdab326

32 files changed: +56 -56 lines changed

doc/examples/ROMS_ocean_model.ipynb (+1 -1)

@@ -77,7 +77,7 @@
 "ds = xr.tutorial.open_dataset(\"ROMS_example.nc\", chunks={\"ocean_time\": 1})\n",
 "\n",
 "# This is a way to turn on chunking and lazy evaluation. Opening with mfdataset, or\n",
-"# setting the chunking in the open_dataset would also achive this.\n",
+"# setting the chunking in the open_dataset would also achieve this.\n",
 "ds"
 ]
 },

doc/gallery/plot_colorbar_center.py (+1 -1)

@@ -38,6 +38,6 @@
 ax4.set_title("Celsius: center=False")
 ax4.set_ylabel("")

-# Mke it nice
+# Make it nice
 plt.tight_layout()
 plt.show()

doc/internals/how-to-add-new-backend.rst (+2 -2)

@@ -317,7 +317,7 @@ grouped in three types of indexes
 :py:class:`~xarray.core.indexing.OuterIndexer` and
 :py:class:`~xarray.core.indexing.VectorizedIndexer`.
 This implies that the implementation of the method ``__getitem__`` can be tricky.
-In oder to simplify this task, Xarray provides a helper function,
+In order to simplify this task, Xarray provides a helper function,
 :py:func:`~xarray.core.indexing.explicit_indexing_adapter`, that transforms
 all the input ``indexer`` types (`basic`, `outer`, `vectorized`) in a tuple
 which is interpreted correctly by your backend.
@@ -426,7 +426,7 @@ The ``OUTER_1VECTOR`` indexing shall supports number, slices and at most one
 list. The behaviour with the list shall be the same of ``OUTER`` indexing.

 If you support more complex indexing as `explicit indexing` or
-`numpy indexing`, you can have a look to the implemetation of Zarr backend and Scipy backend,
+`numpy indexing`, you can have a look to the implementation of Zarr backend and Scipy backend,
 currently available in :py:mod:`~xarray.backends` module.

 .. _RST preferred_chunks:
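
For context, a minimal sketch of the ``__getitem__`` pattern this helper supports (``MyBackendArray`` and its raw indexing method are hypothetical names; ``BackendArray``, ``explicit_indexing_adapter``, and ``IndexingSupport`` are the xarray names the doc refers to):

    from xarray.backends import BackendArray
    from xarray.core import indexing


    class MyBackendArray(BackendArray):
        def __init__(self, array):
            self.array = array  # the underlying on-disk array
            self.shape = array.shape
            self.dtype = array.dtype

        def __getitem__(self, key):
            # Decompose basic/outer/vectorized indexers into keys the
            # backend natively understands.
            return indexing.explicit_indexing_adapter(
                key,
                self.shape,
                indexing.IndexingSupport.BASIC,
                self._raw_indexing_method,
            )

        def _raw_indexing_method(self, key):
            # At this point key is a tuple of ints and slices only (BASIC).
            return self.array[key]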

doc/internals/zarr-encoding-spec.rst (+1 -1)

@@ -14,7 +14,7 @@ for the storage of the NetCDF data model in Zarr; see
 discussion.

 First, Xarray can only read and write Zarr groups. There is currently no support
-for reading / writting individual Zarr arrays. Zarr groups are mapped to
+for reading / writing individual Zarr arrays. Zarr groups are mapped to
 Xarray ``Dataset`` objects.

 Second, from Xarray's point of view, the key difference between
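
A quick illustration of the group-to-``Dataset`` mapping described above (the store path is made up):

    import xarray as xr

    ds = xr.Dataset({"t": ("time", [1.0, 2.0, 3.0])})
    ds.to_zarr("example.zarr")                   # always writes a Zarr group
    roundtripped = xr.open_zarr("example.zarr")  # reads the group back as a Dataset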

doc/roadmap.rst (+1 -1)

@@ -112,7 +112,7 @@ A cleaner model would be to elevate ``indexes`` to an explicit part of
 xarray's data model, e.g., as attributes on the ``Dataset`` and
 ``DataArray`` classes. Indexes would need to be propagated along with
 coordinates in xarray operations, but will no longer would need to have
-a one-to-one correspondance with coordinate variables. Instead, an index
+a one-to-one correspondence with coordinate variables. Instead, an index
 should be able to refer to multiple (possibly multidimensional)
 coordinates that define it. See `GH
 1603 <https://github.com/pydata/xarray/issues/1603>`__ for full details

doc/user-guide/time-series.rst (+1 -1)

@@ -101,7 +101,7 @@ You can also select a particular time by indexing with a

 ds.sel(time=datetime.time(12))

-For more details, read the pandas documentation and the section on `Indexing Using Datetime Components <datetime_component_indexing>`_ (i.e. using the ``.dt`` acessor).
+For more details, read the pandas documentation and the section on `Indexing Using Datetime Components <datetime_component_indexing>`_ (i.e. using the ``.dt`` accessor).

 .. _dt_accessor:
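
A small sketch of the ``.dt`` accessor mentioned above (data and frequency are made up):

    import pandas as pd
    import xarray as xr

    times = pd.date_range("2000-01-01", periods=48, freq="H")
    ds = xr.Dataset(coords={"time": times})
    ds.time.dt.hour    # 0..23 for each timestamp
    ds.time.dt.season  # 'DJF', 'MAM', ...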

doc/whats-new.rst (+6 -6)

@@ -138,7 +138,7 @@ Bug fixes
   By `Michael Delgado <https://github.com/delgadom>`_.
 - `dt.season <https://docs.xarray.dev/en/stable/generated/xarray.DataArray.dt.season.html>`_ can now handle NaN and NaT. (:pull:`5876`).
   By `Pierre Loicq <https://github.com/pierreloicq>`_.
-- Determination of zarr chunks handles empty lists for encoding chunks or variable chunks that occurs in certain cirumstances (:pull:`5526`). By `Chris Roat <https://github.com/chrisroat>`_.
+- Determination of zarr chunks handles empty lists for encoding chunks or variable chunks that occurs in certain circumstances (:pull:`5526`). By `Chris Roat <https://github.com/chrisroat>`_.

 Internal Changes
 ~~~~~~~~~~~~~~~~
@@ -706,7 +706,7 @@ Breaking changes
   By `Alessandro Amici <https://github.com/alexamici>`_.
 - Functions that are identities for 0d data return the unchanged data
   if axis is empty. This ensures that Datasets where some variables do
-  not have the averaged dimensions are not accidentially changed
+  not have the averaged dimensions are not accidentally changed
   (:issue:`4885`, :pull:`5207`).
   By `David Schwörer <https://github.com/dschwoerer>`_.
 - :py:attr:`DataArray.coarsen` and :py:attr:`Dataset.coarsen` no longer support passing ``keep_attrs``
@@ -1419,7 +1419,7 @@ New Features
 Enhancements
 ~~~~~~~~~~~~
 - Performance improvement of :py:meth:`DataArray.interp` and :py:func:`Dataset.interp`
-  We performs independant interpolation sequentially rather than interpolating in
+  We performs independent interpolation sequentially rather than interpolating in
   one large multidimensional space. (:issue:`2223`)
   By `Keisuke Fujii <https://github.com/fujiisoup>`_.
 - :py:meth:`DataArray.interp` now support interpolations over chunked dimensions (:pull:`4155`). By `Alexandre Poux <https://github.com/pums974>`_.
@@ -2770,7 +2770,7 @@ Breaking changes
 - ``Dataset.T`` has been removed as a shortcut for :py:meth:`Dataset.transpose`.
   Call :py:meth:`Dataset.transpose` directly instead.
 - Iterating over a ``Dataset`` now includes only data variables, not coordinates.
-  Similarily, calling ``len`` and ``bool`` on a ``Dataset`` now
+  Similarly, calling ``len`` and ``bool`` on a ``Dataset`` now
   includes only data variables.
 - ``DataArray.__contains__`` (used by Python's ``in`` operator) now checks
   array data, not coordinates.
@@ -3908,7 +3908,7 @@ Bug fixes
   (:issue:`1606`).
   By `Joe Hamman <https://github.com/jhamman>`_.

-- Fix bug when using ``pytest`` class decorators to skiping certain unittests.
+- Fix bug when using ``pytest`` class decorators to skipping certain unittests.
   The previous behavior unintentionally causing additional tests to be skipped
   (:issue:`1531`). By `Joe Hamman <https://github.com/jhamman>`_.

@@ -5656,7 +5656,7 @@ Bug fixes
 - Several bug fixes related to decoding time units from netCDF files
   (:issue:`316`, :issue:`330`). Thanks Stefan Pfenninger!
 - xray no longer requires ``decode_coords=False`` when reading datasets with
-  unparseable coordinate attributes (:issue:`308`).
+  unparsable coordinate attributes (:issue:`308`).
 - Fixed ``DataArray.loc`` indexing with ``...`` (:issue:`318`).
 - Fixed an edge case that resulting in an error when reindexing
   multi-dimensional variables (:issue:`315`).

xarray/backends/common.py (+1 -1)

@@ -160,7 +160,7 @@ def sync(self, compute=True):
         import dask.array as da

         # TODO: consider wrapping targets with dask.delayed, if this makes
-        # for any discernable difference in perforance, e.g.,
+        # for any discernible difference in perforance, e.g.,
         # targets = [dask.delayed(t) for t in self.targets]

         delayed_store = da.store(

xarray/backends/file_manager.py (+1 -1)

@@ -204,7 +204,7 @@ def _acquire_with_cache_info(self, needs_lock=True):
                 kwargs["mode"] = self._mode
             file = self._opener(*self._args, **kwargs)
             if self._mode == "w":
-                # ensure file doesn't get overriden when opened again
+                # ensure file doesn't get overridden when opened again
                 self._mode = "a"
             self._cache[self._key] = file
             return file, False

xarray/backends/pseudonetcdf_.py (+1 -1)

@@ -105,7 +105,7 @@ class PseudoNetCDFBackendEntrypoint(BackendEntrypoint):
     available = has_pseudonetcdf

     # *args and **kwargs are not allowed in open_backend_dataset_ kwargs,
-    # unless the open_dataset_parameters are explicity defined like this:
+    # unless the open_dataset_parameters are explicitly defined like this:
     open_dataset_parameters = (
         "filename_or_obj",
         "mask_and_scale",

xarray/backends/zarr.py (+1 -1)

@@ -179,7 +179,7 @@ def _determine_zarr_chunks(enc_chunks, var_chunks, ndim, name, safe_chunks):


 def _get_zarr_dims_and_attrs(zarr_obj, dimension_key):
-    # Zarr arrays do not have dimenions. To get around this problem, we add
+    # Zarr arrays do not have dimensions. To get around this problem, we add
     # an attribute that specifies the dimension. We have to hide this attribute
     # when we send the attributes to the user.
     # zarr_obj can be either a zarr group or zarr array
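
The hidden attribute is the ``_ARRAY_DIMENSIONS`` key from the Zarr encoding spec; a quick way to see it (store path made up, zarr v2-format store assumed):

    import xarray as xr
    import zarr

    xr.Dataset({"u": ("time", [1, 2, 3])}).to_zarr("tmp.zarr")
    # the dimension names xarray stashed on the raw zarr array
    zarr.open_group("tmp.zarr")["u"].attrs["_ARRAY_DIMENSIONS"]  # ['time']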

xarray/convert.py (+1 -1)

@@ -235,7 +235,7 @@ def _iris_cell_methods_to_str(cell_methods_obj):


 def _name(iris_obj, default="unknown"):
-    """Mimicks `iris_obj.name()` but with different name resolution order.
+    """Mimics `iris_obj.name()` but with different name resolution order.

     Similar to iris_obj.name() method, but using iris_obj.var_name first to
     enable roundtripping.

xarray/core/accessor_str.py (+2 -2)

@@ -456,7 +456,7 @@ def cat(
             Strings or array-like of strings to concatenate elementwise with
             the current DataArray.
         sep : str or array-like of str, default: "".
-            Seperator to use between strings.
+            Separator to use between strings.
             It is broadcast in the same way as the other input strings.
             If array-like, its dimensions will be placed at the end of the output array dimensions.
@@ -539,7 +539,7 @@ def join(
             Only one dimension is allowed at a time.
             Optional for 0D or 1D DataArrays, required for multidimensional DataArrays.
         sep : str or array-like, default: "".
-            Seperator to use between strings.
+            Separator to use between strings.
             It is broadcast in the same way as the other input strings.
             If array-like, its dimensions will be placed at the end of the output array dimensions.
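
A short sketch of the ``sep`` argument these docstrings describe (data is made up):

    import xarray as xr

    words = xr.DataArray([["hello", "world"]], dims=["x", "letters"])
    words.str.join(dim="letters", sep=" ")  # -> ["hello world"] along x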

xarray/core/combine.py (+1 -1)

@@ -135,7 +135,7 @@ def _infer_concat_order_from_coords(datasets):
             order = rank.astype(int).values - 1

             # Append positions along extra dimension to structure which
-            # encodes the multi-dimensional concatentation order
+            # encodes the multi-dimensional concatenation order
             tile_ids = [
                 tile_id + (position,) for tile_id, position in zip(tile_ids, order)
             ]

xarray/core/computation.py (+1 -1)

@@ -1941,7 +1941,7 @@ def unify_chunks(*objects: T_Xarray) -> tuple[T_Xarray, ...]:
         for obj in objects
     ]

-    # Get argumets to pass into dask.array.core.unify_chunks
+    # Get arguments to pass into dask.array.core.unify_chunks
     unify_chunks_args = []
     sizes: dict[Hashable, int] = {}
     for ds in datasets:

xarray/core/concat.py (+1 -1)

@@ -455,7 +455,7 @@ def _dataset_concat(
     if (dim in coord_names or dim in data_names) and dim not in dim_names:
         datasets = [ds.expand_dims(dim) for ds in datasets]

-    # determine which variables to concatentate
+    # determine which variables to concatenate
     concat_over, equals, concat_dim_lengths = _calc_concat_over(
         datasets, dim, dim_names, data_vars, coords, compat
     )

xarray/core/dataarray.py (+2 -2)

@@ -2584,7 +2584,7 @@ def interpolate_na(
         )

     def ffill(self, dim: Hashable, limit: int = None) -> DataArray:
-        """Fill NaN values by propogating values forward
+        """Fill NaN values by propagating values forward

         *Requires bottleneck.*

@@ -2609,7 +2609,7 @@ def ffill(self, dim: Hashable, limit: int = None) -> DataArray:
         return ffill(self, dim, limit=limit)

     def bfill(self, dim: Hashable, limit: int = None) -> DataArray:
-        """Fill NaN values by propogating values backward
+        """Fill NaN values by propagating values backward

         *Requires bottleneck.*
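
A quick sketch of the two methods above (toy data; bottleneck must be installed):

    import numpy as np
    import xarray as xr

    da = xr.DataArray([1.0, np.nan, np.nan, 4.0], dims="time")
    da.ffill(dim="time")           # [1., 1., 1., 4.]
    da.bfill(dim="time", limit=1)  # [1., nan, 4., 4.]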

xarray/core/dataset.py (+6 -6)

@@ -379,7 +379,7 @@ def _check_chunks_compatibility(var, chunks, preferred_chunks):


 def _get_chunk(var, chunks):
-    # chunks need to be explicity computed to take correctly into accout
+    # chunks need to be explicitly computed to take correctly into account
     # backend preferred chunking
     import dask.array as da

@@ -1529,7 +1529,7 @@ def __setitem__(self, key: Hashable | list[Hashable] | Mapping, value) -> None:
                 except Exception as e:
                     if processed:
                         raise RuntimeError(
-                            "An error occured while setting values of the"
+                            "An error occurred while setting values of the"
                             f" variable '{name}'. The following variables have"
                             f" been successfully updated:\n{processed}"
                         ) from e
@@ -1976,7 +1976,7 @@ def to_zarr(
             metadata for existing stores (falling back to non-consolidated).
         append_dim : hashable, optional
             If set, the dimension along which the data will be appended. All
-            other dimensions on overriden variables must remain the same size.
+            other dimensions on overridden variables must remain the same size.
         region : dict, optional
             Optional mapping from dimension names to integer slices along
             dataset dimensions to indicate the region of existing zarr array(s)
@@ -2001,7 +2001,7 @@ def to_zarr(
             Set False to override this restriction; however, data may become corrupted
             if Zarr arrays are written in parallel. This option may be useful in combination
             with ``compute=False`` to initialize a Zarr from an existing
-            Dataset with aribtrary chunk structure.
+            Dataset with arbitrary chunk structure.
         storage_options : dict, optional
             Any additional parameters for the storage backend (ignored for local
             paths).
@@ -4930,7 +4930,7 @@ def interpolate_na(
         return new

     def ffill(self, dim: Hashable, limit: int = None) -> Dataset:
-        """Fill NaN values by propogating values forward
+        """Fill NaN values by propagating values forward

         *Requires bottleneck.*

@@ -4956,7 +4956,7 @@ def ffill(self, dim: Hashable, limit: int = None) -> Dataset:
         return new

     def bfill(self, dim: Hashable, limit: int = None) -> Dataset:
-        """Fill NaN values by propogating values backward
+        """Fill NaN values by propagating values backward

         *Requires bottleneck.*

xarray/core/merge.py (+1 -1)

@@ -587,7 +587,7 @@ def merge_core(
     Parameters
     ----------
     objects : list of mapping
-        All values must be convertable to labeled arrays.
+        All values must be convertible to labeled arrays.
     compat : {"identical", "equals", "broadcast_equals", "no_conflicts", "override"}, optional
         Compatibility checks to use when merging variables.
     join : {"outer", "inner", "left", "right"}, optional

xarray/core/missing.py (+4 -4)

@@ -573,7 +573,7 @@ def _localize(var, indexes_coords):

 def _floatize_x(x, new_x):
     """Make x and new_x float.
-    This is particulary useful for datetime dtype.
+    This is particularly useful for datetime dtype.
     x, new_x: tuple of np.ndarray
     """
     x = list(x)
@@ -624,7 +624,7 @@ def interp(var, indexes_coords, method, **kwargs):
     kwargs["bounds_error"] = kwargs.get("bounds_error", False)

     result = var
-    # decompose the interpolation into a succession of independant interpolation
+    # decompose the interpolation into a succession of independent interpolation
     for indexes_coords in decompose_interp(indexes_coords):
         var = result

@@ -731,7 +731,7 @@ def interp_func(var, x, new_x, method, kwargs):
         for i in range(new_x[0].ndim)
     }

-    # if usefull, re-use localize for each chunk of new_x
+    # if useful, re-use localize for each chunk of new_x
     localize = (method in ["linear", "nearest"]) and (new_x[0].chunks is not None)

     # scipy.interpolate.interp1d always forces to float.
@@ -825,7 +825,7 @@ def _dask_aware_interpnd(var, *coords, interp_func, interp_kwargs, localize=True


 def decompose_interp(indexes_coords):
-    """Decompose the interpolation into a succession of independant interpolation keeping the order"""
+    """Decompose the interpolation into a succession of independent interpolation keeping the order"""

    dest_dims = [
        dest[1].dims if dest[1].ndim > 0 else [dim]
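
The decomposition these comments describe is what powers multi-dimensional ``interp``; a toy illustration (values made up; requires scipy):

    import numpy as np
    import xarray as xr

    da = xr.DataArray(
        np.arange(12.0).reshape(3, 4),
        dims=("x", "y"),
        coords={"x": [0, 1, 2], "y": [0, 1, 2, 3]},
    )
    # internally decomposed into independent 1D interpolations over x and y
    da.interp(x=[0.5, 1.5], y=[0.5, 2.5])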

xarray/core/rolling.py (+2 -2)

@@ -33,7 +33,7 @@
 Returns
 -------
 reduced : same type as caller
-    New object with `{name}` applied along its rolling dimnension.
+    New object with `{name}` applied along its rolling dimension.
 """


@@ -767,7 +767,7 @@ def __init__(self, obj, windows, boundary, side, coord_func):
         exponential window along (e.g. `time`) to the size of the moving window.
     boundary : 'exact' | 'trim' | 'pad'
         If 'exact', a ValueError will be raised if dimension size is not a
-        multiple of window size. If 'trim', the excess indexes are trimed.
+        multiple of window size. If 'trim', the excess indexes are trimmed.
         If 'pad', NA will be padded.
     side : 'left' or 'right' or mapping from dimension to 'left' or 'right'
     coord_func : mapping from coordinate name to func.
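
A tiny sketch of the ``boundary`` choices documented here, via the public ``coarsen`` API (data made up):

    import xarray as xr

    da = xr.DataArray(range(7), dims="time")
    da.coarsen(time=3, boundary="trim").mean()  # drops the leftover element
    da.coarsen(time=3, boundary="pad").mean()   # pads with NA instead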

xarray/core/variable.py (+1 -1)

@@ -700,7 +700,7 @@ def _broadcast_indexes_outer(self, key):
         return dims, OuterIndexer(tuple(new_key)), None

     def _nonzero(self):
-        """Equivalent numpy's nonzero but returns a tuple of Varibles."""
+        """Equivalent numpy's nonzero but returns a tuple of Variables."""
         # TODO we should replace dask's native nonzero
         # after https://github.com/dask/dask/issues/1076 is implemented.
         nonzeros = np.nonzero(self.data)

xarray/tests/test_backends.py (+3 -3)

@@ -1937,7 +1937,7 @@ def test_chunk_encoding_with_dask(self):
         with self.roundtrip(ds_chunk4) as actual:
             assert (4,) == actual["var1"].encoding["chunks"]

-        # TODO: remove this failure once syncronized overlapping writes are
+        # TODO: remove this failure once synchronized overlapping writes are
         # supported by xarray
         ds_chunk4["var1"].encoding.update({"chunks": 5})
         with pytest.raises(NotImplementedError, match=r"named 'var1' would overlap"):
@@ -2255,7 +2255,7 @@ def test_write_region_mode(self, mode):

     @requires_dask
     def test_write_preexisting_override_metadata(self):
-        """Metadata should be overriden if mode="a" but not in mode="r+"."""
+        """Metadata should be overridden if mode="a" but not in mode="r+"."""
         original = Dataset(
             {"u": (("x",), np.zeros(10), {"variable": "original"})},
             attrs={"global": "original"},
@@ -2967,7 +2967,7 @@ def test_open_fileobj(self):
         with pytest.raises(TypeError, match="not a valid NetCDF 3"):
             open_dataset(f, engine="scipy")

-        # TOOD: this additional open is required since scipy seems to close the file
+        # TODO: this additional open is required since scipy seems to close the file
         # when it fails on the TypeError (though didn't when we used
         # `raises_regex`?). Ref https://github.com/pydata/xarray/pull/5191
         with open(tmp_file, "rb") as f:

xarray/tests/test_calendar_ops.py (+1 -1)

@@ -161,7 +161,7 @@ def test_convert_calendar_errors():
     with pytest.raises(ValueError, match="Argument `align_on` must be specified"):
         convert_calendar(src_nl, "360_day")

-    # Standard doesn't suuport year 0
+    # Standard doesn't support year 0
     with pytest.raises(
         ValueError, match="Source time coordinate contains dates with year 0"
     ):
