Releases: xcube-dev/xcube
0.7.0
Changes in 0.7.0
- Introduced abstract base class `xcube.util.jsonschema.JsonObject` which
  is now the super class of many classes that have JSON object representations.
  In Jupyter notebooks, instances of such classes are automatically rendered
  as JSON trees.
- `xcube gen2` CLI tool can now have multiple `-v` options, e.g. `-vvv`
  will now output detailed requests and responses.
- Added new Jupyter notebooks in `examples/notebooks/gen2`
  for the data cube generators in the package `xcube.core.gen2`.
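The automatic JSON-tree rendering mentioned above builds on the Jupyter/IPython display protocol. A minimal, illustrative sketch of that mechanism (the class below is hypothetical and not xcube's actual implementation; only the `_repr_json_` hook is IPython's real convention):

```python
# Illustrative only: any object exposing IPython's `_repr_json_` hook
# is rendered as an expandable JSON tree in Jupyter notebooks.

class ExampleConfig:  # hypothetical class, not part of xcube
    def __init__(self, name: str, size: int):
        self.name = name
        self.size = size

    def to_dict(self) -> dict:
        # JSON object representation of this instance
        return {"name": self.name, "size": self.size}

    def _repr_json_(self):
        # Called by Jupyter/IPython to render the object as a JSON tree
        return self.to_dict()
```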
- Fixed a problem in `JsonArraySchema` that occurred if a valid
  instance was `None`. A `TypeError: 'NoneType' object is not iterable`
  was raised in this case.
- The S3 data store `xcube.core.store.stores.s3.S3DataStore`
  now implements the `describe_data()` method.
  It therefore can also be used as a data store from which data is queried and read.
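For context on the `JsonArraySchema` fix above: the quoted message is Python's generic error when code iterates over `None`. A standalone reproduction of just the error, independent of xcube's code paths:

```python
# Iterating over None raises exactly the error quoted above.
items = None
try:
    for item in items:
        pass
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```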
- The `xcube gen2` data cube generator tool has been hidden from
  the set of "official" xcube tools. It is considered an internal tool
  that is subject to change at any time until its interface has stabilized.
  Please refer to `xcube gen2 --help` for more information.
- Added `coords` property to `DatasetDescriptor` class.
  The `data_vars` property of the `DatasetDescriptor` class is now a dictionary.
- Added `chunks` property to `VariableDescriptor` class.
- Removed function `reproject_crs_to_wgs84()` and tests (#375) because
  - it seemed to no longer be working with GDAL 3.1+;
  - there was no direct use in xcube itself;
  - xcube plans to get rid of GDAL dependencies.
- CLI tool `xcube gen2` may now also ingest non-cube datasets.
- Fixed unit tests broken by accident. (#396)
- Added new context manager `xcube.util.observe_dask_progress()` that can be used
  to observe tasks that are known to be dominated by Dask computations:

  ```python
  with observe_dask_progress('Writing dataset', 100):
      dataset.to_zarr(store)
  ```
- The xcube normalisation process, which ensures that a dataset meets the requirements
  of a cube, internally requested a lot of data, causing the process to be slow and
  expensive in terms of memory consumption. This problem was resolved by avoiding
  reading in these large amounts of data. (#392)
0.6.2.dev4
Changes in 0.6.2.dev4
- Internal changes (do not include in final CHANGES.md):
- Fixed de-serialisation of objects for additional_properties = True
- Added VariableDescriptor.ndim property
- Warn if additional properties are passed to ctors
- Update of xcube generator NBs
0.6.2.dev3
Changes in 0.6.2.dev3
- Introduced abstract base class `xcube.util.jsonschema.JsonObject` which
  is now the super class of many classes that have JSON object representations.
  In Jupyter notebooks, instances of such classes are automatically rendered
  as JSON trees.
- `xcube gen2` CLI tool can now have multiple `-v` options, e.g. `-vvv`
  will now output detailed requests and responses.
- Added new Jupyter notebooks in `examples/notebooks/gen2`
  for the data cube generators in the package `xcube.core.gen2`.
0.6.2.dev2
Changes in 0.6.2.dev2
- Fixed a problem in `JsonArraySchema` that occurred if a valid
  instance was `None`. A `TypeError: 'NoneType' object is not iterable`
  was raised in this case.
0.6.2.dev1
Includes xcube gen2 improvements:
- Adapted store pool config to new "cost_params" structure
- Now printing remote output and remote traceback, if any, on error
0.6.2.dev0
Changes in 0.6.2 (in development)
- The S3 data store `xcube.core.store.stores.s3.S3DataStore`
  now implements the `describe_data()` method.
  It therefore can also be used as a data store from which data is queried and read.
- The `xcube gen2` data cube generator tool has been hidden from
  the set of "official" xcube tools. It is considered an internal tool
  that is subject to change at any time until its interface has stabilized.
  Please refer to `xcube gen2 --help` for more information.
- Added `coords` property to `DatasetDescriptor` class.
  The `data_vars` property of the `DatasetDescriptor` class is now a dictionary.
- Removed function `reproject_crs_to_wgs84()` and tests (#375) because
  - it seemed to no longer be working with GDAL 3.1+;
  - there was no direct use in xcube itself;
  - xcube plans to get rid of GDAL dependencies.
- CLI tool `xcube gen2` may now also ingest non-cube datasets.
- Fixed unit tests broken by accident. (#396)
- Added new context manager `xcube.util.observe_dask_progress()` that can be used
  to observe tasks that are known to be dominated by Dask computations:

  ```python
  with observe_dask_progress('Writing dataset', 100):
      dataset.to_zarr(store)
  ```
- The xcube normalisation process, which ensures that a dataset meets the requirements
  of a cube, internally requested a lot of data, causing the process to be slow and
  expensive in terms of memory consumption. This problem was resolved by avoiding
  reading in these large amounts of data. (#392)
0.6.1
Changes in 0.6.1
All changes relate to maintenance of xcube's Python environment requirements in
`environment.yml`.
0.6.0
Changes in 0.6.0
Enhancements
- Added four new Jupyter Notebooks about xcube's new Data Store Framework in
  `examples/notebooks/datastores`.
- CLI tool `xcube io dump` now has new `--config` and `--type` options. (#370)
- New function `xcube.core.store.get_data_store()` and new class `xcube.core.store.DataStorePool`
  allow for maintaining a set of pre-configured data store instances. This will be used
  in future xcube tools that utilise multiple data stores, e.g. "xcube gen", "xcube serve". (#364)
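  A hedged sketch of the intended pattern. The instance name and the exact signature of `get_data_store()` below are assumptions inferred from this description, not a verified 0.6.0 API:

  ```python
  from xcube.core.store import get_data_store  # function introduced above

  # Look up a pre-configured data store instance by name.
  # 'my-store' is a purely illustrative instance name.
  store = get_data_store('my-store')

  # List the identifiers of datasets the store offers
  # (get_data_ids() is part of the DataStore interface per these notes).
  for data_id in store.get_data_ids():
      print(data_id)
  ```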
- Replaced the concept of `type_id` used by several `xcube.core.store.DataStore`
  methods by a more flexible `type_specifier`. Documentation is provided in
  `docs/source/storeconv.md`.

  The `DataStore` interface changed as follows:
  - class method `get_type_specifiers()` replaces `get_type_id()`;
  - new instance method `get_type_specifiers_for_data()`;
  - replaced keyword-argument in `get_data_ids()`;
  - replaced keyword-argument in `has_data()`;
  - replaced keyword-argument in `describe_data()`;
  - replaced keyword-argument in `get_search_params_schema()`;
  - replaced keyword-argument in `search_data()`;
  - replaced keyword-argument in `get_data_opener_ids()`.

  The `WritableDataStore` interface changed as follows:
  - replaced keyword-argument in `get_data_writer_ids()`.
- The JSON Schema classes in `xcube.util.jsonschema` have been extended:
  - `date` and `date-time` formats are now validated along with the rest of the schema;
  - the `JsonDateSchema` and `JsonDatetimeSchema` subclasses of `JsonStringSchema`
    have been introduced, including a non-standard extension to specify date and time limits.
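  A hedged sketch of what this enables. The `min_date`/`max_date` keywords and the `validate_instance()` method are assumptions based on the limits extension described above, not verified signatures:

  ```python
  from xcube.util.jsonschema import JsonDateSchema  # class named above

  # Assumed keyword names for the non-standard date limits extension.
  schema = JsonDateSchema(min_date='2000-01-01', max_date='2020-12-31')

  schema.validate_instance('2010-06-15')  # expected to pass
  schema.validate_instance('2021-01-01')  # expected to fail validation (out of range)
  ```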
- Extended `xcube.core.store.DataStore` docstring to include a basic convention for store
  open parameters. (#330)
- Added documentation for the use of the open parameters passed to
  `xcube.core.store.DataOpener.open_data()`.
Fixes
- `xcube serve` no longer crashes if configuration is lacking a `Styles` entry.
- `xcube gen` can now interpret `start_date` and `stop_date` from NetCDF dataset attributes.
  This is relevant for using `xcube gen` for Sentinel-2 Level 2 data products generated and
  provided by Brockmann Consult. (#352)
- Fixed both `xcube.core.dsio.open_cube()` and `open_dataset()` which failed with message
  `"ValueError: group not found at path ''"` if called with a bucket URL but no credentials given
  in case the bucket is not publicly readable. (#337)
  The fix for that issue now requires an additional `s3_kwargs` parameter when accessing datasets
  in public buckets:

  ```python
  from xcube.core.dsio import open_cube

  public_url = "https://s3.eu-central-1.amazonaws.com/xcube-examples/OLCI-SNS-RAW-CUBE-2.zarr"
  public_cube = open_cube(public_url, s3_kwargs=dict(anon=True))
  ```
- xcube now requires `s3fs >= 0.5` which implies using faster async I/O when accessing object storage.
- xcube now requires `gdal >= 3.0`. (#348)
- xcube now only requires the `matplotlib-base` package rather than `matplotlib`. (#361)
Other
- Restricted `s3fs` version in `environment.yml` in order to use a version which can handle pruned xcube datasets.
  This restriction will be removed once changes in zarr PR zarr-developers/zarr-python#650
  are merged and released. (#360)
- Added a note in the `xcube chunk` CLI help, saying that there is a possibly more efficient way
  to (re-)chunk datasets through the dedicated tool "rechunker", see https://rechunker.readthedocs.io
  (thanks to Ryan Abernathey for the hint). (#335)
- For `xcube serve` dataset configurations where `FileSystem: obs`, users must now also
  specify `Anonymous: True` for datasets in public object storage buckets. For example:

  ```yaml
  - Identifier: "OLCI-SNS-RAW-CUBE-2"
    FileSystem: "obs"
    Endpoint: "https://s3.eu-central-1.amazonaws.com"
    Path: "xcube-examples/OLCI-SNS-RAW-CUBE-2.zarr"
    Anonymous: true
    ...
  - ...
  ```
- In `environment.yml`, removed unnecessary explicit dependencies on `proj4`
  and `pyproj` and restricted `gdal` version to `>=3.0,<3.1`.
0.5.1
0.5.0
Changes in 0.5.0
New in 0.5.0
- `xcube gen2 CONFIG` will generate a cube from a data input store and a user-given cube configuration.
  It will write the resulting cube in a user-defined output store.
  - Input stores: CCIODP, CDS, SentinelHub
  - Output stores: memory, directory, S3
- `xcube serve CUBE` will now use the last path component of `CUBE` as dataset title.
- `xcube serve` can now be run with AWS credentials (#296).
  - In the form `xcube serve --config CONFIG`, a `Datasets` entry in `CONFIG`
    may now contain the two new keys `AccessKeyId: ...` and `SecretAccessKey: ...`
    given that `FileSystem: obs`.
  - In the form `xcube serve --aws-prof PROFILE CUBE`
    the cube stored in bucket with URL `CUBE` will be accessed using the
    credentials found in section `[PROFILE]` of your `~/.aws/credentials` file.
  - In the form `xcube serve --aws-env CUBE`
    the cube stored in bucket with URL `CUBE` will be accessed using the
    credentials found in the environment variables `AWS_ACCESS_KEY_ID` and
    `AWS_SECRET_ACCESS_KEY`.
Enhancements in 0.5.0
- Added possibility to specify packing of variables within the configuration of
  `xcube gen` (#269). The user may now specify different packing for variables,
  which might be useful for reducing the storage size of the datacubes.
  Currently it is only implemented for the zarr format.
  This may be done by passing the packing parameters as follows:

  ```yaml
  output_writer_params:
    packing:
      analysed_sst:
        scale_factor: 0.07324442274239326
        add_offset: -300.0
        dtype: 'uint16'
        _FillValue: 0.65535
  ```
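  For reference, this packing follows the CF scale-and-offset convention (`unpacked = packed * scale_factor + add_offset`). A minimal, illustrative NumPy round-trip using the parameters shown above:

  ```python
  import numpy as np

  scale_factor = 0.07324442274239326
  add_offset = -300.0

  sst = np.array([271.35, 290.0])  # physical values, e.g. sea surface temperature in K

  # Pack to uint16 for storage, then unpack; values are recovered
  # up to the quantisation step given by scale_factor.
  packed = np.round((sst - add_offset) / scale_factor).astype('uint16')
  unpacked = packed * scale_factor + add_offset
  ```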
- Example configurations for `xcube gen2` were added.
Fixes
- From 0.4.1: Fixed time-series performance drop (#299).
- Fixed `xcube gen` CLI tool to correctly insert time slices into an
  existing cube stored as Zarr (#317).
- When creating an ImageGeom from a dataset, correct the height if it would
  otherwise give a maximum latitude >90°.
- Disable the display of warnings in the CLI by default, only showing them if
  a `--warnings` flag is given.
- xcube has been extended by a new Data Store Framework (#307).
  It is provided by the `xcube.core.store` package.
  Its usage is currently documented only in the form of Jupyter Notebook examples,
  see `examples/store/*.ipynb`.
- During the development of the new Data Store Framework, some
  utility packages have been added:
  - `xcube.util.jsonschema` - classes that represent JSON Schemas for types null, boolean,
    number, string, object, and array. Schema instances are used for JSON validation
    and object marshalling.
  - `xcube.util.assertions` - numerous `assert_*` functions that are used for function
    parameter validation. All functions raise `ValueError` in case an assertion is not met.
  - `xcube.util.ipython` - functions that can be called for better integration of objects with
    Jupyter Notebooks.
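  A hedged sketch of the assertions utility in use. The specific function names `assert_given` and `assert_instance` are assumptions; the notes above only guarantee `assert_*` functions that raise `ValueError`:

  ```python
  from xcube.util.assertions import assert_given, assert_instance  # names assumed

  def open_slice(path: str, level: int):
      # Both assertions raise ValueError if not met, per the description above.
      assert_given(path, 'path')            # rejects None or empty values
      assert_instance(level, int, 'level')  # rejects non-int arguments
      ...
  ```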
- Fixed a regression when running "xcube serve" with cube path as parameter (#314).
- From 0.4.3: Extended `xcube serve` by reverse URL prefix option.