## Changes in 0.6.0
### Enhancements
- Added four new Jupyter Notebooks about xcube's new Data Store Framework in
  `examples/notebooks/datastores`.
- CLI tool `xcube io dump` now has new `--config` and `--type` options. (#370)
- New function `xcube.core.store.get_data_store()` and new class `xcube.core.store.DataStorePool`
  allow for maintaining a set of pre-configured data store instances. This will be used
  in future xcube tools that utilise multiple data stores, e.g. `xcube gen`, `xcube serve`. (#364)
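  The pool concept — keeping named, pre-configured stores and creating each instance
  lazily on first use so several tools can share it — can be illustrated with a minimal,
  hypothetical stand-in. All names below except `DataStorePool` itself are illustrative;
  this is not xcube's actual implementation:

  ```python
  # Illustrative sketch of the idea behind xcube.core.store.DataStorePool.
  # The class and method names here are hypothetical stand-ins.

  class StorePool:
      """Keeps named, pre-configured store instances for reuse."""

      def __init__(self):
          self._configs = {}    # store name -> configuration dict
          self._instances = {}  # store name -> instantiated store

      def add_store_config(self, name, config):
          self._configs[name] = config

      def get_store(self, name):
          # Instantiate lazily and cache, so multiple tools share
          # one instance per configuration.
          if name not in self._instances:
              config = self._configs[name]
              # A real pool would create an actual store object here;
              # a plain dict stands in for it in this sketch.
              self._instances[name] = dict(config)
          return self._instances[name]


  pool = StorePool()
  pool.add_store_config("my-s3-store", {"store_id": "s3", "root": "my-bucket"})
  store = pool.get_store("my-s3-store")
  assert store is pool.get_store("my-s3-store")  # same cached instance
  ```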
- Replaced the concept of `type_id` used by several `xcube.core.store.DataStore`
  methods by a more flexible `type_specifier`. Documentation is provided in
  `docs/source/storeconv.md`.

  The `DataStore` interface changed as follows:
  - class method `get_type_specifiers()` replaces `get_type_id()`;
  - new instance method `get_type_specifiers_for_data()`;
  - replaced keyword-argument in `get_data_ids()`;
  - replaced keyword-argument in `has_data()`;
  - replaced keyword-argument in `describe_data()`;
  - replaced keyword-argument in `get_search_params_schema()`;
  - replaced keyword-argument in `search_data()`;
  - replaced keyword-argument in `get_data_opener_ids()`.

  The `WritableDataStore` interface changed as follows:
  - replaced keyword-argument in `get_data_writer_ids()`.
- The JSON Schema classes in `xcube.util.jsonschema` have been extended:
  - `date` and `date-time` formats are now validated along with the rest of the schema;
  - the `JsonDateSchema` and `JsonDatetimeSchema` subclasses of `JsonStringSchema`
    have been introduced, including a non-standard extension to specify date and time limits.
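  The idea behind validating a `date`-format string together with such limits can be
  sketched in plain Python. This is a conceptual stand-in using only the standard
  library, not xcube's `JsonDateSchema` implementation; the function name and
  parameters are hypothetical:

  ```python
  # Conceptual sketch: validate an ISO "date"-format string and then
  # check optional (non-standard) minimum/maximum date limits.
  import datetime

  def validate_date(value, min_date=None, max_date=None):
      """Return the parsed date, or raise ValueError if invalid or out of range."""
      try:
          d = datetime.date.fromisoformat(value)
      except ValueError:
          raise ValueError(f"{value!r} is not a valid 'date'-format string")
      if min_date is not None and d < datetime.date.fromisoformat(min_date):
          raise ValueError(f"{value!r} is before the minimum date {min_date}")
      if max_date is not None and d > datetime.date.fromisoformat(max_date):
          raise ValueError(f"{value!r} is after the maximum date {max_date}")
      return d

  validate_date("2020-07-01", min_date="2020-01-01", max_date="2020-12-31")
  ```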
- Extended `xcube.core.store.DataStore` docstring to include a basic convention for store
  open parameters. (#330)
- Added documentation for the use of the open parameters passed to
  `xcube.core.store.DataOpener.open_data()`.
### Fixes
- `xcube serve` no longer crashes if the configuration lacks a `Styles` entry.
- `xcube gen` can now interpret `start_date` and `stop_date` from NetCDF dataset attributes.
  This is relevant for using `xcube gen` for Sentinel-2 Level 2 data products generated and
  provided by Brockmann Consult. (#352)
- Fixed both `xcube.core.dsio.open_cube()` and `open_dataset()`, which failed with the message
  `ValueError: group not found at path ''` if called with a bucket URL but no credentials given
  in case the bucket is not publicly readable. (#337)
  The fix for that issue now requires an additional `s3_kwargs` parameter when accessing datasets
  in public buckets:

  ```python
  from xcube.core.dsio import open_cube

  public_url = "https://s3.eu-central-1.amazonaws.com/xcube-examples/OLCI-SNS-RAW-CUBE-2.zarr"
  public_cube = open_cube(public_url, s3_kwargs=dict(anon=True))
  ```
- xcube now requires `s3fs >= 0.5`, which implies faster async I/O when accessing object storage.
- xcube now requires `gdal >= 3.0`. (#348)
- xcube now only requires the `matplotlib-base` package rather than `matplotlib`. (#361)
### Other
- Restricted the `s3fs` version in `environment.yml` in order to use a version which can handle
  pruned xcube datasets.
  This restriction will be removed once changes in zarr PR zarr-developers/zarr-python#650
  are merged and released. (#360)
- Added a note in the `xcube chunk` CLI help, saying that there is a possibly more efficient way
  to (re-)chunk datasets through the dedicated tool "rechunker", see https://rechunker.readthedocs.io
  (thanks to Ryan Abernathey for the hint). (#335)
- For `xcube serve` dataset configurations where `FileSystem: obs`, users must now also
  specify `Anonymous: True` for datasets in public object storage buckets. For example:

  ```yaml
  - Identifier: "OLCI-SNS-RAW-CUBE-2"
    FileSystem: "obs"
    Endpoint: "https://s3.eu-central-1.amazonaws.com"
    Path: "xcube-examples/OLCI-SNS-RAW-CUBE-2.zarr"
    Anonymous: True
    ...
  - ...
  ```
- In `environment.yml`, removed unnecessary explicit dependencies on `proj4`
  and `pyproj` and restricted the `gdal` version to `>=3.0,<3.1`.