Overview
Zarr Backend and Utilities
hdmf_zarr implements a Zarr backend for HDMF. Some of the key classes relevant for end-users are:

ZarrIO
    implements an alternative storage backend to store data using HDMF via the Zarr library.

NWBZarrIO
    uses ZarrIO to define a Zarr backend store for integration with PyNWB to simplify the use of hdmf_zarr with NWB (similar to NWBHDF5IO in PyNWB).

utils
    implements utility classes for the ZarrIO backend. For end-users the ZarrDataIO class is relevant for defining advanced I/O options for datasets.
Supported features
Write/Read of basic data types, strings and compound data types
Chunking
Compression and I/O filters
Links
Object references
Writing/loading namespaces/specifications
Iterative data write using AbstractDataChunkIterator
Parallel write with GenericDataChunkIterator (since v0.4)
Lazy load of datasets
Lazy load of datasets containing object references (since v0.4)
Known Limitations
Support for region references is not yet implemented. See also Region references for details.
The Zarr backend is currently experimental and may still change.
Attributes are stored as JSON documents in Zarr (using the DirectoryStore). As such, all attributes must be JSON serializable. The ZarrIO backend attempts to cast types to JSON serializable types as much as possible.

Currently the ZarrIO backend supports Zarr's directory-based stores DirectoryStore, NestedDirectoryStore, and TempStore. Other Zarr stores could be added, but doing so will require proper treatment of links and references for those backends, as links are not supported in Zarr itself (see zarr-python issue #389).

Exporting of HDF5 files with external links is not yet fully implemented/tested (see hdmf-zarr issue #49).
Special characters (e.g., :, <, >, ", /, \, |, ?, or *) may not be supported by all file systems (e.g., on Windows) and as such should not be used as part of the names of Datasets or Groups, as Zarr needs to create folders on the filesystem for these objects.
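Because Zarr maps each Group and Dataset to a folder, names can be validated before writing. The helper below is an illustrative sketch, not part of hdmf_zarr; it checks for the problem characters listed above using only the standard library.

```python
# Characters listed above that commonly break filesystem paths
# (notably on Windows); this check is an illustrative helper,
# not part of the hdmf_zarr API.
INVALID_CHARS = set(':<>"/\\|?*')


def is_safe_object_name(name: str) -> bool:
    """Return True if a Group/Dataset name avoids the problem characters."""
    return not (set(name) & INVALID_CHARS)


print(is_safe_object_name("electrode_table"))  # True
print(is_safe_object_name("ratio:a/b"))        # False
```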