Overview

Zarr Backend and Utilities

hdmf_zarr implements a Zarr backend for HDMF. Some of the key classes relevant for end-users are:

  • ZarrIO implements an alternative HDMF storage backend that stores data via the Zarr library.

  • NWBZarrIO uses ZarrIO to define a Zarr backend store for integration with PyNWB, simplifying the use of hdmf_zarr with NWB (similar to NWBHDF5IO in PyNWB).

  • utils implements utility classes for the ZarrIO backend. For end-users, the ZarrDataIO class is relevant for defining advanced I/O options for datasets.

Supported features

  • Write/read of basic data types, strings, and compound data types

  • Chunking

  • Compression and I/O filters

  • Links

  • Object references

  • Writing/loading namespaces/specifications

  • Iterative data write using AbstractDataChunkIterator

Known Limitations

  • Support for region references is not yet implemented. See also Region references for details.

  • The Zarr backend is currently experimental and may still change.

  • Attributes are stored as JSON documents in Zarr (using the DirectoryStore). As such, all attributes must be JSON serializable. The ZarrIO backend attempts to cast values to JSON-serializable types where possible.

  • Currently the ZarrIO backend uses Zarr’s DirectoryStore only. Other Zarr stores could be added, but this will require proper treatment of links and references for those stores, as links are not natively supported in Zarr (see zarr-python issue #389).

  • Export of HDF5 files with external links is not yet fully implemented/tested (see hdmf-zarr issue #49).

  • Object references are currently always resolved on read (as are links) rather than being loaded lazily (see hdmf-zarr issue #50).

  • Special characters (e.g., :, <, >, ", /, \, |, ?, or *) may not be supported by all file systems (e.g., on Windows) and should therefore not be used in the names of Datasets or Groups, because Zarr creates a folder on the filesystem for each of these objects.
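Two of the limitations above lend themselves to quick pre-flight checks using only the standard library. The helpers below are hypothetical illustrations, not part of hdmf_zarr: one tests whether a value can be stored directly as a JSON attribute, the other whether an object name avoids the problematic filesystem characters listed above.

```python
import json

# Hypothetical helper: can `value` be stored as-is in a Zarr attribute
# (i.e., serialized into a JSON document)?
def is_json_serializable(value):
    try:
        json.dumps(value)
        return True
    except (TypeError, ValueError):
        return False

# Hypothetical helper: does `name` avoid characters that are unsafe in
# folder names on some filesystems? (Zarr creates a folder per object.)
INVALID_NAME_CHARS = set(':<>"/\\|?*')

def is_safe_object_name(name):
    return len(name) > 0 and not (set(name) & INVALID_NAME_CHARS)

print(is_json_serializable({"unit": "seconds", "rate": 30.0}))  # True
print(is_json_serializable(b"\x00\x01"))  # False: raw bytes need casting first
print(is_safe_object_name("trial_001"))   # True
print(is_safe_object_name("ratio:a/b"))   # False: contains ':' and '/'
```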