Use the Miniconda3 package manager tool; download the installer for Linux.
(base) conda install -c conda-forge mamba
(base) mamba env create -n activestorage -f environment.yml
conda activate activestorage
pip install -e .
pytest -n 2
Supported Python versions: 3.10, 3.11, 3.12, and 3.13 (3.9 has reached end of life and is no longer tested). Fully compatible with numpy >= 2.0.0.
This package provides:

- the class ``Active``, which is a shim to NetCDF4 (and HDF5) storage via kerchunk metadata and the zarr indexer. It does not, however, use zarr for the actual read.
- The actual reads are done in the methods of ``storage.py`` or ``reductionist.py``, which are called from within ``Active.__getitem__``.
Example usage is in the file ``tests/test_harness.py``, but it's basically this simple:
active = Active(self.testfile, "data")
active.method = "mean"
result = active[0:2, 4:6, 7:9]
where ``result`` will be the mean of the appropriate slice of the hyperslab in the selected variable (``data`` in this example).
There are some (relatively obsolete) documents from our exploration of zarr internals in ``docs4understanding``, but they are not germane to the usage of the ``Active`` class.
PyActiveStorage is designed to interact with various storage backends.
The storage backend is specified using the ``storage_type`` argument to the ``Active`` constructor.
There are two main integration points for a storage backend:
#. Load netCDF metadata
#. Perform a reduction on a storage chunk (the ``reduce_chunk`` function)
The default storage backend is a local file.
To use a local file, use a ``storage_type`` of ``None``, which is its default value.
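For example, a minimal sketch (the file path and variable name are placeholders) of reading from a local POSIX file:

# storage_type defaults to None: a local file, with reductions done in-process via NumPy
active = Active("/path/to/local_file.nc", "data")
active.method = "mean"
result = active[0:2, 4:6, 7:9]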
netCDF metadata is loaded using the netCDF4 library.
The chunk reductions are implemented in ``activestorage.storage`` using NumPy.
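As an illustration only (this is not the real ``reduce_chunk`` signature), a reduction over a single uncompressed chunk of a local file amounts to reading the chunk's bytes and applying a NumPy operation:

import numpy as np

def reduce_local_chunk(path, offset, size, dtype, shape, operation=np.mean):
    """Sketch: read one raw, uncompressed chunk from a local file and reduce it."""
    with open(path, "rb") as f:
        f.seek(offset)
        raw = f.read(size)
    chunk = np.frombuffer(raw, dtype=dtype).reshape(shape)
    return operation(chunk)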
We now support ``Active`` runs with netCDF4 files on S3, added in PR 89.
To achieve this we integrate with Reductionist, an S3 Active Storage Server.
Reductionist is typically deployed "near" to an S3-compatible object store and provides an API to perform numerical reductions on object data.
To use Reductionist, use a ``storage_type`` of ``s3``.
To load metadata, netCDF files are opened using ``s3fs``, with ``h5netcdf`` used to put the open file (which is nothing more than a memory view of the netCDF file) into an HDF5/netCDF-like object format.
Chunk reductions are implemented in ``activestorage.reductionist``, with each operation resulting in an API request to the Reductionist server.
From there on, ``Active`` works as normal.
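A minimal usage sketch (the bucket URI is a placeholder, and we assume the S3 credentials/endpoint and the Reductionist URL are already configured for your deployment):

# storage_type="s3": metadata read via s3fs/h5netcdf, reduction performed by Reductionist
active = Active("s3://my-bucket/test.nc", "data", storage_type="s3")
active.method = "mean"
result = active[0:2, 4:6, 7:9]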
We have written unit and integration tests, and we measure coverage with Codecov (see PyActiveStorage test coverage; current coverage is 87%). Our Continuous Integration (CI) testing runs on GitHub Actions, and nightly runs of the entire test suite let us detect issues introduced by updated versions of our dependencies. The GitHub Actions (GA) tests also exercise the storage types we currently support; in particular, dedicated tests run Active Storage against S3 storage by creating and running a MinIO client from within the test and shipping data to it.
Of particular interest are performance tests: we have started measuring system run time and resident memory (RES) using ``pytest-monitor`` inside the GA CI environment. So far, performance testing has shown that HDF5 chunking is paramount for performance: a large number of small HDF5 chunks leads to very long run times and high memory consumption, whereas larger HDF5 chunks significantly improve performance. As an example, running PyActiveStorage on an uncompressed netCDF4 file of about 1GB on disk (500x500x500 data elements, float64 each), with favourable HDF5 chunking (e.g. 75 data elements per chunk along each dimensional axis), takes on the order of 0.1s for local POSIX storage and 0.3s when the file is on an S3 server; the same run needs only roughly 100MB of RES memory for either storage option (see test result). The same types of runs with much smaller HDF5 chunks (e.g. 20x smaller) need on the order of 300x more time to complete and a few GB of RES memory.
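For reference, a sketch along these lines (file and variable names are placeholders) creates an uncompressed 500x500x500 float64 netCDF4 file with (75, 75, 75) HDF5 chunks using the netCDF4 library:

import numpy as np
from netCDF4 import Dataset

with Dataset("test_chunked.nc", "w", format="NETCDF4") as ds:
    for dim in ("x", "y", "z"):
        ds.createDimension(dim, 500)
    # chunksizes sets the HDF5 chunk shape; no compression, no filters
    var = ds.createVariable("data", "f8", ("x", "y", "z"), chunksizes=(75, 75, 75))
    var[:] = np.random.random((500, 500, 500))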
- netCDF4 1.1GB file (on disk, local)
- no compression, no filters
- data shape = (500, 500, 500)
- chunks = (75, 75, 75)
- baseline (only the test module imports and fixtures): 30 instances = 101-102M max RES
- with kerchunking: 30 instances = 103M max RES
- full ``Active`` run: 30 tests = 107-108M max RES

So kerchunking adds only 1-2M of RES memory, and ``Active`` in total only ~7M over the baseline!
- netCDF4 1.1GB file (on disk, local)
- no compression, no filters
- data shape = (500, 500, 500)
- chunks = (25, 25, 25)
- with kerchunking: 30 instances = 111M max RES
- full ``Active`` run: 30 tests = 114-115M max RES

Kerchunking needs 9MB and Active v1 in total 13-14M of max RES memory over the baseline.
- netCDF4 1.1GB file (on disk, local)
- no compression, no filters
- data shape = (500, 500, 500)
- chunks = (8, 8, 8)
- with kerchunking: 30 instances = 306M max RES
- full ``Active`` run: 30 tests = 307M max RES

Kerchunking needs ~200MB over the baseline, about the same as ``Active`` in total: kerchunking is memory-dominant in the case of tiny HDF5 chunks.
- HDF5 chunking is make or break
- Memory appears to grow exponentially, of the form

  F(M) = M0 + C x M^b

  where ``M0`` is the startup memory (module imports, test fixtures, etc. - here, about 100MB RES), ``C`` is a constant (probably close to 1), and ``b`` is the factor by which chunks decrease in size (along one axis, e.g. 3 here).
See the available Sphinx documentation. To build the documentation locally, run:
sphinx-build -Ea doc doc/build
We monitor test coverage via the Codecov app and employ a bot that posts a comment directly to each PR, showing the coverage changes introduced by the proposed code changes.