
writing datetime64 in netCDF may produce badly formatted or unreadable files #9488

Closed
jfpiolle opened this issue Sep 12, 2024 · 13 comments · Fixed by #9497 · May be fixed by #9498

Comments

@jfpiolle

What happened?

When writing datetime64 and NaT values to netCDF (using the h5netcdf engine) with a user-defined dtype and _FillValue (to comply with some format requirements), I get different results depending on the precision of the datetime64 values.

Here I'm writing datetime64 with millisecond precision:

import xarray as xr
import numpy as np

# writing datetime64 with milliseconds
times = np.ma.array([
    np.datetime64('2010-01-01 12:00:00.005'), np.datetime64('NaT')]).astype('datetime64[ns]')

da = xr.DataArray(
    name='time',
    data=times,
    dims=('time',))
da.encoding['units'] = "seconds since 2010-01-01 00:00:00.000"
da.encoding['_FillValue'] = 1e20
da.encoding['dtype'] = np.float64
ds = xr.Dataset(coords={'time': da})
ds.to_netcdf('milliseconds.nc', engine='h5netcdf')

new_ds = xr.open_dataset('milliseconds.nc')
print(new_ds.time)

The above works fine, and fill values are correctly written, as shown by the CDL output:

netcdf milliseconds {
dimensions:
	time = 2 ;
variables:
	double time(time) ;
		time:_FillValue = 1.e+20 ;
		string time:units = "seconds since 2010-01-01" ;
		string time:calendar = "proleptic_gregorian" ;
data:

 time = 43200.005, _ ;
}

However, if I instead write datetime64 values rounded to seconds, the output is wrong:

# writing datetime64 with seconds
times = np.ma.array([
    np.datetime64('2010-01-01 12:00:00'), np.datetime64('NaT')]).astype('datetime64[ns]')

da = xr.DataArray(
    name='time',
    data=times,
    dims=('time',))
da.encoding['units'] = "seconds since 2010-01-01 00:00:00.000"
da.encoding['_FillValue'] = 1e20
da.encoding['dtype'] = np.float64
ds = xr.Dataset(coords={'time': da})
ds.to_netcdf('seconds.nc', engine='h5netcdf')

new_ds = xr.open_dataset('seconds.nc')

The CDL output is as follows (note that -9.22337203685478e+18 is written in the data instead of the required 1e20 _FillValue):

netcdf seconds {
dimensions:
	time = 2 ;
variables:
	double time(time) ;
		time:_FillValue = 1.e+20 ;
		string time:units = "seconds since 2010-01-01" ;
		string time:calendar = "proleptic_gregorian" ;
data:

 time = 43200, -9.22337203685478e+18 ;
}
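
For context (my own observation, not part of the original report): the bad value matches the int64 representation of NaT cast to float64, which suggests the fill-value substitution never happened. A minimal check:

import numpy as np

# NaT is stored internally as the minimum int64 value; cast to float64 it
# becomes -9.223372036854776e+18, matching the value in the CDL dump above.
print(np.datetime64('NaT').astype('datetime64[s]').astype(np.int64))  # -9223372036854775808
print(float(np.iinfo(np.int64).min))                                  # -9.223372036854776e+18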

As a result, attempting to read the resulting file fails:

Traceback (most recent call last):
  File "conversion.pyx", line 228, in pandas._libs.tslibs.conversion.cast_from_unit
OverflowError: Python int too large to convert to C long

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "timedeltas.pyx", line 377, in pandas._libs.tslibs.timedeltas._maybe_cast_from_unit
  File "conversion.pyx", line 230, in pandas._libs.tslibs.conversion.cast_from_unit
pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: cannot convert input -9.223372036854776e+18 with the unit 's'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/coding/times.py", line 326, in decode_cf_datetime
    dates = _decode_datetime_with_pandas(flat_num_dates, units, calendar)
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/coding/times.py", line 272, in _decode_datetime_with_pandas
    pd.to_timedelta(flat_num_dates.min(), time_units) + ref_date
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/pandas/core/tools/timedeltas.py", line 223, in to_timedelta
    return _coerce_scalar_to_timedelta_type(arg, unit=unit, errors=errors)
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/pandas/core/tools/timedeltas.py", line 233, in _coerce_scalar_to_timedelta_type
    result = Timedelta(r, unit)
  File "timedeltas.pyx", line 1896, in pandas._libs.tslibs.timedeltas.Timedelta.__new__
  File "timedeltas.pyx", line 356, in pandas._libs.tslibs.timedeltas.convert_to_timedelta64
  File "timedeltas.pyx", line 379, in pandas._libs.tslibs.timedeltas._maybe_cast_from_unit
pandas._libs.tslibs.np_datetime.OutOfBoundsTimedelta: Cannot cast -9.223372036854776e+18 from s to 'ns' without overflow.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/coding/times.py", line 218, in _decode_cf_datetime_dtype
    result = decode_cf_datetime(example_value, units, calendar, use_cftime)
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/coding/times.py", line 328, in decode_cf_datetime
    dates = _decode_datetime_with_cftime(
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/coding/times.py", line 242, in _decode_datetime_with_cftime
    cftime.num2date(num_dates, units, calendar, only_use_cftime_datetimes=True)
  File "src/cftime/_cftime.pyx", line 627, in cftime._cftime.num2date
  File "src/cftime/_cftime.pyx", line 421, in cftime._cftime.cast_to_int
OverflowError: time values outside range of 64 bit signed integers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/conventions.py", line 450, in decode_cf_variables
    new_vars[k] = decode_cf_variable(
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/conventions.py", line 291, in decode_cf_variable
    var = times.CFDatetimeCoder(use_cftime=use_cftime).decode(var, name=name)
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/coding/times.py", line 992, in decode
    dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/coding/times.py", line 228, in _decode_cf_datetime_dtype
    raise ValueError(msg)
ValueError: unable to decode time units 'seconds since 2010-01-01' with "calendar 'proleptic_gregorian'". Try opening your dataset with decode_times=False or installing cftime if it is not installed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/jfpiolle/git/felyx_processor/test_time.py", line 39, in <module>
    new_ds = xr.open_dataset('seconds.nc')
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/backends/api.py", line 588, in open_dataset
    backend_ds = backend.open_dataset(
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/backends/h5netcdf_.py", line 418, in open_dataset
    ds = store_entrypoint.open_dataset(
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/backends/store.py", line 46, in open_dataset
    vars, attrs, coord_names = conventions.decode_cf_variables(
  File "/home/jfpiolle/mambaforge/envs/poetry/lib/python3.10/site-packages/xarray/conventions.py", line 461, in decode_cf_variables
    raise type(e)(f"Failed to decode variable {k!r}: {e}") from e
ValueError: Failed to decode variable 'time': unable to decode time units 'seconds since 2010-01-01' with "calendar 'proleptic_gregorian'". Try opening your dataset with decode_times=False or installing cftime if it is not installed.
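
(Side note, not in the original report: following the error message's suggestion, the raw on-disk values can still be inspected without time decoding.)

import xarray as xr

# Skip CF time decoding so the file opens despite the bad value.
raw = xr.open_dataset('seconds.nc', decode_times=False)
print(raw.time.values)  # shows 43200.0 and the spurious -9.223372036854776e+18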

What did you expect to happen?

The content of the output in the second example (with values rounded to seconds) should be similar to the first one (minus the precision of the values, of course), since the same units, _FillValue and variable dtype are used.

I did not encounter this issue with earlier versions of xarray.

Minimal Complete Verifiable Example

See the issue description above.

MVCE confirmation

  • Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • Complete example — the example is self-contained, including all data and the text of any traceback.
  • Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • New issue — a search of GitHub Issues suggests this is not a duplicate.
  • Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

No response

Environment

Using:

python                    3.10.14         hd12c33a_0_cpython    conda-forge
xarray                    2024.7.0                 pypi_0    pypi
numpy                     2.1.1                    pypi_0    pypi
h5netcdf                  1.3.0                    pypi_0    pypi
h5py                      3.11.0                   pypi_0    pypi
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-150-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: fr_FR.UTF-8
LOCALE: ('fr_FR', 'UTF-8')
libhdf5: 1.14.2
libnetcdf: None

xarray: 2024.7.0
pandas: 2.2.2
numpy: 2.1.1
scipy: 1.14.1
netCDF4: None
pydap: None
h5netcdf: 1.3.0
h5py: 3.11.0
zarr: None
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: 2024.8.2
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.9.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 74.1.2
pip: 24.2
conda: None
pytest: 6.2.5
mypy: None
IPython: None
sphinx: None

jfpiolle added the bug and needs triage labels Sep 12, 2024
jfpiolle changed the title from "wrting datetime64 in netCDF may produce unreable files" to "wrting datetime64 in netCDF may produce badly formatted or unreadable files" Sep 12, 2024
@kmuehlbauer
Contributor

@jfpiolle

Thanks for the report. We've recently reworked the CF masking. Is it possible to check whether the latest version (released yesterday) still has this issue?

The other thing is that nanosecond precision isn't possible with floating-point on-disk values; see also #7817. Internally we use int64, which nicely fits NaT with the NetCDF int64 default _FillValue.
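
For illustration, a rough sketch of such an int64 on-disk encoding for the example above (my own sketch, not taken from this comment; the explicit _FillValue choice is an assumption):

import numpy as np
import xarray as xr

# Encode times as int64 seconds with an integer _FillValue, so NaT maps onto
# an integer sentinel instead of a float.
times = np.array(['2010-01-01T12:00:00', 'NaT'], dtype='datetime64[ns]')
da = xr.DataArray(times, name='time', dims=('time',))
da.encoding['units'] = 'seconds since 2010-01-01 00:00:00'
da.encoding['dtype'] = np.int64
da.encoding['_FillValue'] = np.iinfo(np.int64).min
xr.Dataset(coords={'time': da}).to_netcdf('seconds_int64.nc', engine='h5netcdf')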

@jfpiolle
Author

@kmuehlbauer thanks for checking. I tried with your latest version (xarray 2024.9.0) and get the same failure. Also, ignore the conversion to nanoseconds in my example; I was playing around (in particular because of some warning from pandas), but in fact I get the same behaviour whether or not I convert to nanoseconds.

@kmuehlbauer
Contributor

Ok, @jfpiolle, thanks for testing this out.

If it is not too much effort for you, are you able to find a working version in the past? And if there is one, are you familiar with git bisect?

I'll definitely have a look into this in the next few days.

@jfpiolle
Author

@kmuehlbauer I am not familiar with git bisect; however, I can confirm it works as expected with xarray 2023.12.0.

@kmuehlbauer
Contributor

@jfpiolle Thanks for this information. It's really useful.

@kmuehlbauer
Contributor

@jfpiolle I think I've found the root cause (or at least the point of failure).

The following was added in #8575 to improve the handling of chunked times (cc @spencerkclark). I'm not sure if I'm missing something, but this cast should not be at this position in the code. I've tested this, and removing these two lines makes the above example work.

if dtype is not None:
    num = _cast_to_dtype_if_safe(num, dtype)

@jfpiolle Are you able to check whether the created file conforms to your expectations if you remove these two lines? I'm sure we can work out a solution, but it can be tricky.

@spencerkclark This is another one of those tricky points we have to put on the list. Maybe you see a quick fix hiding around the corner?

@spencerkclark
Member

Thanks @kmuehlbauer—I’ll try and take a closer look over the weekend.

@jfpiolle
Author

@kmuehlbauer I confirm that if I remove lines 805-806 it works again, and the content of the created files is correct!

@kmuehlbauer
Contributor

I’ll try and take a closer look over the weekend.

Thanks @spencerkclark, much appreciated, as I can't immediately see in which cases these lines are needed.

kmuehlbauer changed the title from "wrting datetime64 in netCDF may produce badly formatted or unreadable files" to "writing datetime64 in netCDF may produce badly formatted or unreadable files" Sep 14, 2024
spencerkclark added a commit to spencerkclark/xarray that referenced this issue Sep 14, 2024
spencerkclark added a commit to spencerkclark/xarray that referenced this issue Sep 14, 2024
@spencerkclark
Member

See #9497 for a quick fix for this issue, though I think you are right @kmuehlbauer that we should rethink the casting in the lines you mentioned.

The motivation behind adding the casting in #8575 was to:

  1. Enable raising an error in the event that, for chunked arrays, an integer dtype was requested but floating-point values were required (since we cannot modify the encoding units as we can for times in memory).
  2. Enable raising an error in the event that the requested dtype encoding would lead to overflow (xr.to_netcdf() alters time dimension #8542).

In hindsight, I think there is a better way to address (1)—essentially we add a branch where we raise instead of modifying the units if allow_units_modification is False. I believe this would be more robust than the current approach, which lets some issues slip through the cracks. It's a little tricky since we catch so many exceptions in that part of the code, but I'm sure we can find a way to ensure we raise an exception that gets through.

With regard to (2), we could decide it is enough of an edge case that we don't try to actively guard against it; for example in principle you could also try to encode ordinary int64 values as int16 values and experience silent overflow, and we don't currently guard against that. As you note, if we remove the casting it would also help address #9134.
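
As a quick illustration of that int64-to-int16 scenario (my own example, not from the comment above):

import numpy as np

# astype is an unchecked cast, so out-of-range values wrap around silently.
values = np.array([40000, 70000], dtype=np.int64)
print(values.astype(np.int16))  # [-25536   4464], silent two's-complement wraparound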

@kmuehlbauer
Contributor

@spencerkclark Thanks for taking the time to give us this enlightening ( ❤️ 💯 ) description of the different problems and their possible solutions. Maybe it would be useful to discuss ways toward a more permanent solution in one of the next meetings?

spencerkclark removed the needs triage label Sep 15, 2024
@spencerkclark
Member

Sure thing @kmuehlbauer—I'd be happy to discuss these broader issues in an upcoming meeting. I could try to make it to the next one—it looks like that would be on 9/25?

I went ahead and put together a sketch of the changes I proposed to eliminate the casting step in #9498, just to give a more concrete sense of what that would look like.

@kmuehlbauer
Contributor

I could try to make it to the next one—it looks like that would be on 9/25?

Great, I've missed the last few meetings, so this is a good occasion to catch up.

I went ahead and put together a sketch of the changes I proposed to eliminate the casting step in #9498, just to give a more concrete sense for what that would look like.

I haven't had a look yet; I will in the next few days.
