
Simplify dependencies, release prep for 3.1 (#221)
* Update authors, changelog

* Add Python 3.12 to the pyproject classifiers

* Bump the version to 3.1.0

* Simplify dependencies. Make statsmodels optional.

* Add note, run statsmodels tests if available.

* Remove unneeded "pass" statements, unnecessary whitespace

* Add Python 3.12 to the hatch test matrix

* Update changelog
BSchilperoort committed Sep 13, 2024
1 parent b0fcb50 commit 1a1e7a0
Showing 18 changed files with 55 additions and 89 deletions.
2 changes: 1 addition & 1 deletion .bumpversion.cfg
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 3.0.3
+current_version = 3.1.0
commit = True
tag = True

1 change: 1 addition & 0 deletions AUTHORS.rst
@@ -5,3 +5,4 @@ Authors
* Bas des Tombe - https://github.com/bdestombe
* Bart Schilperoort - https://github.com/BSchilperoort
* Karl Lapo - https://github.com/klapo
+* David Rautenberg - https://github.com/davechris
17 changes: 12 additions & 5 deletions CHANGELOG.rst
@@ -1,18 +1,25 @@

Changelog
=========
-3.0.4 (2024-08-30)
+3.1.0 (2024-09-13)
---

+Added
+
+* support for Python 3.12.
+* AP sensing .tra support, as the reference temperature sensor data by this device is only logged in .tra and not in the .xml log files.
+  added functions in io/apsensing.py to read .tra files if they are in the same directory as the .xml files.
+* more test data from AP sensing device N4386B, which also contain their .tra log files

Fixed

* device ID bug for APSensing. Device ID is N4386B instead of C320. C320 was an arbitrary name given to the wellbore by the user.

-Added
+Changed

+* the `verify_timedeltas` keyword argument is now optional when merging two single ended datasets.
+* removed `statsmodels` as a dependency. It is now optional, and only used for testing the `wls_sparse` solver.

-* more test data from AP sensing device N4386B, which also contain their .tra log files
-* AP sensing .tra support, as the reference temperature sensor data by this device is only logged in .tra and not in the .xml log files.
-  added functions in io/apsensing.py to read .tra files if they are in the same directory as the .xml files.

3.0.3 (2024-04-18)
---
2 changes: 1 addition & 1 deletion CITATION.cff
@@ -22,5 +22,5 @@ doi: "10.5281/zenodo.1410097"
license: "BSD-3-Clause"
repository-code: "https://github.com/dtscalibration/python-dts-calibration"
title: "Python distributed temperature sensing calibration"
-version: "v3.0.3"
+version: "v3.1.0"
url: "https://python-dts-calibration.readthedocs.io"
1 change: 0 additions & 1 deletion CONTRIBUTING.rst
@@ -53,7 +53,6 @@ To set up `python-dts-calibration` for local development:

pip install -e ".[dev]"


4. When you're done making changes, make sure the code follows the right style, that all tests pass, and that the docs build with the following commands::

hatch run format
4 changes: 2 additions & 2 deletions README.rst
@@ -29,9 +29,9 @@ Overview
:alt: PyPI Package latest release
:target: https://pypi.python.org/pypi/dtscalibration

-.. |commits-since| image:: https://img.shields.io/github/commits-since/dtscalibration/python-dts-calibration/v3.0.3.svg
+.. |commits-since| image:: https://img.shields.io/github/commits-since/dtscalibration/python-dts-calibration/v3.1.0.svg
:alt: Commits since latest release
-:target: https://github.com/dtscalibration/python-dts-calibration/compare/v1.1.1...main
+:target: https://github.com/dtscalibration/python-dts-calibration/compare/v3.1.0...main

.. |wheel| image:: https://img.shields.io/pypi/wheel/dtscalibration.svg
:alt: PyPI Wheel
3 changes: 1 addition & 2 deletions docs/conf.py
@@ -40,14 +40,13 @@
spelling_show_suggestions = True
spelling_lang = "en_US"


source_suffix = [".rst", ".md"]
master_doc = "index"
project = "dtscalibration"
year = str(date.today().year)
author = "Bas des Tombe and Bart Schilperoort"
copyright = f"{year}, {author}"
-version = release = "3.0.3"
+version = release = "3.1.0"

pygments_style = "trac"
templates_path = [".", sphinx_autosummary_accessors.templates_path]
15 changes: 7 additions & 8 deletions pyproject.toml
@@ -19,7 +19,7 @@ disable = true # Requires confirmation when publishing to pypi.

[project]
name = "dtscalibration"
-version = "3.0.3"
+version = "3.1.0"
description = "Load Distributed Temperature Sensing (DTS) files, calibrate the temperature and estimate its uncertainty."
readme = "README.rst"
license = "BSD-3-Clause"
@@ -48,19 +48,17 @@ classifiers = [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Utilities",
]
dependencies = [
"numpy>=1.22.4, <=2.0.1", # >= 1.22 for quantile method support in xarray. https://github.com/statsmodels/statsmodels/issues/9333
"dask",
"numpy",
"xarray[accel]",
"dask[distributed]",
"pandas",
"xarray[parallel]", # numbagg (llvmlite) is a pain to install with pip
"bottleneck", # speeds up Xarray
"flox", # speeds up Xarray
"pyyaml>=6.0.1",
"xmltodict",
"scipy",
"statsmodels",
"matplotlib",
"netCDF4>=1.6.4",
"nc-time-axis>=1.4.1" # plot dependency of xarray
@@ -128,10 +126,11 @@ build = [
features = ["dev"]

[[tool.hatch.envs.matrix_test.matrix]]
-python = ["3.9", "3.10", "3.11"]
+python = ["3.9", "3.10", "3.11", "3.12"]

[tool.hatch.envs.matrix_test.scripts]
test = ["pytest ./src/ ./tests/",] # --doctest-modules
fast-test = ["pytest ./tests/ -m \"not slow\"",]

[tool.pytest.ini_options]
testpaths = ["tests"]
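
With `statsmodels` dropped from the hard dependencies, downstream code should no longer assume it is importable. A minimal availability check, as an illustration (the `HAS_STATSMODELS` name is ours, not part of this commit)::

    import importlib.util

    # True when statsmodels is importable in the current environment;
    # find_spec only locates the package, it does not import it.
    HAS_STATSMODELS = importlib.util.find_spec("statsmodels") is not None

    if not HAS_STATSMODELS:
        print("statsmodels missing; install it with `pip install statsmodels`.")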
14 changes: 12 additions & 2 deletions src/dtscalibration/calibrate_utils.py
@@ -1,6 +1,5 @@
import numpy as np
import scipy.sparse as sp
-import statsmodels.api as sm
import xarray as xr
from scipy.sparse import linalg as ln

@@ -1673,7 +1672,6 @@ def construct_submatrices_matching_sections(x, ix_sec, hix, tix, nt, trans_att):
Zero_d_eq12 : sparse matrix
Zero in EQ1 and EQ2
"""
# contains all indices of the entire fiber that are either used for
# calibrating to reference temperature or for matching sections. Is sorted.
@@ -1988,6 +1986,9 @@ def wls_sparse(
):
"""Weighted least squares solver.
+    Note: during development this solver was compared to the statsmodels solver. To
+    enable the comparison tests again, install `statsmodels` before running pytest.
If some initial estimate x0 is known and if damp == 0, one could proceed as follows:
- Compute a residual vector r0 = b - A*x0.
- Use LSQR to solve the system A*dx = r0.
@@ -2124,6 +2125,15 @@ def wls_stats(X, y, w=1.0, calc_cov=False, x0=None, return_werr=False, verbose=F
p_cov : ndarray
The covariance of the solution.
"""
+    try:
+        import statsmodels.api as sm
+    except ModuleNotFoundError as err:
+        msg = (
+            "Statsmodels has to be installed for this function.\n"
+            "Install it with `pip install statsmodels`."
+        )
+        raise ModuleNotFoundError(msg) from err

y = np.asarray(y)
w = np.asarray(w)

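
The comparison tests referenced in the new docstring note can guard themselves the same way. A minimal sketch using `pytest.importorskip` (the test body is hypothetical, not part of this commit)::

    import pytest

    # Skip the statsmodels comparison tests entirely when statsmodels is
    # not installed; otherwise importorskip returns the imported module.
    sm = pytest.importorskip("statsmodels.api")


    def test_wls_sparse_matches_statsmodels():
        # Fit the same weighted problem with sm.WLS and with wls_sparse,
        # then compare the estimates (sketch only).
        ...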
4 changes: 0 additions & 4 deletions src/dtscalibration/calibration/section_utils.py
@@ -43,7 +43,6 @@ def validate_no_overlapping_sections(sections: dict[str, list[slice]]):
assert all_start_stop_startsort_flat == sorted(
all_start_stop_startsort_flat
), "Sections contains overlapping stretches"
-    pass


def validate_sections_definition(sections: dict[str, list[slice]]):
@@ -123,7 +122,6 @@ def validate_sections(ds: xr.Dataset, sections: dict[str, list[slice]]):
f"Better define the {k} section. You tried {vi}, "
"which is not within the x-dimension"
)
-    pass


def ufunc_per_section(
@@ -176,7 +174,6 @@ def ufunc_per_section(
TODO: Spend time on creating a slice instead of appending everything\
to a list and concatenating after.
Returns:
--------
@@ -241,7 +238,6 @@
>>> ix_loc = d.ufunc_per_section(sections, x_indices=True)
Note:
----
If `self[label]` or `self[subtract_from_label]` is a Dask array, a Dask
5 changes: 0 additions & 5 deletions src/dtscalibration/dts_accessor.py
@@ -335,7 +335,6 @@ def ufunc_per_section(
TODO: Spend time on creating a slice instead of appending everything\
to a list and concatenating after.
Returns:
--------
dict
@@ -408,7 +407,6 @@
>>> ix_loc = d.ufunc_per_section(sections=sections, x_indices=True)
Notes:
------
If `self[label]` or `self[subtract_from_label]` is a Dask array, a Dask
@@ -494,7 +492,6 @@ def calibrate_single_ended(
I(x,t) = \ln{\left(\frac{P_+(x,t)}{P_-(x,t)}\right)}
.. math::
C(t) = \ln{\left(\frac{\eta_-(t)K_-/\lambda_-^4}{\eta_+(t)K_+/\lambda_+^4}\right)}
@@ -879,7 +876,6 @@ def calibrate_double_ended(
:math:`T_\mathrm{F}` and :math:`T_\mathrm{B}` as discussed in
Section 7.2 [1]_ .
Parameters
----------
p_val : array-like, optional
@@ -1991,7 +1987,6 @@ def average_monte_carlo_single_ended(
Four types of averaging are implemented. Please see Example Notebook 16.
Parameters
----------
result : xr.Dataset
4 changes: 0 additions & 4 deletions src/dtscalibration/dts_accessor_utils.py
@@ -393,7 +393,6 @@ def get_netcdf_encoding(
TODO: Truncate precision to XML precision per data variable
Parameters
----------
zlib
@@ -905,7 +904,6 @@ def shift_double_ended(
There is no interpolation, as this would alter the accuracy.
Parameters
----------
ds : DataStore object
@@ -999,7 +997,6 @@ def suggest_cable_shift_double_ended(
anti-Stokes The bottom plot is generated that shows the two objective
functions
Parameters
----------
ds : Xarray Dataset
@@ -1229,7 +1226,6 @@
>>> ix_loc = ufunc_per_section_helper(x_coords=d.x)
Note:
----
If `dataarray` or `subtract_from_dataarray` is a Dask array, a Dask
4 changes: 1 addition & 3 deletions src/dtscalibration/io/apsensing.py
@@ -56,7 +56,7 @@ def read_apsensing_files(
load_in_memory : {'auto', True, False}
If 'auto' the Stokes data is only loaded to memory for small files
load_tra_arrays : bool
-If true and tra files available, the tra array data will imported.
+If true and tra files available, the tra array data will imported.
Current implementation is limited to in-memory reading only.
kwargs : dict-like, optional
keyword-arguments are passed to DataStore initialization
@@ -526,7 +526,6 @@ def read_single_tra_file(tra_filepath, load_tra_arrays):
- log_ratio and loss(attenuation) calculated by device
- PT100 sensor data (optional only if sensors are connected to device)
Parameters
----------
tra_filepathlist: list of str
@@ -585,7 +584,6 @@ def append_to_data_vars_structure(data_vars, data_dict_list, load_tra_arrays):
append data from .tra files to data_vars structure.
(The data_vars structure is later on used to initialize the x-array dataset).
Parameters
----------
data_vars: dictionary containing *.xml data
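
For context, a minimal usage sketch of the `.tra` support described above (the `directory` keyword mirrors the package's other readers; treat the exact call signature as an assumption)::

    from dtscalibration import read_apsensing_files

    # Read the .xml logs; with load_tra_arrays=True, matching .tra files in
    # the same directory are also imported (in-memory only), adding the
    # reference PT100 data, log_ratio, and attenuation to the dataset.
    ds = read_apsensing_files(directory="data/ap_sensing/", load_tra_arrays=True)
    print(ds)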
9 changes: 8 additions & 1 deletion src/dtscalibration/variance_helpers.py
@@ -76,7 +76,14 @@ def variance_stokes_exponential_helper(

if use_statsmodels:
    # returns the same answer with statsmodels
-    import statsmodels.api as sm
+    try:
+        import statsmodels.api as sm
+    except ModuleNotFoundError as err:
+        msg = (
+            "Statsmodels has to be installed for this function.\n"
+            "Install it with `pip install statsmodels`."
+        )
+        raise ModuleNotFoundError(msg) from err

X = sp.coo_matrix(
(data, coords), shape=(nt * n_locs, ddof), dtype=float, copy=False
3 changes: 0 additions & 3 deletions src/dtscalibration/variance_stokes.py
@@ -29,7 +29,6 @@ def variance_stokes_constant(st, sections, acquisitiontime, reshape_residuals=Tr
st_var = a * ds.st + b
where `a` and `b` are constants. Requires reference sections at
beginning and end of the fiber, to have residuals at high and low
intensity measurements.
@@ -168,7 +167,6 @@ def variance_stokes_exponential(
st_var = a * ds.st + b
where `a` and `b` are constants. Requires reference sections at
beginning and end of the fiber, to have residuals at high and low
intensity measurements.
@@ -361,7 +359,6 @@ def variance_stokes_linear(
st_var = a * ds.st + b
where `a` and `b` are constants. Requires reference sections at
beginning and end of the fiber, to have residuals at high and low
intensity measurements.
2 changes: 0 additions & 2 deletions tests/test_averaging.py
@@ -46,7 +46,6 @@ def assert_almost_equal_verbose(actual, desired, verbose=False, **kwargs):

desired2 = np.broadcast_to(desired, actual.shape)
np.testing.assert_almost_equal(actual, desired2, err_msg=m, **kwargs)
-    pass


def test_average_measurements_single_ended():
@@ -111,7 +110,6 @@ def get_section_indices(x_da, sec):
ci_avg_time_flag2=True,
ci_avg_time_isel=range(3),
)
-    pass


def test_average_measurements_double_ended():