Restructuring the package into submodules + major refactorings (#355)
* Re-organize files into submodules, remove nm_ prefix from submodules

* Create "filter" module, include KalmanFilter external code, drop FilterPy dependency

* Refactor __init__.py files

* Drop hatch, statsmodels deps, cleanup pyproject.toml

* Fix circular import bug

* refactor RawDataGenerator

* fix filter imports

* add custom keyboard listener, drop pynput dependency
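
  The custom listener that replaced pynput is not shown in this excerpt. As a rough sketch of the pattern only (the class and its API are hypothetical, not the package's actual implementation), a minimal listener can poll a character stream on a background thread and fire a callback per key:

  ```python
  import io
  import threading

  class KeyboardListener:
      """Hypothetical sketch: poll a character stream, fire a callback per key.

      A real listener would read raw keypresses from the terminal; here the
      stream is injected so the behavior is easy to test.
      """

      def __init__(self, stream, on_press):
          self._stream = stream
          self._on_press = on_press
          self._thread = threading.Thread(target=self._run, daemon=True)

      def start(self):
          self._thread.start()

      def join(self, timeout=None):
          self._thread.join(timeout)

      def _run(self):
          while True:
              ch = self._stream.read(1)
              if not ch:  # end of stream
                  break
              self._on_press(ch)

  pressed = []
  listener = KeyboardListener(io.StringIO("q"), pressed.append)
  listener.start()
  listener.join()
  ```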

* Create ValidationError factory function
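
  A factory function for validation errors typically centralizes message formatting instead of raising ad-hoc exceptions at every call site. A minimal sketch (the function name and message format are illustrative, not the package's actual code):

  ```python
  def create_validation_error(field: str, value, reason: str) -> ValueError:
      """Hypothetical factory: build a uniformly formatted validation error."""
      return ValueError(f"Invalid value {value!r} for '{field}': {reason}")

  # The caller raises the returned exception where the invalid value is found:
  err = create_validation_error("sampling_rate_features_hz", -10, "must be positive")
  ```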

* Change "nm_channels" to "channels", fix most tests

* Change scikit-image.measure.label for scipy equivalent, drop scikit-image dependency
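
  The scikit-image-to-SciPy swap can be sketched as below. Note the defaults differ: `skimage.measure.label` uses full (8-)connectivity for 2D input, while `scipy.ndimage.label` defaults to orthogonal (4-)connectivity, so a full structuring element is passed to keep the results equivalent.

  ```python
  import numpy as np
  from scipy import ndimage

  mask = np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 1, 1]])

  # Equivalent of skimage.measure.label(mask, return_num=True):
  structure = np.ones((3, 3), dtype=int)  # 8-connectivity, matching skimage's default
  labels, num_components = ndimage.label(mask, structure=structure)
  # num_components == 2: one component per row of ones
  ```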

* simplify __init__.py files and ignore specific Ruff errors for these files

* Renamed settings entries so that all feature settings end in "_settings"

* Fix tests and examples, change timestamp calculation for RawDataGenerator
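
  The changed timestamp calculation for `RawDataGenerator` is not visible in the excerpt; the usual approach (a sketch with hypothetical names, not the package's actual code) derives each sample's time from its index and the sampling frequency:

  ```python
  def batch_timestamps(start_sample: int, n_samples: int, sfreq: float) -> list[float]:
      """Hypothetical sketch: timestamps in seconds for one generated batch."""
      return [(start_sample + i) / sfreq for i in range(n_samples)]

  # Batch starting at sample 1000 of a 1 kHz stream:
  ts = batch_timestamps(start_sample=1000, n_samples=4, sfreq=1000.0)
  # ts == [1.0, 1.001, 1.002, 1.003]
  ```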

* add number of peaks features to sharpwaves (#357)

* Fix rebase problems

* Move `database` module into `utils` subpackage

* fix database import

* fix timing test

* - Bump package version to 0.06dev
- Bump Python version to 3.11
- Bump pybispectra version to 1.2.0
- Fix testing workflow

---------

Co-authored-by: timonmerk <[email protected]>
3 people authored Sep 19, 2024
1 parent bd06edc commit cfa030c
Showing 87 changed files with 3,832 additions and 2,543 deletions.
48 changes: 27 additions & 21 deletions .github/workflows/tests.yml
@@ -3,21 +3,21 @@ on:
push:
branches:
- main
- '**'
- "**"
paths-ignore:
- 'docs/**'
- '*.md'
- '*.rst'
- '*.txt'
- "docs/**"
- "*.md"
- "*.rst"
- "*.txt"
pull_request:
branches:
- main
- '*.x'
- "*.x"
paths-ignore:
- 'docs/**'
- '*.md'
- '*.rst'
- '*.txt'
- "docs/**"
- "*.md"
- "*.rst"
- "*.txt"
jobs:
tests:
name: ${{ matrix.platform.name }} Python ${{ matrix.python }}
@@ -33,21 +33,27 @@ jobs:
- os: windows-latest
name: Windows
python:
- '3.10'
- "3.11"
- "3.12"
steps:
- uses: actions/checkout@v4
- name: Install and cache Linux packages
if: ${{ runner.os == 'Linux' }}
uses: tecolicom/actions-use-apt-tools@v1
uses: awalsh128/cache-apt-pkgs-action@latest
with:
tools: binutils qtbase5-dev qt5-qmake libpugixml1v5
- name: Set up Python with uv
uses: drivendataorg/setup-python-uv-action@v1
packages: binutils qtbase5-dev qt5-qmake libpugixml1v5
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
python-version: ${{ matrix.python }}
cache: packages
cache-dependency-path: pyproject.toml
- name: Install test dependencies
run: uv pip install .[test]
version: "0.4.12"
enable-cache: true
cache-dependency-glob: "**/pyproject.toml"
- name: Install Python and dependencies
run: |
uv python install ${{ matrix.python }}
uv venv
uv pip install .[test]
- name: Run tests
run: pytest -n auto tests/
run: |
${{ (runner.os == 'Windows' && 'echo "Activating environment" && .venv\Scripts\activate') || 'echo "Activating environment" && source .venv/bin/activate' }}
pytest -n auto tests/
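
For reference, the new CI steps map to roughly these local commands on Linux/macOS (a sketch assuming uv is installed; the workflow above pins uv 0.4.12 and tests Python 3.11 and 3.12):

```shell
# Local equivalent of the workflow's test job (Unix shell)
uv python install 3.12     # any version from the test matrix
uv venv                    # creates .venv
source .venv/bin/activate
uv pip install .[test]     # install the package plus test extras
pytest -n auto tests/
```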
12 changes: 6 additions & 6 deletions docs/source/usage.rst
@@ -103,22 +103,22 @@ The following possible parametrization options are available:
- no rereferencing being used for the particular channel
- *none*

The **nm_channels** can either be created as a *.tsv* file, or as a pandas DataFrame.
There are some helper functions that let you create the nm_channels without much effort:
The **channels** can either be created as a *.tsv* file, or as a pandas DataFrame.
There are some helper functions that let you create the channels without much effort:

.. code-block:: python
nm_channels = nm_define_nmchannels.get_default_channels_from_data(data, car_rereferencing=True)
channels = nm_define_nmchannels.get_default_channels_from_data(data, car_rereferencing=True)
When setting up the :class:`~nm_stream_abc`, `nm_settings` and `nm_channels` can also be defined and passed to the init function:
When setting up the :class:`~nm_stream_abc`, `nm_settings` and `channels` can also be defined and passed to the init function:

.. code-block:: python
import py_neuromodulation as nm
stream = nm.Stream(
sfreq=sfreq,
nm_channels=nm_channels,
channels=channels,
settings=settings,
)
@@ -465,7 +465,7 @@ Coherence
~~~~~~~~~

**coherence** can be calculated for channel pairs that are passed as a list of lists.
Each list contains the channels specified in *nm_channels*.
Each list contains the channels specified in *channels*.
The mean and/or maximum in a specific frequency band can be calculated.
The maximum for all frequency bands can also be estimated.

20 changes: 8 additions & 12 deletions examples/plot_0_first_demo.py
@@ -11,8 +11,6 @@

import py_neuromodulation as nm

from py_neuromodulation import nm_analysis, nm_define_nmchannels, nm_plots, NMSettings

# %%
# Data Simulation
# ---------------
@@ -47,7 +45,7 @@ def generate_random_walk(NUM_CHANNELS, TIME_DATA_SAMPLES):
# %%
# Now let’s define the necessary setup files we will be using for data
# preprocessing and feature estimation. Py_neuromodulation is based on two
# parametrization files: the *nm_channels.tsv* and the *nm_setting.json*.
# parametrization files: the *channels.tsv* and the *default_settings.json*.
#
# nm_channels
# ~~~~~~~~~~~
@@ -93,21 +91,19 @@ def generate_random_walk(NUM_CHANNELS, TIME_DATA_SAMPLES):
# DataFrame. There are some helper functions that let you create the
# nm_channels without much effort:

nm_channels = nm_define_nmchannels.get_default_channels_from_data(
data, car_rereferencing=True
)
nm_channels = nm.utils.get_default_channels_from_data(data, car_rereferencing=True)

nm_channels

# %%
# Using this function, default channel names and a common average re-referencing scheme are specified.
# Alternatively the *nm_define_nmchannels.set_channels* function can be used to set each column's values.
# Alternatively the *define_nmchannels.set_channels* function can be used to set each column's values.
#
# nm_settings
# -----------
# Next, we will initialize the nm_settings dictionary and use the default settings, reset them, and enable a subset of features:

settings = NMSettings.get_fast_compute()
settings = nm.NMSettings.get_fast_compute()


# %%
@@ -140,7 +136,7 @@ def generate_random_walk(NUM_CHANNELS, TIME_DATA_SAMPLES):

stream = nm.Stream(
settings=settings,
nm_channels=nm_channels,
channels=nm_channels,
verbose=True,
sfreq=sfreq,
line_noise=50,
@@ -155,8 +151,8 @@
# There is a lot of output, which we could omit by verbose being False, but let's have a look what was being computed.
# We will therefore use the :class:`~nm_analysis` class to showcase some functions. For multi-run or multi-subject analysis we will pass the feature_file "sub" as the default directory:

analyzer = nm_analysis.FeatureReader(
feature_dir=stream.PATH_OUT, feature_file=stream.PATH_OUT_folder_name
analyzer = nm.FeatureReader(
feature_dir=stream.out_dir_root, feature_file=stream.experiment_name
)

# %%
@@ -178,7 +174,7 @@ def generate_random_walk(NUM_CHANNELS, TIME_DATA_SAMPLES):
analyzer.plot_all_features(ch_used="ch1")

# %%
nm_plots.plot_corr_matrix(
nm.analysis.plot_corr_matrix(
figsize=(25, 25),
show_plot=True,
feature=analyzer.feature_arr,
42 changes: 17 additions & 25 deletions examples/plot_1_example_BIDS.py
@@ -9,7 +9,7 @@
# *Electrocorticography is superior to subthalamic local field potentials
# for movement decoding in Parkinson’s disease*
# (`Merk et al. 2022 <https://elifesciences.org/articles/75126>_`).
# The dataset is available `here <https://doi.org/10.7910/DVN/IO2FLM>`_.
# The dataset is available `here <https://doi.org/10.7910/DVN/io.FLM>`_.
#
# For simplicity one example subject is automatically shipped within
# this repo at the *py_neuromodulation/data* folder, stored in
@@ -20,14 +20,6 @@
import matplotlib.pyplot as plt

import py_neuromodulation as nm
from py_neuromodulation import (
nm_analysis,
nm_decode,
nm_define_nmchannels,
nm_IO,
nm_plots,
NMSettings,
)

# %%
# Let's read the example using `mne_bids <https://mne.tools/mne-bids/stable/index.html>`_.
@@ -40,7 +32,7 @@
PATH_BIDS,
PATH_OUT,
datatype,
) = nm_IO.get_paths_example_data()
) = nm.io.get_paths_example_data()

(
raw,
@@ -49,9 +41,9 @@
line_noise,
coord_list,
coord_names,
) = nm_IO.read_BIDS_data(PATH_RUN=PATH_RUN)
) = nm.io.read_BIDS_data(PATH_RUN=PATH_RUN)

nm_channels = nm_define_nmchannels.set_channels(
channels = nm.utils.set_channels(
ch_names=raw.ch_names,
ch_types=raw.get_channel_types(),
reference="default",
@@ -75,7 +67,7 @@
plt.xlim(0, 20)

plt.subplot(122)
for idx, ch_name in enumerate(nm_channels.query("used == 1").name):
for idx, ch_name in enumerate(channels.query("used == 1").name):
plt.plot(raw.times, data[idx, :] + idx * 300, label=ch_name)
plt.legend(bbox_to_anchor=(1, 0.5), loc="center left")
plt.title("ECoG + STN-LFP time series")
@@ -84,17 +76,17 @@
plt.xlim(0, 20)

# %%
settings = NMSettings.get_fast_compute()
settings = nm.NMSettings.get_fast_compute()

settings.features.welch = True
settings.features.fft = True
settings.features.bursts = True
settings.features.sharpwave_analysis = True
settings.features.coherence = True

settings.coherence.channels = [("LFP_RIGHT_0", "ECOG_RIGHT_0")]
settings.coherence_settings.channels = [("LFP_RIGHT_0", "ECOG_RIGHT_0")]

settings.coherence.frequency_bands = ["high_beta", "low_gamma"]
settings.coherence_settings.frequency_bands = ["high beta", "low gamma"]
settings.sharpwave_analysis_settings.estimator["mean"] = []
settings.sharpwave_analysis_settings.sharpwave_features.enable_all()
for sw_feature in settings.sharpwave_analysis_settings.sharpwave_features.list_all():
@@ -103,7 +95,7 @@
# %%
stream = nm.Stream(
sfreq=sfreq,
nm_channels=nm_channels,
channels=channels,
settings=settings,
line_noise=line_noise,
coord_list=coord_list,
@@ -114,18 +106,18 @@
# %%
features = stream.run(
data=data,
out_path_root=PATH_OUT,
folder_name=RUN_NAME,
out_dir=PATH_OUT,
experiment_name=RUN_NAME,
save_csv=True,
)

# %%
# Feature Analysis Movement
# -------------------------
# The obtained performances can now be read and visualized using the :class:`nm_analysis.Feature_Reader`.
# The obtained performances can now be read and visualized using the :class:`analysis.Feature_Reader`.

# initialize analyzer
feature_reader = nm_analysis.FeatureReader(
feature_reader = nm.analysis.FeatureReader(
feature_dir=PATH_OUT,
feature_file=RUN_NAME,
)
@@ -165,7 +157,7 @@
)

# %%
nm_plots.plot_corr_matrix(
nm.analysis.plot_corr_matrix(
feature=feature_reader.feature_arr.filter(regex="ECOG_RIGHT_0"),
ch_name="ECOG_RIGHT_0_avgref",
feature_names=list(
@@ -186,14 +178,14 @@
#
# Here, we show an example using a sklearn linear regression model. The labels used come from a continuous grip-force movement target named "MOV_RIGHT".
#
# First we initialize the :class:`~nm_decode.Decoder` class with the specified *validation method*, here being a simple 3-fold cross validation,
# First we initialize the :class:`~decode.Decoder` class with the specified *validation method*, here being a simple 3-fold cross validation,
# the evaluation metric, used machine learning model, and the channels we want to evaluate performances for.
#
# There are many more implemented methods, but we will here limit it to the ones presented.

model = linear_model.LinearRegression()

feature_reader.decoder = nm_decode.Decoder(
feature_reader.decoder = nm.analysis.Decoder(
features=feature_reader.feature_arr,
label=feature_reader.label,
label_name=feature_reader.label_name,
@@ -219,7 +211,7 @@
df_per

# %%
ax = nm_plots.plot_df_subjects(
ax = nm.analysis.plot_df_subjects(
df_per,
x_col="sub",
y_col="performance_test",
6 changes: 3 additions & 3 deletions examples/plot_2_example_add_feature.py
@@ -11,13 +11,13 @@

# %%
# In this example we will demonstrate how a new feature can be added to the existing feature pipeline.
# This can be done by creating a new feature class that implements the protocol class :class:`~nm_features.NMFeature`
# and registering it with the :func:`~nm_features.AddCustomFeature` function.
# This can be done by creating a new feature class that implements the protocol class :class:`~features.NMFeature`
# and registering it with the :func:`~features.AddCustomFeature` function.


# %%
# Let's create a new feature class called `ChannelMean` that calculates the mean signal for each channel.
# We can optionally make it inherit from :class:`~nm_features.NMFeature` but as long as it has an adequate constructor
# We can optionally make it inherit from :class:`~features.NMFeature` but as long as it has an adequate constructor
# and a `calc_feature` method with the appropriate signatures it will work.
# The :func:`__init__` method should take the settings, channel names and sampling frequency as arguments.
# The `calc_feature` method should take the data and a dictionary of features as arguments and return the updated dictionary.
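
# The described signatures can be sketched as a plain class (standard library
# only; the settings object is unused here and the feature naming is
# illustrative, not the package's exact convention):

```python
class ChannelMean:
    """Sketch of a custom feature implementing the described protocol."""

    def __init__(self, settings, ch_names, sfreq):
        self.ch_names = ch_names  # one feature column per channel
        self.sfreq = sfreq

    def calc_feature(self, data, features):
        # data: rows = channels, columns = samples
        for ch_idx, ch_name in enumerate(self.ch_names):
            samples = data[ch_idx]
            features[f"{ch_name}_mean"] = sum(samples) / len(samples)
        return features

feat = ChannelMean(settings=None, ch_names=["ch1", "ch2"], sfreq=1000)
out = feat.calc_feature([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]], {})
# out == {"ch1_mean": 2.0, "ch2_mean": 20.0}
```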
