add max_channel column for phy interface #961

Open · wants to merge 12 commits into base: main
Conversation

bendichter (Contributor)

Some older Phy-based NWB files, e.g. https://neurosift.app/?p=/nwb&url=https://api.dandiarchive.org/api/assets/b4a15b14-3a7b-49ae-b9ba-fa2c4d8f6c29/download/&dandisetId=000053&dandisetVersion=0.210819.0345, do not have any information that could associate each unit with an electrode (or physical location). Without this information it is not possible to map a unit to 3D coordinates or a brain area, which has limited the utility of the data for secondary analysis. This PR attempts to remedy that by getting the templates from the Phy output files, calculating the peak channel of each template, and adding that as a property of the units table.
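For context, a rough sketch of that computation (illustrative only, assuming the standard Kilosort/Phy output files templates.npy, whitening_mat_inv.npy, and channel_map.npy; the names and the exact peak definition here are not necessarily those used in the PR):

import numpy as np

# Illustrative sketch of the approach, not the exact PR implementation.
templates = np.load("phy_folder/templates.npy")        # (num_templates, num_samples, num_channels)
winv = np.load("phy_folder/whitening_mat_inv.npy")     # (num_channels, num_channels)
channel_map = np.load("phy_folder/channel_map.npy")    # template columns -> recording channel ids

templates_unwhitened = templates @ winv                # undo the whitening applied by Kilosort
max_over_time = templates_unwhitened.max(axis=1)       # peak value per (template, channel)
max_channel = channel_map[np.argmax(max_over_time, axis=1)].ravel()

Whether the peak should be the signed maximum or the largest absolute value is discussed further down the thread.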

@CodyCBakerPhD CodyCBakerPhD requested review from h-mayorquin and removed request for CodyCBakerPhD July 14, 2024 22:03
@h-mayorquin (Collaborator) left a comment

When the nwbfile has electrodes, the max_channel is stored as a reference to the electrodes table

if property in ["max_channel", "max_electrode"] and nwbfile.electrodes is not None:
data_to_add[property].update(table=nwbfile.electrodes)

But this does not happen for the test that you added, because we are only writing a units table to the nwbfile there. So max_channel is just a normal int column.

For both cases I feel that the description added to the units table is sort of wrong:

max_channel="The recording channel id with the largest amplitude.",
halfwidth="The full-width half maximum of the negative peak computed on the maximum channel.",

This is pointing to an electrodes table index; the channel ids of the original extractor are most likely lost in the first case, and meaningless when only Phy is written, as in your test. What do you think?

In general I am wondering about the case where you also add a recording. I am not sure the mapping is done right. Something like the example in the docstring here:

https://neuroconv--961.org.readthedocs.build/en/961/conversion_examples_gallery/combinations/spikeglx_and_phy.html

I am not sure the references to the electrodes will be mapped to the right electrode. What do you think?

from pynwb import NWBHDF5IO
from spikeinterface.extractors.nwbextractors import read_nwbfile
Collaborator:

This confused me for a bit. I think it would be better to use read_nwb_sorting instead of read_nwbfile.

Contributor Author:

I am using read_nwbfile because I want to read the data into an NWBFile object and confirm that the max_channel column was properly written to the units table. This seemed like a convenient way to read the data, but I agree it's a bit strange to rely on SpikeInterface for this.

Collaborator:

Ah, Ben, my confusion was worse than that. Sadly, we have that function name duplicated in that space and I thought you were calling this function (which is at the bottom of the same file):

https://github.com/h-mayorquin/spikeinterface/blob/2085cbe9a7fb74ce5d9c2c831611eb2a596de686/src/spikeinterface/extractors/nwbextractors.py#L1367-L1368

@@ -23,6 +27,23 @@ def get_source_schema(cls) -> dict:
] = "Path to the output Phy folder (containing the params.py)."
return source_schema

def get_max_channel(self):
Collaborator:

Docstring? It would be useful here to describe some of the Phy characteristics. For example, I read from the code below that the templates are stored unwhitened; the axis operation would be clearer if you wrote the template shape somewhere (I think its shape is (num_templates, num_samples, num_channels), right?). Plus, the definition of cluster id.

Contributor Author:

I agree this could use a docstring
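Something along these lines might work (a sketch only; the shapes and wording should be checked against the actual Phy output):

def get_max_channel(self):
    """
    Compute the channel with the largest template amplitude for each unit.

    Loads the templates from the Phy output folder (templates.npy, assumed shape
    (num_templates, num_samples, num_channels)), undoes the whitening using
    whitening_mat_inv.npy, takes the extremum over the time axis for each
    template, and maps the winning column through channel_map.npy back to the
    recording channel ids.

    Returns
    -------
    numpy.ndarray
        The peak channel of each unit's template.
    """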

write_as: Literal["units", "processing"] = "units",
units_name: str = "units",
units_description: str = "Imported from Phy",
include_max_channel: bool = True,
Collaborator:

It is nice that this inherits the doc from the parent:

[screenshot of the inherited docstring]

But the new include_max_channel is not documented.

@bendichter (Contributor Author)

@h-mayorquin I agree that these indices don't have much meaning if the electrodes table is missing. I think sorting interfaces should write the electrodes table.

idx_max_channel = np.argmax(max_over_time, axis=1)
max_channel = channel_map[idx_max_channel].ravel()

return max_channel
@h-mayorquin (Collaborator) commented on Aug 16, 2024:

Also, two other questions here:

  1. Aren't the templates scaled? I am thinking of amplitudes.npy. This would mean the argmax operation might not be right.
  2. Also, is max_channel meaningful if spikes are negative? I think here it is consistent, but aren't most people interested in the largest absolute value? In the templates database we use "best channel", which does not carry that implication.

Now, I am aware that you are using machinery that is already there in add_units_table, but I wanted to point this out.

https://phy.readthedocs.io/en/latest/sorting_user_guide/

Contributor Author:

  1. Scaled how? In a way that would affect which channel is selected?
  2. Yeah, good point, the argmax(abs()) might be better.
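If the absolute-value convention is preferred, the change would be small (a sketch reusing the names from the diff above; templates_unwhitened is assumed to be the de-whitened template array with shape (num_templates, num_samples, num_channels)):

# Use the magnitude extremum over time so large negative troughs also count.
max_over_time = np.max(np.abs(templates_unwhitened), axis=1)
idx_max_channel = np.argmax(max_over_time, axis=1)
max_channel = channel_map[idx_max_channel].ravel()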

Collaborator:

Yeah, I shared this with a friend who uses Phy and he flagged that. Reading the documentation, I feel less certain.

We could do a round-trip with spikeinterface artificial data and the export-to-phy functionality to see if your function gets it right.
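Something like this could serve as that round-trip (a sketch assuming a recent spikeinterface with the SortingAnalyzer API; the extensions required by export_to_phy may vary between versions):

from spikeinterface.core import generate_ground_truth_recording, create_sorting_analyzer
from spikeinterface.exporters import export_to_phy

# Build a small synthetic recording/sorting pair with known ground truth.
recording, sorting = generate_ground_truth_recording(num_channels=8, num_units=5, durations=[10.0])

# Compute the pieces the Phy exporter needs (extension names may differ by version).
analyzer = create_sorting_analyzer(sorting, recording)
analyzer.compute(["random_spikes", "waveforms", "templates", "spike_amplitudes", "principal_components"])

export_to_phy(analyzer, output_folder="phy_roundtrip")

# The exported folder can then be fed to PhySortingInterface and the computed
# max_channel compared against the channels of the ground-truth templates.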

Collaborator:

There is some code by Nick Steinmetz that calculates the template max without using the amplitudes (but they don't do the de-whitening step that you do):

https://github.com/cortex-lab/spikes/blob/master/analysis/templatePositionsAmplitudes.m

Contributor Author:

I think he does. Look at winv.

Collaborator:

But the block where the max channel is calculated takes the temps as they come directly from the input:

https://github.com/cortex-lab/spikes/blob/fcea2b20e736b533e5baf612752b66121a691128/analysis/templatePositionsAmplitudes.m#L64-L71

Am I missing something?

@pauladkisson (Member)

Tried on Schneider lab data and found reasonable levels of agreement between max_channel and their 'best channel':

exact channel agreement = 58.55%
mean absolute deviation of channel id = 8.32

[plot: max_channel vs best channel]

@pauladkisson (Member)

But, I do get an error when I try to call the to_dataframe() method on the units table:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[17], line 9
      6 # max_channel = np.array(nwbfile.units.max_channel.data)
      7 ch = np.array(nwbfile.units.ch.data)
----> 9 units_table = nwbfile.units.to_dataframe()

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/utils.py:668, in docval.<locals>.dec.<locals>.func_call(*args, **kwargs)
    666 def func_call(*args, **kwargs):
    667     pargs = _check_args(args, kwargs)
--> 668     return func(args[0], **pargs)

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/common/table.py:1225, in DynamicTable.to_dataframe(self, **kwargs)
   1217 """
   1218 Produce a pandas DataFrame containing this table's data.
   (...)
   1222 If exclude is None, this is equivalent to table.get(slice(None, None, None), index=False).
   1223 """
   1224 arg = slice(None, None, None)  # select all rows
-> 1225 sel = self.__get_selection_as_dict(arg, df=True, **kwargs)
   1226 ret = self.__get_selection_as_df(sel)
   1227 return ret

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/common/table.py:1088, in DynamicTable.__get_selection_as_dict(self, arg, df, index, exclude, **kwargs)
   1086     raise IndexError(msg) from ie
   1087 else:  # pragma: no cover
-> 1088     raise ie

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/common/table.py:1063, in DynamicTable.__get_selection_as_dict(self, arg, df, index, exclude, **kwargs)
   1061         ret[name] = col.get(arg, df=False, index=True, **kwargs)
   1062     else:
-> 1063         ret[name] = col.get(arg, df=df, index=index, **kwargs)
   1064     return ret
   1065 # if index is out of range, different errors can be generated depending on the dtype of the column
   1066 # but despite the differences, raise an IndexError from that error

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/common/table.py:1462, in DynamicTableRegion.get(self, arg, index, df, **kwargs)
   1460 uniq = np.unique(ret)
   1461 lut = {val: i for i, val in enumerate(uniq)}
-> 1462 values = self.table.get(uniq, df=df, index=index, **kwargs)
   1463 if df:
   1464     ret = values.iloc[[lut[i] for i in ret]]

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/common/table.py:1025, in DynamicTable.get(self, key, default, df, index, **kwargs)
   1021         return default
   1022 else:
   1023     # index by int, list, np.ndarray, or slice -->
   1024     # return pandas Dataframe or lists consisting of one or more rows
-> 1025     sel = self.__get_selection_as_dict(key, df, index, **kwargs)
   1026     if df:
   1027         # reformat objects to fit into a pandas DataFrame
   1028         if np.isscalar(key):

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/common/table.py:1088, in DynamicTable.__get_selection_as_dict(self, arg, df, index, exclude, **kwargs)
   1086     raise IndexError(msg) from ie
   1087 else:  # pragma: no cover
-> 1088     raise ie

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/common/table.py:1053, in DynamicTable.__get_selection_as_dict(self, arg, df, index, exclude, **kwargs)
   1050 ret = OrderedDict()
   1051 try:
   1052     # index with a python slice or single int to select one or multiple rows
-> 1053     ret['id'] = self.id[arg]
   1054     for name in self.colnames:
   1055         if name in exclude:

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/container.py:940, in Data.__getitem__(self, args)
    939 def __getitem__(self, args):
--> 940     return self.get(args)

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/hdmf/container.py:948, in Data.get(self, args)
    945 if isinstance(self.data, h5py.Dataset) and isinstance(args, np.ndarray):
    946     # This is needed for h5py 2.9 compatibility
    947     args = args.tolist()
--> 948 return self.data[args]

File h5py/_objects.pyx:54, in h5py._objects.with_phil.wrapper()

File h5py/_objects.pyx:55, in h5py._objects.with_phil.wrapper()

File /opt/anaconda3/envs/schneider_lab_to_nwb_env/lib/python3.12/site-packages/h5py/_hl/dataset.py:758, in Dataset.__getitem__(self, args, new_dtype)
    756 if self._fast_read_ok and (new_dtype is None):
    757     try:
--> 758         return self._fast_reader.read(args)
    759     except TypeError:
    760         pass  # Fall back to Python read pathway below

File h5py/_selector.pyx:361, in h5py._selector.Reader.read()

File h5py/_selector.pyx:212, in h5py._selector.Selector.apply_args()

IndexError: Fancy indexing out of range for (0-0)

codecov bot commented on Sep 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 90.47%. Comparing base (36464df) to head (33ba125).
Report is 1 commit behind head on main.


@@            Coverage Diff             @@
##             main     #961      +/-   ##
==========================================
+ Coverage   90.44%   90.47%   +0.03%     
==========================================
  Files         129      129              
  Lines        8055     8077      +22     
==========================================
+ Hits         7285     7308      +23     
+ Misses        770      769       -1     
Flag        Coverage Δ
unittests   90.47% <100.00%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines                                     Coverage Δ
...onv/datainterfaces/ecephys/phy/phydatainterface.py        100.00% <100.00%> (ø)

... and 1 file with indirect coverage changes

@h-mayorquin (Collaborator)

That's great. You said that you have location data for the channels, right? I think that if we plot the distance between the locations of the calculated best_channel and the user-defined best channels, the effect should be even clearer.
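For instance (a sketch with hypothetical variable names; channel_locations would come from the probe geometry, and max_channel / best_channel are the per-unit channel indices being compared):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs:
#   channel_locations : (num_channels, 2) array of x/y probe positions
#   max_channel       : per-unit channel computed from the templates
#   best_channel      : per-unit channel provided by the lab
distances = np.linalg.norm(channel_locations[max_channel] - channel_locations[best_channel], axis=1)

plt.hist(distances, bins=30)
plt.xlabel("distance between max_channel and best channel (um)")
plt.ylabel("number of units")
plt.show()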

I also tested artificial data and obtained matching results with @bendichter's method:

https://gist.github.com/h-mayorquin/11a2e30a9fe1dad45c2075697f2e44e6

I think this is ready to go; we should just add a docstring. I also would like to avoid using the spikeinterface method to read the nwbfile, for the reasons mentioned above.

@pauladkisson (Member)

> I think this is ready to go

What about the to_dataframe() error?

@h-mayorquin (Collaborator)

> I think this is ready to go
>
> What about the to_dataframe() error?

Let me see if I can reproduce it with the mock sorting.

@h-mayorquin (Collaborator)

@pauladkisson

This works for me without error

from neuroconv.tools.testing import MockSortingInterface, MockRecordingInterface

recording_interface = MockRecordingInterface(durations=[1.0])
nwbfile = recording_interface.create_nwbfile()

sorting_interface = MockSortingInterface(num_units=3)
sorting_interface.sorting_extractor.set_property(key="max_channel", values=[0, 1, 2])

metadata = sorting_interface.get_metadata()
sorting_interface.add_to_nwbfile(nwbfile, metadata=metadata)


from pynwb import NWBHDF5IO

with NWBHDF5IO("test.nwb", mode="w") as io:
    io.write(nwbfile)

with NWBHDF5IO("test.nwb", mode="r") as io:
    read_nwbfile = io.read()
    units_df = read_nwbfile.units.to_dataframe()

units_df

I wonder what's going on. Did you write the electrodes table before?

@pauladkisson (Member)

Ok, I tracked down the problem to https://github.com/catalystneuro/neuroconv/tree/add_max_channel_for_phy/src/neuroconv/tools/spikeinterface/spikeinterface.py#L1476

I'm not sure what this code is supposed to do exactly, but if max_channel is named something different (e.g. 'my_max_channel') it no longer fails.

@h-mayorquin, any idea what's going on here?

@h-mayorquin (Collaborator)

> Ok, I tracked down the problem to https://github.com/catalystneuro/neuroconv/tree/add_max_channel_for_phy/src/neuroconv/tools/spikeinterface/spikeinterface.py#L1476
>
> I'm not sure what this code is supposed to do exactly, but if max_channel is named something different (e.g. 'my_max_channel') it no longer fails.
>
> @h-mayorquin, any idea what's going on here?

The code is creating the link between max_channel and the electrodes table.

Can you open an issue with a script that reproduces this error? If you can use the mock interfaces, great, but I can also download the data from Google Drive if that is not possible.

@pauladkisson (Member)

I can't replicate it with a mock, but here is the code that gives an error with Schneider Lab data:

from neuroconv.tools.testing import MockSortingInterface, MockRecordingInterface
from neuroconv.datainterfaces import PhySortingInterface

def main():
    recording_interface = MockRecordingInterface(durations=[1.0])
    nwbfile = recording_interface.create_nwbfile()

    folder_path = "/Volumes/T7/CatalystNeuro/Schneider/Schneider sample Data/Processed Ephys/m69_2023-10-31_17-24-15_Day1_A1"
    sorting_interface = PhySortingInterface(folder_path=folder_path)

    metadata = sorting_interface.get_metadata()
    sorting_interface.add_to_nwbfile(nwbfile, metadata=metadata)


    from pynwb import NWBHDF5IO

    with NWBHDF5IO("test.nwb", mode="w") as io:
        io.write(nwbfile)

    with NWBHDF5IO("test.nwb", mode="r") as io:
        read_nwbfile = io.read()
        units_df = read_nwbfile.units.to_dataframe()

    print(units_df)

if __name__ == "__main__":
    main()
