Hi,
I used my own pre-trained Wavegram_Logmel_Cnn14 model with inference.py and got the following error:
Traceback (most recent call last):
File "test.py", line 948, in <module>
audio_tagging(args)
File "test.py", line 892, in audio_tagging
batch_output_dict = model(waveform, None)
File "/root/miniconda3/envs/sed/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/miniconda3/envs/sed/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
return self.module(*inputs[0], **kwargs[0])
File "/root/miniconda3/envs/sed/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "test.py", line 563, in forward
x = torch.cat((x, a1), dim=1)
RuntimeError: Sizes of tensors must match except in dimension 2. Got 68 and 69 (The offending index is 0)
This happens because of rounding/flooring that occurs when creating the wavegrams and spectrograms: the two branches can end up with different numbers of frames, so the concatenation in forward fails. Try interpolating your inputs onto a different grid. For instance, I got this error when my input was (60, 220500), but not for (60, 180000).
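As a minimal sketch of that workaround (the helper name `resample_waveform` and the target length of 180000 are just illustrative, taken from the shapes mentioned above), you can linearly interpolate the waveform batch onto a length that does not trigger the frame-count mismatch before calling the model:

```python
import torch
import torch.nn.functional as F

def resample_waveform(waveform: torch.Tensor, target_len: int) -> torch.Tensor:
    """Interpolate a batch of waveforms (batch, samples) to target_len samples."""
    x = waveform.unsqueeze(1)  # (batch, 1, samples) for 1-D linear interpolation
    x = F.interpolate(x, size=target_len, mode='linear', align_corners=False)
    return x.squeeze(1)        # back to (batch, samples)

# A (60, 220500) input reportedly hit the size mismatch, while (60, 180000) did not,
# so resample onto the working grid before inference:
waveform = torch.randn(60, 220500)
waveform = resample_waveform(waveform, 180000)
# batch_output_dict = model(waveform, None)
```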