infer_type fails with Tuple index out of range error. #69

Open
sheiksadique opened this issue Oct 18, 2023 · 9 comments

sheiksadique (Collaborator) commented Oct 18, 2023

When trying to generate a NIR graph with paper/02_cnn/mint_sinabs_to_nir.py, the script fails while saving the graph.

The change needed on the sinabs side was to add the input_shape parameter; I added it and set it to None.

sinabs: latest commit on the dev/nir branch.
sinabs-dynapcnn: from pip.
(NOTE: these two packages need to be installed fully (pip install ., not pip install -e .) and do not work with an editable/dev install.)

NIRTorch: latest commit on main branch.
nir: latest commit on paper branch.

I also tried running just the infer_types method and ran into the same error message, so I believe this is the root of the problem.

sheiksadique (Collaborator, Author) commented:

I see that the input type for the input node is (1, 2, 32, 32). So the batch dim is appearing there. I assume this is not supposed to be there?
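
For reference, a quick way to check where the batch dim shows up, assuming the exported graph has been written to disk and that nir nodes expose input_type/output_type dicts of shape arrays (the file name below is only illustrative):

```python
import nir

# Illustrative path; use the file written by the export script.
graph = nir.read("cnn_sinabs.nir")

# Print each node's declared input/output shapes to spot a leading batch dim.
for name, node in graph.nodes.items():
    print(name, getattr(node, "input_type", None), getattr(node, "output_type", None))
```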

stevenabreu7 (Collaborator) commented:

Yeah, the batch dimension shouldn't be there

Jegp (Collaborator) commented Oct 18, 2023

Could the batch dimension be related to the batch dimension in the sample data? On line 40 it's torch.rand((1, 2, 34, 34)).

sheiksadique (Collaborator, Author) commented:

Torch models still expect input with the batch dimension included. It should be ignored only at the conversion stage to NIR.

Jegp (Collaborator) commented Oct 19, 2023

Is that true? This code works well for me:

import torch
torch.nn.Linear(1, 2)(torch.zeros(1))
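
For what it's worth, recent PyTorch versions also accept unbatched input for convolutional layers, so torch modules do not strictly require a batch dimension either; a minimal check (assuming a reasonably recent PyTorch build):

```python
import torch

conv = torch.nn.Conv2d(2, 4, kernel_size=3)
out = conv(torch.zeros(2, 34, 34))  # unbatched (C, H, W) input
print(out.shape)  # torch.Size([4, 32, 32])
```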

SirineArfa commented:

I get the same error when running infer_type() on a NIRGraph generated by snntorch.export_to_nir() for an input of size [1, 150, 2, 32, 32], which includes the batch dimension. Is there a fix for this yet?

SirineArfa commented Apr 4, 2024


I found a fix for this: adjust (if necessary, and only in case of a mismatch) the dimensions of padding, dilation, and stride in ir._calculate_conv_output() so that they match the dimensionality of input_shape.
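
Roughly, the failure mode and the adjustment look like this (a standalone sketch, not the actual nir.ir code; calc_conv_output and _broadcast are hypothetical names):

```python
def _broadcast(param, ndim):
    # Broadcast a scalar or a short tuple to one entry per input dimension.
    if not hasattr(param, "__len__"):
        return (param,) * ndim
    return tuple(param) if len(param) == ndim else (param[0],) * ndim

def calc_conv_output(input_shape, kernel_size, stride, padding, dilation):
    ndim = len(input_shape)
    kernel_size, stride, padding, dilation = (
        _broadcast(p, ndim) for p in (kernel_size, stride, padding, dilation)
    )
    return tuple(
        (input_shape[d] + 2 * padding[d] - dilation[d] * (kernel_size[d] - 1) - 1)
        // stride[d] + 1
        for d in range(ndim)
    )

# With spatial dims only, this is fine:
print(calc_conv_output((32, 32), (3, 3), (1, 1), (0, 0), (1, 1)))  # (30, 30)
# If a batch/channel dim leaks into input_shape, e.g. (1, 2, 32, 32), a version
# that indexes padding[2] and padding[3] without broadcasting raises
# "IndexError: tuple index out of range" instead.
```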

stevenabreu7 (Collaborator) commented:

To re-open this, do we agree that all NIR graphs should have the batch dimension removed? Or is this something that should go into the newly added metadata field?

We could also add a helper method to add/remove the batch dimension from a NIR graph (it should not be too difficult to apply something like unsqueeze(0) to all nodes in the graph).

Thoughts? @Jegp @SirineArfa @matjobst (and anyone else)
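
A rough sketch of what such a helper could look like (strip_batch_dim is hypothetical, not an existing nir API, and it assumes nodes expose input_type/output_type as dicts of shape arrays):

```python
import numpy as np

def strip_batch_dim(graph, dim=0):
    """Drop a leading batch dim from every node's declared shapes (sketch)."""
    for node in graph.nodes.values():
        for types in (getattr(node, "input_type", None), getattr(node, "output_type", None)):
            if not types:
                continue
            for key, shape in types.items():
                shape = np.asarray(shape)
                if shape.ndim == 1 and len(shape) > dim:
                    types[key] = np.delete(shape, dim)
    return graph
```

An add_batch_dim counterpart would do the inverse with something like np.insert(shape, dim, 1).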

Jegp (Collaborator) commented Apr 24, 2024

Happy to put that in the spec. Ideally, the graphs should be independent of batches, IMO; shouldn't the computation be independent of the batch dimension anyway? We're not doing any batch norm in NIR, for instance.
