
cannot convert 'struct THDoubleTensor *' to 'struct THFloatTensor *' #4

Open
adam-hanna opened this issue Aug 23, 2018 · 2 comments


adam-hanna commented Aug 23, 2018

I'm getting a strange error. Do I have a wrong package version or something?

$ th train.lua
...
<dataset> Loaded 264660 filepaths
<dataset> loaded 1000 random examples
POST    /events
POST    /events
POST    /events
POST    /events
<trainer> Epoch #1 [batchSize = 32]
/root/torch/install/bin/luajit: /root/torch/install/share/lua/5.1/nn/THNN.lua:110: bad argument #3 to 'v' (cannot convert 'struct THDoubleTensor *' to 'struct THFloatTensor *')
stack traceback:
        [C]: in function 'v'
        /root/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'BCECriterion_updateOutput'
        /root/torch/install/share/lua/5.1/nn/BCECriterion.lua:33: in function 'forward'
        ./adversarial.lua:96: in function 'opfunc'
        ./interruptable_optimizers.lua:60: in function 'interruptableAdam'
        ./adversarial.lua:264: in function 'train'
        train.lua:207: in function 'main'
        train.lua:212: in main chunk
        [C]: in function 'dofile'
        /root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
        [C]: at 0x00405d50

aleju (Owner) commented Aug 24, 2018

In older versions of Torch you could set the default tensor datatype to 32-bit floats, so that all newly created tensors were float tensors by default. At some point that behavior seems to have changed: even when the default tensor type is set, it only affects some created tensors, and this can even change from one loop iteration to the next. As a result, some tensors end up with datatype 64-bit float (i.e. double) instead of the required 32-bit float, which leads to that error. The solution is either to use a very old version of Torch or to manually cast all created tensors to 32-bit floats (I think that was done by adding :float() to each command that creates a tensor). The latter would be the correct fix, but I currently don't have the time for that, sorry.
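A minimal sketch of the casting workaround described above, using standard Torch7 APIs. The exact tensors that need casting depend on the code in adversarial.lua; the variable names below are only illustrative:

```lua
require 'torch'
require 'nn'

-- Request float as the default tensor type (may not be honored at every
-- creation site in newer Torch versions, as described above).
torch.setdefaulttensortype('torch.FloatTensor')

-- Where the default is not respected, cast explicitly at each creation
-- site, e.g. for the target tensor passed to nn.BCECriterion:
local batchSize = 32
local targets = torch.Tensor(batchSize):fill(1):float()

-- Casting the criterion (and the model) to float keeps their internal
-- buffers consistent with float inputs:
local criterion = nn.BCECriterion():float()
local output = torch.rand(batchSize):float()
local loss = criterion:forward(output, targets)
```

Mixing even one double tensor into forward() is enough to trigger the THDoubleTensor/THFloatTensor conversion error from THNN, so every tensor on the path into the criterion has to be cast consistently.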

@adam-hanna (Author)

Cool, no worries! I'll see if I can take a shot at it.
