In older versions of torch one could set the default tensor datatype to 32-bit float, so that all newly created tensors were float tensors by default. At some point that behavior seems to have changed: setting the default tensor type now only affects some of the created tensors, and which ones are affected can even change from one loop iteration to the next. As a result, some tensors are created with datatype 64-bit float (i.e. double) instead of the required 32-bit float, which leads to that error. The solution is either to use a very old version of torch or to manually cast every created tensor to 32-bit float (I think that was done by appending :float() to each command that creates a tensor). The latter would be the correct fix, but currently I don't have the time for that, sorry.
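A minimal sketch of the workaround described above, assuming Lua Torch (the tensor names and shapes here are hypothetical, just for illustration):

```lua
require 'torch'

-- Setting the default tensor type may no longer cover every
-- tensor creation, as described above:
torch.setdefaulttensortype('torch.FloatTensor')

-- So cast each created tensor explicitly by appending :float()
local x = torch.rand(3, 4):float()   -- random tensor, forced to FloatTensor
local y = torch.zeros(3, 4):float()  -- zero tensor, forced to FloatTensor

print(x:type())  -- should report torch.FloatTensor
```

Appending `:float()` at every creation site is tedious but removes the dependence on the (apparently unreliable) default-type setting.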
I'm getting a strange error. Do I have a wrong package version or something?