Clarification for discussion of shared memory in `torch.from_numpy(ndarray)` and `torch.Tensor.numpy()` #926
Hello. Source:

"""
Therefore, if you pass int64 array to torch.Tensor, output tensor is float tensor and they wouldn't share the storage. torch.from_numpy gives you torch.LongTensor as expected.

```python
import numpy as np
import torch

a = np.arange(10)
ft = torch.Tensor(a)      # same as torch.FloatTensor
it = torch.from_numpy(a)

a.dtype   # == dtype('int64')
ft.dtype  # == torch.float32
it.dtype  # == torch.int64
```

answered Sep 18, 2018 by Viacheslav Kroilov
"""
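A quick way to verify the storage claim above (a minimal sketch, not part of the original quote; variable names follow the snippet): mutate the NumPy array in place and see which tensor reflects the change.

```python
import numpy as np
import torch

a = np.arange(10)
ft = torch.Tensor(a)       # copies `a` and casts to float32
it = torch.from_numpy(a)   # shares memory with `a`

a[0] = 100                 # mutate the NumPy array in place
print(ft[0])               # tensor(0.)  -- the float copy is unaffected
print(it[0])               # tensor(100) -- the shared view sees the change
```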
Just found the corrections page for the video here, where this point has already been acknowledged: #98. But I'm leaving this open for now, as the book description (code lines 65-67, https://www.learnpytorch.io/00_pytorch_fundamentals/#pytorch-tensors-numpy) is still a little misleading.
Thanks for pointing that out.
I tried the code, and it works as you describe. Hope that clears it up; if not, feel free to ask.
Thank you for this! I've noted this down and will work on a clearer explanation + fix in the notebooks shortly. |
Thank you for this wonderful resource!
I have noticed that in the "PyTorch and Numpy" video (https://youtu.be/Z_ikDlimN6A?si=fufjAATXrinMXGtu&t=13085), as well as in the online book (https://www.learnpytorch.io/00_pytorch_fundamentals/#pytorch-tensors-numpy), the provided explanations for `torch.from_numpy(ndarray)` and `torch.Tensor.numpy()` suggest that the inputs and outputs of these functions are not linked in memory. The explanation provided is based on the example that the code below:
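(The code block from the original issue did not survive the copy; the following is a reconstruction based on the book's example and the two lines quoted later in this issue, so treat it as a sketch rather than the exact snippet.)

```python
import numpy as np
import torch

array = np.arange(1.0, 8.0)        # NumPy float64 array: [1., 2., ..., 7.]
tensor = torch.from_numpy(array)   # tensor initially shares memory with `array`

array = array + 1                  # rebinds `array` to a brand-new array
array, tensor                      # the two now hold different values
```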
will lead to `array` and `tensor` having different values, and that the lack of a connection occurs at the moment of calling `tensor = torch.from_numpy(array)`.

However, it's my understanding that the actual moment these two variables begin pointing to different memory addresses is the addition line, `array = array + 1`. You can see that both `array` and `tensor` are still tied together before this addition in this example:
This topic is clarified better here: https://stackoverflow.com/questions/61526297/pytorch-memory-model-how-does-torch-from-numpy-work
Sorry if I am misunderstanding and thanks again for making all this wonderful content!