Feature Request: Add "axes" option in tensor contract #74
Comments
Two remarks. First, this contraction can be written either as

```julia
ts_ctrct = tensorcontract(ts1, (1, 2, 3, 4), ts2, (4, 5, 6, 7))
```

or as

```julia
@tensor ts_ctrct[1, 2, 3, 5, 6, 7] := ts1[1, 2, 3, 4] * ts2[4, 5, 6, 7]
```

Second, I did some tests myself, comparing TensorOperations against NumPy. Could you provide more details with respect to your conclusion that TensorOperations is way slower?

Sorry for the late reply, I indeed forgot about the compilation time... I also benchmarked on my computer using your code and got similar results. But I still want to ask if there is a neat way of using `tensorcontract` when the number of indices is large.

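For reference, a minimal sketch of how one can time this while excluding compilation (the array shapes here are made up):

```julia
using TensorOperations, BenchmarkTools

A = rand(30, 30, 30);
B = rand(30, 30, 30);

# @btime runs the expression repeatedly after a warm-up run, so Julia's
# one-off compilation cost is not included in the reported timing.
@btime tensorcontract($A, (1, 2, 3), $B, (3, 4, 5));
```
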
I could add a method that specifies the contraction in terms of axes directly.

Besides, I mention that the …

I did some tests for bigger tensors (RAM: 16 GB), comparing Julia 1.0.5 against Python 3.7.3 with NumPy 1.17.2; the timings came out comparable. I also attached my NumPy configuration for reference.

Yes, contracting say the last axis of tensor 1 with the first axis of tensor 2 is exactly one of the tensor contractions that can be mapped directly to a matrix multiplication, without any additional permutations or reshuffling of the data in memory. So all the runtime is essentially in the matrix multiplication, and I would expect the timings to be even more similar. Anyway, for large tensors most of the runtime should be in the matrix multiplication even if reshuffling is involved, and as such the timings should never differ by huge amounts.

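To make that mapping concrete, here is a small sketch (sizes made up) showing that this particular contraction is nothing more than a matrix multiplication on reshaped data:

```julia
using TensorOperations

A = rand(10, 20, 30);
B = rand(30, 40, 50);

# Contract the last axis of A with the first axis of B.
@tensor C[i, j, k, l] := A[i, j, m] * B[m, k, l];

# Because Julia arrays are column-major, the same contraction is a plain
# matrix multiplication on reshaped data; no permutation is needed.
C2 = reshape(reshape(A, 10 * 20, 30) * reshape(B, 30, 40 * 50), 10, 20, 40, 50);

C ≈ C2  # true
```
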
Hi, just chiming in to say that I would also love to see this feature. I'd also be happy to create a pull request for it.

In principle I am not opposed to other methods; I hardly ever use the method syntax. However, they do have to use Julia terminology, and not be just plain copies of NumPy functions. So I guess the naming and interface would need some thought.

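For the sake of discussion, a hypothetical sketch of such a method (the name `contract_axes` and its interface are made up here, not part of TensorOperations); it merely builds the integer label tuples that `tensorcontract` already understands:

```julia
using TensorOperations

# Hypothetical helper: contract the axes dimsA of A with the matching
# axes dimsB of B, generating the label tuples automatically so they
# need not be written out by hand.
function contract_axes(A, B, dimsA, dimsB)
    NA, NB = ndims(A), ndims(B)
    labelsA = collect(1:NA)               # every axis of A gets a unique label
    labelsB = collect(NA + 1 : NA + NB)   # every axis of B gets a unique label
    for (da, db) in zip(dimsA, dimsB)
        labelsB[db] = labelsA[da]         # contracted axes share a label
    end
    return tensorcontract(A, Tuple(labelsA), B, Tuple(labelsB))
end

A = rand(2, 3, 4);
B = rand(4, 3, 5);
C = contract_axes(A, B, (2, 3), (2, 1))  # contract A's axes 2,3 with B's axes 2,1
size(C) == (2, 5)                        # open axes: A's first and B's last
```
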
Is this related to why this fails?

```julia
# naive inner product for nd tensors
A = rand(1, 2, 3, 4, 5)
B = rand(1, 2, 3, 4, 5)
@tensor C[a, b, c, d] := A[a, b, c, d, e] * conj(B[a, b, c, d, e])
```

Or do I just have a logical error? It represents the self-attention score calculation in a transformer, in case that helps anyone understand.

I believe the rule for this package is that every index must appear exactly twice: either both on the right, or once on the left and once on the right (when there is a single term). It doesn't handle things like batched matrix multiplication, nor this, which I suppose you could call a batched dot product.

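For what it's worth, a small sketch of how that batched dot product can be computed without `@tensor`, using plain broadcasting and a reduction (same shapes as in the snippet above):

```julia
A = rand(1, 2, 3, 4, 5);
B = rand(1, 2, 3, 4, 5);

# Multiply elementwise, sum over the last axis, then drop that axis:
C = dropdims(sum(A .* conj.(B); dims = 5); dims = 5);

size(C) == (1, 2, 3, 4)
```
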
Okay, yeah, I was speaking gibberish, haha. I think I get it now. Thank you!

Update: `TensorOperations` does not have a big advantage over NumPy for large tensors.

In NumPy, we have a handy feature in `np.tensordot`, as illustrated below.
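A rough sketch of the comparison (shapes are made up), contracting the last axis of `A` with the first axis of `B`:

```julia
using TensorOperations

A = rand(2, 3, 4, 5);
B = rand(5, 6, 7, 8);

# In NumPy this contraction is a one-liner picking out the axes:
#     C = np.tensordot(A, B, axes=([3], [0]))
# whereas tensorcontract requires a label for every index of both tensors:
C = tensorcontract(A, (1, 2, 3, 4), B, (4, 5, 6, 7));

size(C) == (2, 3, 4, 6, 7, 8)
```
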
The `axes` option can quickly tell the program which indices should be contracted. However, the equivalent function `tensorcontract` is hard to use when the number of indices is large: for example, I have to manually write down (1, 2, 3, 4, ...). Thus I would like to suggest an `axes` option to replace the index notation currently widely used in `TensorOperations`.