Implement 'einsum' to overcome intrinsic limitation of 'tensordot' #1224
Comments
Ah, it seems that what I want is not possible with `tensordot`.
To clarify whether the tensordot function is working correctly: when you use the correct axes arguments, is the error still raised? The call `auto c = xt::linalg::tensordot(a, b, {5, 4}, {4, 3});` would not perform the partial contraction you wanted, but it should still be valid.
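To make the distinction concrete, here is a sketch in plain standard C++ (not the xtensor API) of what a full contraction over a shared axis computes, versus the "partial contraction" that `einsum` can express but `tensordot` cannot: in the partial case the shared index survives in the output and is never summed.

```cpp
#include <array>
#include <cstddef>

// Full contraction over the shared axis j, i.e. einsum("ij,jk->ik"):
// this is what tensordot over a pair of axes computes -- j is summed away.
template <std::size_t M, std::size_t N, std::size_t P>
std::array<std::array<double, P>, M>
full_contraction(const std::array<std::array<double, N>, M>& a,
                 const std::array<std::array<double, P>, N>& b)
{
    std::array<std::array<double, P>, M> c{};
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t k = 0; k < P; ++k)
            for (std::size_t j = 0; j < N; ++j)
                c[i][k] += a[i][j] * b[j][k];
    return c;
}

// Partial contraction, i.e. einsum("ij,jk->ijk"): j is shared between
// the operands but kept in the output, so it is never summed. This is
// exactly what tensordot's axes argument cannot express.
template <std::size_t M, std::size_t N, std::size_t P>
std::array<std::array<std::array<double, P>, N>, M>
partial_contraction(const std::array<std::array<double, N>, M>& a,
                    const std::array<std::array<double, P>, N>& b)
{
    std::array<std::array<std::array<double, P>, N>, M> c{};
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t j = 0; j < N; ++j)
            for (std::size_t k = 0; k < P; ++k)
                c[i][j][k] = a[i][j] * b[j][k];
    return c;
}
```

Summing the partial result over j recovers the full contraction, which is why the two are easy to confuse.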
Yes, with the correct indices. To do the partial contraction (which I will probably do frequently) I want to consider some …
I do have ideas for xtensor einsum. I think there are very interesting possibilities for doing partial path optimization at compile time, though that would require a divergence from the numpy syntax or some way to parse the string arguments at compile time. Unfortunately I haven't had enough time to work on even an initial implementation, but it is something I would very much like to do.
I think one idea could be to pass the index arguments as template parameters, such as:
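As a rough sketch of that idea (this is a hypothetical illustration, not an xtensor API): the index labels could be non-type template parameters, so the contraction pattern is known at compile time. Here only the 2-D `"ij,jk->ik"` case is implemented; the repeated label `J` is the one that gets summed.

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch: index labels passed as template parameters.
// The label shared between the two operands (J) is contracted; the
// remaining labels (I, K) index the result. Only the 2-D matrix
// product case is implemented here.
template <char I, char J, char K,
          std::size_t M, std::size_t N, std::size_t P>
std::array<std::array<double, P>, M>
einsum(const std::array<std::array<double, N>, M>& a,
       const std::array<std::array<double, P>, N>& b)
{
    static_assert(I != J && J != K, "the contracted label must differ");
    std::array<std::array<double, P>, M> c{};
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t k = 0; k < P; ++k)
            for (std::size_t j = 0; j < N; ++j)
                c[i][k] += a[i][j] * b[j][k];  // sum over the shared label J
    return c;
}
```

A call would then look like `auto c = einsum<'i', 'j', 'k'>(a, b);`, keeping the familiar einsum notation while letting the compiler see the index structure.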
I am no expert in einsums, though. We've also collected a couple of ideas in this issue: #561
I wouldn't mind helping where I can to get `einsum` implemented. I would strongly suggest a syntax that is also feasible as a one-liner.
Why not try to integrate xtensor with taco for this? They seem to be the state of the art for tensor contraction: https://github.com/tensor-compiler/taco I discovered einsum recently for a prototype in Python and it's super powerful. xtensor seems to be the best library for a Python-like syntax in C++, and an einsum-like feature is the last thing missing. Here's a reference on einsum:
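For readers unfamiliar with the notation: every einsum string is just a loop nest, where indices missing from the output are summed and the rest are kept. Two classic cases, sketched in plain C++ for illustration (not an einsum implementation):

```cpp
#include <array>
#include <cstddef>

// "ii->" : the trace. The repeated index i does not appear in the
// output, so it is summed.
template <std::size_t N>
double trace(const std::array<std::array<double, N>, N>& a)
{
    double t = 0.0;
    for (std::size_t i = 0; i < N; ++i)
        t += a[i][i];
    return t;
}

// "ij->ji" : the transpose. Both indices appear in the output, so
// nothing is summed -- the loops only rearrange elements.
template <std::size_t M, std::size_t N>
std::array<std::array<double, M>, N>
transpose(const std::array<std::array<double, N>, M>& a)
{
    std::array<std::array<double, M>, N> t{};
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t j = 0; j < N; ++j)
            t[j][i] = a[i][j];
    return t;
}
```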
@faysou Indeed, einsum would be a great extension. I will try to invest time in the near future. Personally I'm not in favour of having a wrapper to an external library, in particular for such a core function. But we could surely try to learn from taco.
@tdegeus https://github.com/romeric/Fastor would be a good reference.
I think having a wrapper to an external library in another xtensor-xxxx repo is fine as a first implementation. However, I agree with @tdegeus that in the long run such a core feature should be implemented in xtensor itself. Regarding the implementation, the idea would be to generalize the GOTO matrix product to N dimensions (and that is not trivial). Eigen provides a nice implementation of this algorithm, and having a look at taco and Fastor might be helpful to generalize it. Thanks @faysou and @cloudhan for pointing them out, they look really great!
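The core idea behind the GOTO-style product is cache blocking: iterate over tiles so that the working set of each inner loop fits in cache. A heavily simplified, single-level-blocking sketch (the real algorithm packs panels and uses a hand-tuned micro-kernel, none of which is shown here):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Cache-blocked matrix product for row-major n x n matrices,
// accumulating C += A * B. `block` is the tile edge length; each
// innermost loop nest multiplies one tile of A by one tile of B.
void gemm_blocked(const std::vector<double>& a,
                  const std::vector<double>& b,
                  std::vector<double>& c,
                  std::size_t n, std::size_t block = 32)
{
    for (std::size_t ii = 0; ii < n; ii += block)
        for (std::size_t kk = 0; kk < n; kk += block)
            for (std::size_t jj = 0; jj < n; jj += block)
                for (std::size_t i = ii; i < std::min(ii + block, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + block, n); ++k) {
                        double aik = a[i * n + k];  // reused across the j loop
                        for (std::size_t j = jj; j < std::min(jj + block, n); ++j)
                            c[i * n + j] += aik * b[k * n + j];
                    }
}
```

Generalizing this to N-dimensional contractions means choosing which index groups play the roles of the row, column, and contraction dimensions, which is where the difficulty mentioned above comes in.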
Any movement on this recently? I for one would love to see an `einsum` implementation in xtensor.
Unfortunately no, we didn't have the time to tackle this. I hope we can do it soon.
Going to start working on this. Hoping to submit a PR in around 2.5 weeks? |
Just wanted to let people know I haven't forgotten about this. It's a bit of a side project for me, so it's taking longer than expected, but I currently have a basic implementation working for two input tensors. I still need to add functionality for a single input tensor, but this should be relatively easy at this point. Once I finish this I'll submit a PR for review, and then I plan to look at how other libraries optimize einsum and port those techniques here. If anyone has ideas for this, let me know; I've looked into it a bit, but any advice would certainly help.
@snehalv2002 thanks for the update, and no worries, this is a complicated feature to implement.
I am trying to evaluate the following partial contraction
using
But this gives me the error that
libc++abi.dylib: terminating with uncaught exception of type xt::transpose_error: Permutation does not have the same size as shape