Implement 'einsum' to overcome intrinsic limitation of 'tensordot' #1224

Open
tdegeus opened this issue Nov 7, 2018 · 15 comments
Labels: Feature Request, Help wanted (A pull-request is welcomed!)

Comments

tdegeus (Member) commented Nov 7, 2018

I am trying to evaluate the following partial contraction

c(e,q,i,j) = a(e,q,i,j,k,l) * b(e,q,l,k)

using

#include <xtensor/xtensor.hpp>
#include <xtensor/xio.hpp>
#include <xtensor/xrandom.hpp>
#include <xtensor-blas/xlinalg.hpp>

int main()
{
  xt::xtensor<double,6> a = xt::random::rand<double>({1,4,2,2,2,2});
  xt::xtensor<double,4> b = xt::random::rand<double>({1,4,2,2    });

  auto c = xt::linalg::tensordot(a,b,{5,2},{4,3});
}

But this gives me the following error:

libc++abi.dylib: terminating with uncaught exception of type xt::transpose_error: Permutation does not have the same size as shape
tdegeus (Member Author) commented Nov 7, 2018

Ah, it seems that what I want is not possible with tensordot (besides the fact that I had misinterpreted the indices, which should have been {5,4},{2,3}). This emphasizes the need for an einsum equivalent.
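
In the meantime, a possible workaround for this particular contraction is to align the axes of b with those of a, broadcast over the i,j axes, and sum explicitly over the contracted axes. A minimal sketch using the same shapes as above (a dedicated einsum would likely be faster):

#include <xtensor/xtensor.hpp>
#include <xtensor/xview.hpp>
#include <xtensor/xmath.hpp>
#include <xtensor/xrandom.hpp>

int main()
{
  xt::xtensor<double,6> a = xt::random::rand<double>({1,4,2,2,2,2});
  xt::xtensor<double,4> b = xt::random::rand<double>({1,4,2,2});

  // reorder b(e,q,l,k) -> bt(e,q,k,l) so the contracted axes line up with a
  auto bt = xt::transpose(b, {0, 1, 3, 2});
  // insert singleton axes for i and j so that broadcasting against a works
  auto bb = xt::view(bt, xt::all(), xt::all(), xt::newaxis(), xt::newaxis(), xt::all(), xt::all());
  // c(e,q,i,j) = sum_{k,l} a(e,q,i,j,k,l) * b(e,q,l,k)
  xt::xtensor<double,4> c = xt::sum(a * bb, {4, 5});
}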

@amjames
Copy link
Contributor

amjames commented Nov 8, 2018

To clarify whether the tensordot function is working correctly: when you use the correct axes arguments, is the error still raised?

If the xtensordot implementation is faithful to np.tensordot, then

auto c = xt::linalg::tensordot(a, b, {5,4}, {2,3});

should work. It would not perform the partial contraction you wanted, but it should still be a valid call.

tdegeus (Member Author) commented Nov 8, 2018

@amjames

Yes, with the correct indices tensordot acts as expected, though it does not compute what I intended ;) (with auto c = xt::linalg::tensordot(a,b,{5,4},{2,3}); the result has shape {1,4,2,2,1,4}, because the e and q axes of a and b are treated as independent rather than matched).

To do the partial contraction (which I will probably need frequently), I want to consider some einsum equivalent. Earlier you said you had ideas for this?

amjames (Contributor) commented Nov 9, 2018

I do have ideas for an xtensor einsum. I think there are very interesting possibilities for doing partial path optimization at compile time, though that would require a divergence from the numpy syntax or some way to parse the string arguments at compile time. Unfortunately I haven't had enough time to work on even an initial implementation, but it is something I would very much like to do.

wolfv (Member) commented Nov 9, 2018

I think one idea could be to pass the index arguments as template parameters, such as:

index<0> I; index<1> J; 
einsum<(I, J), I>(A)... 

I am no expert in einsums, though.

We've also collected a couple of ideas in this issue: #561
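
As a purely illustrative sketch of that idea (idx, labels and this einsum overload are hypothetical, not existing xtensor API), the tag matching could look roughly like the following, restricted here to the matrix-product pattern (i,k),(k,j) -> (i,j) and simply forwarded to tensordot once the shared label is found:

#include <type_traits>
#include <xtensor/xtensor.hpp>
#include <xtensor/xrandom.hpp>
#include <xtensor-blas/xlinalg.hpp>

// hypothetical compile-time index labels
template <std::size_t Id> struct idx {};
template <class... Is> struct labels {};

// sketch: contract a(i,k) with b(k2,j) over the shared label (requires K == K2);
// a real einsum would derive the contracted axes from arbitrary label lists
template <class I, class K, class K2, class J, class A, class B>
auto einsum(labels<I, K>, labels<K2, J>, const A& a, const B& b)
{
  static_assert(std::is_same<K, K2>::value, "shared label must match");
  return xt::linalg::tensordot(a, b, {1}, {0});
}

int main()
{
  xt::xtensor<double,2> a = xt::random::rand<double>({3, 4});
  xt::xtensor<double,2> b = xt::random::rand<double>({4, 5});
  // c(i,j) = sum_k a(i,k) * b(k,j)
  auto c = einsum(labels<idx<0>, idx<2>>{}, labels<idx<2>, idx<1>>{}, a, b);
}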

tdegeus (Member Author) commented Nov 9, 2018

I wouldn't mind helping where I can to get einsum into xtensor. Having einsum would simplify my and my students' lives so much, and would make several bits of code much more transparent.

I would strongly suggest a syntax that is also feasible as a one-liner.

tdegeus changed the title from "tensordot: permutation error" to "Implement 'einsum' to overcome intrinsic limitation of 'tensordot'" on Nov 12, 2018
faysou commented Jul 1, 2019

Why not try to integrate xtensor with taco for this, as it seems to be the state of the art for tensor contraction? https://github.com/tensor-compiler/taco
I guess the difficulty will be in switching between data structures without creating a copy.

I discovered einsum recently for a prototype in Python and it's super powerful. xtensor seems to be the best library for a Python-like syntax in C++, and an einsum-like feature is the last thing missing.

Here's a reference on einsum:
https://rockt.github.io/2018/04/30/einsum

tdegeus (Member Author) commented Jul 2, 2019

@faysou Indeed, einsum would be a great extension. I will try to invest some time in this in the near future.

Personally I'm not in favour of having a wrapper around an external library, in particular for such a core function. But we could surely try to learn from taco.

@SylvainCorlay @wolfv @JohanMabille

cloudhan commented Jul 5, 2019

@tdegeus https://github.com/romeric/Fastor would be a good reference.

JohanMabille (Member) commented Jul 5, 2019

I think having a wrapper around an external library in another xtensor-xxxx repo is fine as a first implementation; however, I agree with @tdegeus: in the long run such a core feature should be implemented in xtensor itself, without any external dependency.

Regarding the implementation, the idea would be to generalize the GOTO matrix product algorithm to N dimensions (and that is not trivial).

Eigen provides a nice implementation of this algorithm, and having a look at Taco and Fastor might be helpful to generalize it. Thanks @faysou and @cloudhan for pointing them out, they look really great!
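
For context, the kind of kernel in question looks roughly like the cache-blocked loop nest below (a simplified illustration only; the real Goto algorithm also packs panels of A and B and uses hand-tuned micro-kernels). The hard part discussed here is generalizing this blocking strategy to arbitrary N-dimensional contractions:

#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative cache-blocked matrix product: C(m x n) += A(m x k) * B(k x n),
// all row-major. Blocking keeps each tile of A, B and C in cache while it is reused.
void blocked_gemm(const std::vector<double>& A, const std::vector<double>& B,
                  std::vector<double>& C,
                  std::size_t m, std::size_t k, std::size_t n)
{
  constexpr std::size_t BS = 64;  // block size; a real kernel tunes this to the cache hierarchy
  for (std::size_t i0 = 0; i0 < m; i0 += BS)
    for (std::size_t p0 = 0; p0 < k; p0 += BS)
      for (std::size_t j0 = 0; j0 < n; j0 += BS)
        for (std::size_t i = i0; i < std::min(i0 + BS, m); ++i)
          for (std::size_t p = p0; p < std::min(p0 + BS, k); ++p)
          {
            const double aip = A[i * k + p];
            for (std::size_t j = j0; j < std::min(j0 + BS, n); ++j)
            {
              C[i * n + j] += aip * B[p * n + j];
            }
          }
}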

jlucier commented Oct 20, 2020

Any movement on this recently? I for one would love to see an einsum of one form or another in xtensor! Or is there a recommended workaround these days (beyond, say, iteratively applying tensordot)?

JohanMabille (Member) commented
Unfortunately no, we haven't had the time to tackle this yet; I hope we can get to it soon.

tdegeus added the "Feature Request" and "Help wanted (A pull-request is welcomed!)" labels on Apr 19, 2023
snehalv2002 commented Jun 14, 2023

I'm going to start working on this. Hoping to submit a PR in around 2.5 weeks.

snehalv2002 commented
Just wanted to let people know I haven't forgotten about this. It's a bit of a side project for me, so it's taking longer than expected, but I currently have a basic implementation working for two input tensors. I still need to add functionality for a single input tensor, but that should be relatively easy at this point. Once I finish that I'll submit a PR for review, and then I plan to look at how other libraries optimize einsum and bring those techniques here. If anyone has ideas for this, let me know; I've looked into it a bit, but any advice would certainly help.

JohanMabille (Member) commented
@snehalv2002 thanks for the update, and no worries; this is a complicated feature to implement.
