
Allow repeated and unmatched output indices? #9

Open
davidweichiang opened this issue Jan 3, 2022 · 2 comments
Labels
enhancement New feature or request

Comments

@davidweichiang
Contributor

Although the original einsum doesn't allow it, it would be convenient if:

  • einsum('i->ii', torch.tensor([1., 2.])) returns torch.tensor([[1., 0.], [0., 2.]]) (that is, it's like torch.diag_embed)
  • einsum('->i', torch.tensor(1.)) returns torch.tensor([1.]) (that is, it's like torch.unsqueeze)

I can imagine that the second proposal is more controversial than the first one.
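For concreteness, here is a minimal numpy sketch of what the two proposed behaviors would compute (numpy is used only for illustration; torch behaves analogously, and the helper names `diag_embed` / `unsqueeze0` are hypothetical):

```python
import numpy as np

# Proposed 'i->ii': place a vector on the diagonal of a square matrix
# (the torch.diag_embed behavior described above).
def diag_embed(x):
    out = np.zeros((x.shape[0], x.shape[0]), dtype=x.dtype)
    np.fill_diagonal(out, x)
    return out

# Proposed '->i': add a new axis of size 1 to a scalar
# (the torch.unsqueeze behavior described above).
def unsqueeze0(x):
    return np.expand_dims(x, 0)

print(diag_embed(np.array([1., 2.])))  # [[1. 0.]
                                       #  [0. 2.]]
print(unsqueeze0(np.array(1.)))        # [1.]
```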

@bdusell
Owner

bdusell commented Jan 5, 2022

I think the first one is a reasonable extension of the syntax. Are there any other possible interpretations of the i->ii syntax, like being equivalent to input[:, None] * input[None, :]? What would be the result of einsum('ijk->iik', x)?

I think the second one can be worked around easily with an unsqueeze on the return value of einsum. I think most people would consider it an error to use a variable on the RHS that isn't declared on the LHS, but I suppose it's a matter of taste.
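The two readings of i->ii mentioned above really do differ, which a quick numpy check makes explicit (again using numpy as a stand-in for torch):

```python
import numpy as np

x = np.array([1., 2.])

# Diagonal-embedding reading of 'i->ii': off-diagonal entries are zero.
diag = np.zeros((2, 2))
np.fill_diagonal(diag, x)

# Outer-product reading: input[:, None] * input[None, :],
# so entry (i, j) is x_i * x_j.
outer = x[:, None] * x[None, :]

print(diag)   # [[1. 0.], [0. 2.]]
print(outer)  # [[1. 2.], [2. 4.]]

# The workaround for '->i' that exists today: unsqueeze the scalar result.
scalar_result = np.array(1.)  # stand-in for a scalar returned by einsum
print(np.expand_dims(scalar_result, 0))  # [1.]
```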

@davidweichiang
Contributor Author

Hm, I suppose an argument against i->ii is that it's not the inverse of ii->i (diagonal extraction).
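A small numpy demonstration of this asymmetry: extracting the diagonal and then re-embedding it (under the diag_embed reading of i->ii) discards the off-diagonal entries, so the round trip is not the identity on general matrices:

```python
import numpy as np

m = np.array([[1., 5.],
              [6., 2.]])

# 'ii->i' extracts the diagonal (supported by einsum today)...
d = np.einsum('ii->i', m)
print(d)  # [1. 2.]

# ...but re-embedding it on a diagonal (the proposed 'i->ii')
# zeroes out the off-diagonal entries, so m is not recovered.
re = np.zeros_like(m)
np.fill_diagonal(re, d)
print(np.array_equal(re, m))  # False
```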

@bdusell bdusell added the enhancement New feature or request label Jun 21, 2023