I think the first one is a reasonable extension of the syntax. Are there any other possible interpretations of the i->ii syntax, like being equivalent to input[:, None] * input[None, :]? What would be the result of einsum('ijk->iik', x)?
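To make the question concrete, here is a minimal sketch (with x standing in for the input vector) showing that the two readings of i->ii disagree away from the diagonal:

```python
import torch

x = torch.tensor([1., 2.])

# Reading 1: 'i->ii' writes x onto the diagonal of a square matrix
# (the behaviour requested in the proposal below).
diag = torch.diag_embed(x)       # tensor([[1., 0.], [0., 2.]])

# Reading 2: 'i->ii' as a broadcasted product of x with itself.
outer = x[:, None] * x[None, :]  # tensor([[1., 2.], [2., 4.]])

print(torch.equal(diag, outer))  # False: the two readings differ off the diagonal
```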
I think the second one can be worked around easily with an unsqueeze on the return value of einsum. I think most people would consider it an error to use a variable on the RHS that isn't declared on the LHS, but I suppose it's a matter of taste.
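For reference, a minimal sketch of that workaround for the second case (the '->i' example from the proposal below):

```python
import torch

x = torch.tensor(1.)

# The proposed einsum('->i', x) would return tensor([1.]).
# The same result is available today by unsqueezing the value that an
# ordinary einsum (without the extra output axis) would produce:
result = x.unsqueeze(0)
print(result)  # tensor([1.])
```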
Although the original einsum doesn't allow it, it would be convenient if:

einsum('i->ii', torch.tensor([1., 2.]))

returns torch.tensor([[1., 0.], [0., 2.]]) (that is, it's like torch.diag_embed), and

einsum('->i', torch.tensor(1.))

returns torch.tensor([1.]) (that is, it's like torch.unsqueeze).

I can imagine that the second proposal is more controversial than the first one.
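To spell out the intent, below is a minimal, hypothetical sketch (einsum_ext is not an existing function) that emulates just these two proposed cases on top of the current API, deferring to torch.diag_embed and torch.unsqueeze:

```python
import torch

def einsum_ext(equation, x):
    """Hypothetical wrapper handling only the two cases proposed above."""
    lhs, rhs = equation.split('->')
    if lhs == 'i' and rhs == 'ii':
        # 'i->ii': place the vector on the diagonal of a square matrix.
        return torch.diag_embed(x)
    if lhs == '' and rhs == 'i':
        # '->i': promote a 0-dim tensor to a 1-element vector.
        return x.unsqueeze(0)
    # Everything else falls back to the existing einsum.
    return torch.einsum(equation, x)

print(einsum_ext('i->ii', torch.tensor([1., 2.])))  # tensor([[1., 0.], [0., 2.]])
print(einsum_ext('->i', torch.tensor(1.)))          # tensor([1.])
```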