Cupy backpropagation error #50
Can you provide a minimal reproducible example?
I wrote some sample code that reproduces this problem.

import torch
import torch.optim as optim
import torchsparsegradutils.cupy.cupy_sparse_solve as cupy_solve
A = torch.randn(12, 12, requires_grad=True)
b = torch.randn(12, requires_grad=True)
target = torch.randn(12, 1, requires_grad=True)
learning_rate = 0.05
optimizer = optim.SGD([A, b], lr=learning_rate)
for i in range(100):
    x = torch.unsqueeze(cupy_solve.sparse_solve_c4t(A.to_sparse(), b), 1)
    loss = torch.mean(target - x * 10)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
The issue is that we currently expect b to be a matrix in the backward pass. This is not cupy related. You can work around the issue by making b a matrix (for example of shape (12, 1)) instead of a 1-D vector.
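(A minimal sketch of that workaround, assuming sparse_solve_c4t is called as in the example above and that a CUDA/CuPy-capable setup is available: unsqueeze b into a column matrix before the solve.)

import torch
import torchsparsegradutils.cupy.cupy_sparse_solve as cupy_solve

A = torch.randn(12, 12, requires_grad=True)
b = torch.randn(12, requires_grad=True)

# Reshape the 1-D right-hand side into a (12, 1) column matrix so the
# backward pass receives a 2-D tensor, as suggested above.
x = cupy_solve.sparse_solve_c4t(A.to_sparse(), b.unsqueeze(1))
x.sum().backward()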
Thank you very much for your prompt answer, but I changed the code to the one below and the same problem still occurs.

import torch
import torchsparsegradutils.cupy.cupy_sparse_solve as cupy_solve
A = torch.randn(12, 12, requires_grad=True).to_sparse()
b = torch.randn(12, 1, requires_grad=True)
x = cupy_solve.sparse_solve_c4t(A, b)
loss = x.sum()
loss.backward()
Indeed, there was another similar issue in the cupy solver. This should now be fixed, albeit with missing unit tests. Completion of the unit tests will be tracked in #51.
I encountered this problem when using the cupy solver for backpropagation, and I don't know what's going on. There is no problem in the forward pass, but the backward pass fails with a dimension mismatch:
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
code location:
mgbx = mgradbselect * xselect
gradA = torch.sum(mgbx, dim=1)
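(For context, this error can be reproduced in isolation: summing a 1-D tensor over dim=1 raises exactly this IndexError, which is what happens at the line above when the right-hand side b is a vector rather than a matrix. A minimal illustration, separate from the library code:)

import torch

v = torch.randn(12)               # 1-D tensor: only dim 0 exists
try:
    torch.sum(v, dim=1)           # reducing over dim=1 is out of range
except IndexError as e:
    print(e)                      # Dimension out of range (expected to be in range of [-1, 0], but got 1)

m = torch.randn(12, 1)            # 2-D tensor of shape (12, 1)
print(torch.sum(m, dim=1).shape)  # torch.Size([12]) -- valid reduction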