In python3.7 "async" is used by python as a keyword which will result in a syntax error when used as a function parameter. Pytorch cuda semantics change it non_blocking async = True -> non_blocking = True
volatile was removed in PyTorch 1.0:
IBD-master/util/feature_operation.py:175: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
input_var = V(input,volatile=True)
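A hedged sketch of the fix around feature_operation.py:175 (assuming V is torch.autograd.Variable and input is already a tensor at that point):

```python
import torch
from torch.autograd import Variable as V

input = torch.randn(1, 3, 224, 224)  # hypothetical input

# Old (volatile is ignored since PyTorch 0.4 and only emits a UserWarning):
#   input_var = V(input, volatile=True)

# New: disable autograd for the whole inference block instead
with torch.no_grad():
    input_var = V(input)  # Variable is now a no-op wrapper; this just returns the tensor
```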
Cannot convert a CUDA tensor directly to a numpy array:
Traceback (most recent call last):
File "test.py", line 20, in
features, _ = fo.feature_extraction(model=model)
File "/data4/lmx/tmp/IBD-master/util/feature_operation.py", line 180, in feature_extraction
while np.isnan(output.data.max()):
File "/data4/lmx/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 450, in array
return self.numpy()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
Modifying line 180 in feature_operation.py fixes this issue: while np.isnan(output.cpu().data.max()):
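A minimal reproduction of the error and the fix (assuming a CUDA device is available; output stands in for the model output at that line):

```python
import numpy as np
import torch

output = torch.randn(1, 512, 7, 7, device="cuda")  # stand-in for the model output

# Fails: np.isnan() tries to convert the CUDA tensor via Tensor.numpy()
#   np.isnan(output.data.max())

# Works: copy the result to host memory first
print(np.isnan(output.cpu().data.max()))
```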