
Add support to py3.7 and pytorch > 1.0 #2

Open
drmeerkat opened this issue Mar 26, 2019 · 0 comments

  1. In Python 3.7, "async" is a reserved keyword, so using it as a function argument raises a syntax error. PyTorch's CUDA semantics renamed the argument to non_blocking (see the sketch below this list):
    async = True -> non_blocking = True
  2. volatile was removed in PyTorch 1.0 and now triggers the warning shown below the sketch:
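
For point 1, a minimal sketch of the rename, assuming a data tensor is moved to the GPU the way the repo's feature-extraction code does (the variable name x and the call site are illustrative, not taken from IBD):

import torch

x = torch.randn(4, 3, 224, 224)
if torch.cuda.is_available():
    # pre-Python-3.7 style, now a SyntaxError: x = x.cuda(async=True)
    x = x.cuda(non_blocking=True)  # same behaviour, renamed keyword argument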

IBD-master/util/feature_operation.py:175: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
input_var = V(input,volatile=True)
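
A minimal sketch of the replacement the warning suggests, with a stand-in model and input since the real ones live in feature_operation.py:

import torch
import torch.nn as nn

model = nn.Linear(8, 2)        # stand-in for the real network
input = torch.randn(1, 8)

# old (pre-1.0): input_var = V(input, volatile=True); output = model(input_var)
# new: volatile is gone, so run the forward pass under no_grad instead
with torch.no_grad():
    output = model(input)
print(output.requires_grad)    # False: no autograd graph was built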

  3. Cannot convert a CUDA tensor directly to a numpy array:

Traceback (most recent call last):
  File "test.py", line 20, in <module>
    features, _ = fo.feature_extraction(model=model)
  File "/data4/lmx/tmp/IBD-master/util/feature_operation.py", line 180, in feature_extraction
    while np.isnan(output.data.max()):
  File "/data4/lmx/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 450, in __array__
    return self.numpy()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Modifying line 180 in feature_operation.py as follows will fix this issue:
while np.isnan(output.cpu().data.max()):
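
For context, a small self-contained sketch of the pattern: numpy only sees host memory, so a CUDA tensor has to be copied back with .cpu() before np.isnan implicitly converts it (the tensor here is made up for illustration):

import numpy as np
import torch

output = torch.tensor([1.0, float('nan')])
if torch.cuda.is_available():
    output = output.cuda()
    # np.isnan(output.data.max()) would raise the TypeError shown in the traceback

print(np.isnan(output.cpu().data.max()))  # copies to host first, so this runs fine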
