
Help with installing Dynet - GPU #3

Closed · vahini01 opened this issue May 11, 2022 · 3 comments

@vahini01
Hi,
I am facing some issues while installing Dynet-GPU for CUDA 11.1. Could you let me know whether you used the CPU or GPU version of Dynet? If GPU, which versions of Dynet and Eigen did you use?

System Specifications:
CUDA version: 11.1

I have tried the following versions.

Build command:
To avoid the "Unsupported GPU architecture for compute_30" error during the build, I used the command below (note the backticks around `which python`, so CMake receives the interpreter path rather than the literal string):

```shell
cmake .. -DEIGEN3_INCLUDE_DIR=../eigen -DPYTHON=`which python` \
    -DBACKEND=cuda -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.1
```

| Dynet | Eigen | Error |
| --- | --- | --- |
| Latest (master branch) | 3.2 | CXX11 folder is not present |
| Latest (master branch) | 3.3 | identifier "std::round" is undefined in device code |
| Latest (master branch) | 3.3.7 | identifier "std::round" is undefined in device code |
| Latest (master branch) | 3.4 | Error while running make |
| 2.0.3 | eigen-2355b22 | Unsupported GPU architecture for compute_30 (while running make) |
| 2.1 | eigen-b2e267dc99d4.zip | Unsupported GPU architecture for compute_30 (while running make) |
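For anyone retracing these attempts, the overall build sequence looks roughly like this. This is a sketch, not a verified recipe: the clone URL is the clab/dynet GitHub repository, the Eigen archive name is the one from the table above (its download location is not given here), and directory names are illustrative.

```shell
# Fetch Dynet; place the pinned Eigen snapshot next to it as ./eigen
# (e.g. unpack eigen-b2e267dc99d4.zip so the headers end up in ./eigen).
git clone https://github.com/clab/dynet.git
cd dynet
unzip ../eigen-b2e267dc99d4.zip -d eigen   # adjust if the zip has a top-level folder

# Out-of-source build with the same configure line as above.
mkdir build && cd build
cmake .. -DEIGEN3_INCLUDE_DIR=../eigen -DPYTHON=`which python` \
    -DBACKEND=cuda -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.1
make -j"$(nproc)"
```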
vahini01 changed the title from "Help in installing Dynet - GPU" to "Help with installing Dynet - GPU" on May 11, 2022
@vahini01 (Author)

This is resolved. See clab/dynet#1652.

@g-karthik

@vahini01 @shrutirij I was able to resolve the CUDA 11 installation issue by following the instructions you shared, but I am now getting a run-time error: CUDA could not allocate any memory upfront at all.

Stack-trace referenced here.

Running training on the CPU takes over 18 hours per epoch, and I'd really like to be able to use my GPUs! If you could share any insights, that would be much appreciated!
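Not a fix, but one thing worth ruling out for an upfront-allocation failure is the size of Dynet's pre-allocated memory pool, which can be set explicitly at startup. A minimal sketch, assuming the standard Dynet runtime flags (`--dynet-mem` in MB, `--dynet-devices` to pin a device); `train.py` is a placeholder for the actual training script:

```shell
# Ask Dynet for a smaller, explicit up-front pool on a single GPU,
# instead of letting it pick a default allocation that may fail.
python train.py --dynet-mem 2048 --dynet-devices GPU:0
```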

@g-karthik
Copy link

Just a weird observation in case someone lands on this page: I asked a friend to try these steps on an instance with K80 GPUs (I was using an instance with Tesla V100s) and it worked for them, with no run-time errors. The CUDA version was the same (11.1); they were using Python 3.7 while I was on 3.8.
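Since the only visible differences between the two machines were the GPU model and the Python minor version, a small environment report makes that comparison concrete. A sketch, nothing Dynet-specific; it assumes `nvidia-smi` is on the PATH on machines that have a GPU and degrades gracefully elsewhere:

```python
import platform
import subprocess
import sys


def env_report() -> dict:
    """Collect the environment details that differed between the two machines."""
    report = {
        "python": sys.version.split()[0],   # e.g. 3.7.x vs 3.8.x
        "platform": platform.platform(),
    }
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        report["gpus"] = out.stdout.strip()  # e.g. "Tesla V100-SXM2-16GB" vs "Tesla K80"
    except (FileNotFoundError, subprocess.CalledProcessError):
        report["gpus"] = "nvidia-smi not available"
    return report


if __name__ == "__main__":
    for key, value in env_report().items():
        print(f"{key}: {value}")
```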
