
fatal error: c10/cuda/impl/cuda_cmake_macros.h: No such file or directory #202

Open
Nadir-Echo opened this issue Sep 14, 2021 · 5 comments


@Nadir-Echo

I tried to run PointRCNN but ran into some problems along the way, and I would like to ask about them.

Environment:

- Ubuntu 16.04 LTS desktop
- CUDA 10.0
- cuDNN 7.1.3
- PyTorch 1.1
- Anaconda
- Python 3.6

I run PointRCNN in a virtual environment created by Anaconda. The CUDA environment is set up: nvcc -V works, and compiling the NVIDIA_CUDA-9.0_Samples also ends with PASS. However, sh build_and_install.sh fails with "fatal error: c10/cuda/impl/cuda_cmake_macros.h: No such file or directory", so I cannot compile successfully. I don't know whether you have any ideas.

I am now wondering whether the gcc (7.5.0) that comes with Anaconda is too new a compiler version. The system gcc on my Ubuntu installation is 5.4.0, so is the virtual environment created by Anaconda using its own bundled compiler?
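For reference, a minimal check (run inside the pointRCNN env; it only assumes that torch imports) of whether the header the compiler cannot find is actually shipped by the installed torch package. A CPU-only build of PyTorch typically does not generate this header:

import os
import torch

# Directory that the extension build passes with -I (site-packages/torch/include)
include_dir = os.path.join(os.path.dirname(torch.__file__), "include")
header = os.path.join(include_dir, "c10", "cuda", "impl", "cuda_cmake_macros.h")

print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)   # None means a CPU-only build
print("header present:", os.path.exists(header))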

# Anaconda's own (base) environment
(base) wg@wg-MS-7B98:~/lyh/PointRCNN$ python
Python 3.8.8 (default, Apr 13 2021, 19:58:26) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()

# Run PointRCNN
(base) wg@wg-MS-7B98:~/lyh/PointRCNN$ conda activate pointRCNN
(pointRCNN) wg@wg-MS-7B98:~/lyh/PointRCNN$ sh build_and_install.sh
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
running install
running bdist_egg
running egg_info
writing pointnet2.egg-info/PKG-INFO
writing dependency_links to pointnet2.egg-info/dependency_links.txt
writing top-level names to pointnet2.egg-info/top_level.txt
reading manifest file 'pointnet2.egg-info/SOURCES.txt'
writing manifest file 'pointnet2.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'pointnet2_cuda' extension
gcc -pthread -B /home/wg/anaconda3/envs/pointRCNN/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/TH -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/wg/anaconda3/envs/pointRCNN/include/python3.6m -c src/pointnet2_api.cpp -o build/temp.linux-x86_64-3.6/src/pointnet2_api.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=pointnet2_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/c10/cuda/CUDAStream.h:9:0,
                 from /home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/ATen/cuda/CUDAContext.h:5,
                 from src/sampling_gpu.h:5,
                 from src/pointnet2_api.cpp:6:
/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/c10/cuda/CUDAMacros.h:4:45: fatal error: c10/cuda/impl/cuda_cmake_macros.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
running install
running bdist_egg
running egg_info
writing iou3d.egg-info/PKG-INFO
writing dependency_links to iou3d.egg-info/dependency_links.txt
writing top-level names to iou3d.egg-info/top_level.txt
reading manifest file 'iou3d.egg-info/SOURCES.txt'
writing manifest file 'iou3d.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'iou3d_cuda' extension
gcc -pthread -B /home/wg/anaconda3/envs/pointRCNN/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/TH -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/wg/anaconda3/envs/pointRCNN/include/python3.6m -c src/iou3d.cpp -o build/temp.linux-x86_64-3.6/src/iou3d.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=iou3d_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda/bin/nvcc -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/TH -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/wg/anaconda3/envs/pointRCNN/include/python3.6m -c src/iou3d_kernel.cu -o build/temp.linux-x86_64-3.6/src/iou3d_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -O2 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=iou3d_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
g++ -pthread -shared -B /home/wg/anaconda3/envs/pointRCNN/compiler_compat -L/home/wg/anaconda3/envs/pointRCNN/lib -Wl,-rpath=/home/wg/anaconda3/envs/pointRCNN/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/src/iou3d.o build/temp.linux-x86_64-3.6/src/iou3d_kernel.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-x86_64-3.6/iou3d_cuda.cpython-36m-x86_64-linux-gnu.so
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find crti.o: No such file or directory
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find -lm
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find -lpthread
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find -lc
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find crtn.o: No such file or directory
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
running install
running bdist_egg
running egg_info
writing roipool3d.egg-info/PKG-INFO
writing dependency_links to roipool3d.egg-info/dependency_links.txt
writing top-level names to roipool3d.egg-info/top_level.txt
reading manifest file 'roipool3d.egg-info/SOURCES.txt'
writing manifest file 'roipool3d.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'roipool3d_cuda' extension
gcc -pthread -B /home/wg/anaconda3/envs/pointRCNN/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/TH -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/wg/anaconda3/envs/pointRCNN/include/python3.6m -c src/roipool3d.cpp -o build/temp.linux-x86_64-3.6/src/roipool3d.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=roipool3d_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda/bin/nvcc -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/TH -I/home/wg/anaconda3/envs/pointRCNN/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/wg/anaconda3/envs/pointRCNN/include/python3.6m -c src/roipool3d_kernel.cu -o build/temp.linux-x86_64-3.6/src/roipool3d_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -O2 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=roipool3d_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
g++ -pthread -shared -B /home/wg/anaconda3/envs/pointRCNN/compiler_compat -L/home/wg/anaconda3/envs/pointRCNN/lib -Wl,-rpath=/home/wg/anaconda3/envs/pointRCNN/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/src/roipool3d.o build/temp.linux-x86_64-3.6/src/roipool3d_kernel.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-x86_64-3.6/roipool3d_cuda.cpython-36m-x86_64-linux-gnu.so
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find crti.o: No such file or directory
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find -lm
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find -lpthread
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find -lc
/home/wg/anaconda3/envs/pointRCNN/compiler_compat/ld: cannot find crtn.o: No such file or directory
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1

This is the CUDA environment I configured in /etc/profile:

PATH="$HOME/bin:$HOME/.local/bin:$PATH"
export CUDA_HOME=/usr/local/cuda-9.0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-9.0/lib64
export CUDA_ROOT=/usr/local/cuda-9.0
export PATH=$CUDA_ROOT:$CUDA_ROOT/bin:/usr/local/bin:$PATH
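A quick way to confirm what the build actually sees from this configuration (a sketch; the expected values are simply whatever /etc/profile above exports):

import os
import shutil

# Variables exported in /etc/profile; compare with what the build log reports.
for var in ("CUDA_HOME", "CUDA_ROOT", "LD_LIBRARY_PATH", "PATH"):
    print(var, "=", os.environ.get(var))

# nvcc should resolve to a binary under the CUDA installation named above.
print("nvcc resolved to:", shutil.which("nvcc"))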
@Nadir-Echo
Author

(base) wg@wg-MS-7B98:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176

@RonaldYuren

RonaldYuren commented Jan 27, 2022

I have encountered similar errors. Did you solve your problem?

@Nadir-Echo
Author

I have encountered similar errors. Did you solve your problem?

Yes, I found my problem. When I installed PyTorch I had switched to a mirror source, and that source installed the CPU-only build of PyTorch by default. After switching to the GPU build, the compilation succeeded.
You can use

import torch
flag = torch.cuda.is_available()
print(flag)

to check whether PyTorch can actually use CUDA.

Otherwise, the error may also come from:
- the CUDA version
- the environment variable settings
- the gcc version
It is recommended to check the settings above (a sanity-check sketch follows below). I hope this is of help to you.
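As a sketch of that sanity check (it assumes nvcc and gcc are on PATH; adjust to your own setup):

import subprocess
import torch

# The CUDA version PyTorch was built against; None means a CPU-only build was installed.
print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)
print("torch.cuda.is_available():", torch.cuda.is_available())

# Toolkit and compiler versions that the extension build will pick up.
for cmd in (["nvcc", "-V"], ["gcc", "--version"]):
    try:
        out = subprocess.run(cmd, stdout=subprocess.PIPE, universal_newlines=True)
        print(out.stdout.strip())
    except FileNotFoundError:
        print(" ".join(cmd), "not found on PATH")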

@RonaldYuren

Although I have installed PyTorch (torch 1.10.1; torchfile 0.1.0; torchvision 0.11.2), when I use it I get the following:

python3
import torch

flag = torch.cuda.is_available()
print(flag)
False

I also get several errors like the one below when running "bash build_and_install.sh":

/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
return _and<is_convertible<_UElements&&, _Elements>...>::value;
^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, double, long int>}; bool = true; _Elements = {at::Tensor, at::Tensor, double, long int}]’ not a return-statement
}
^

Maybe there is some path I haven't set?
Could you give an example (with an explanation) of how to check? I am new to the DL field.

@Nadir-Echo
Author


  1. https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with
  2. https://www.cnblogs.com/obarong/p/14833845.html
  3. https://zhuanlan.zhihu.com/p/344785624
You can refer to the three articles above; they contain the corresponding instructions. My setup also printed False at first. I then suspected a PyTorch version problem, so I uninstalled it and installed the GPU build of PyTorch from the official website instead of going through a mirror source, and after that it worked. You can check your own environment against the examples in those articles and rule out the causes one by one.
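As a concrete example of what to look for (a sketch only; the version strings in the comments are illustrative, not guaranteed):

import torch

print(torch.__version__)       # CPU-only pip wheels often carry a "+cpu" suffix
print(torch.version.cuda)      # e.g. "10.2" for a CUDA build, None for a CPU-only build
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))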
