error: identifier "__hmax" is undefined #462

Open
MrHandsomeljn opened this issue Aug 8, 2024 · 1 comment
MrHandsomeljn commented Aug 8, 2024

Hello, I am getting a lot of compilation errors.
First of all, my system is:

I hit an error while installing with `pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch`.
The build starts off fine:
```
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  Cloning https://github.com/NVlabs/tiny-cuda-nn/ to /tmp/pip-req-build-10mg9n7j
  Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ /tmp/pip-req-build-10mg9n7j
  Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit daf9628c8d4500aee29935e15cff58a359277ad0
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: tinycudann
  Building wheel for tinycudann (setup.py): started
```

Then the errors begin:

```
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [2146 lines of output]
    /tmp/pip-req-build-10mg9n7j/bindings/torch/setup.py:5: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
      from pkg_resources import parse_version
    Building PyTorch extension for tiny-cuda-nn version 1.7
    Obtained compute capabilities [86] from environment variable TCNN_CUDA_ARCHITECTURES
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2021 NVIDIA Corporation
    Built on Wed_Jun__9_20:32:38_PDT_2021
    Cuda compilation tools, release 11.3, V11.3.122
    Build cuda_11.3.r11.3/compiler.30059648_0
    Detected CUDA version 11.3
    Targeting C++ standard 17
    running bdist_wheel
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-cpython-310
    creating build/lib.linux-x86_64-cpython-310/tinycudann
    copying tinycudann/__init__.py -> build/lib.linux-x86_64-cpython-310/tinycudann
    copying tinycudann/modules.py -> build/lib.linux-x86_64-cpython-310/tinycudann
    running egg_info
    creating tinycudann.egg-info
    writing tinycudann.egg-info/PKG-INFO
    writing dependency_links to tinycudann.egg-info/dependency_links.txt
    writing top-level names to tinycudann.egg-info/top_level.txt
    writing manifest file 'tinycudann.egg-info/SOURCES.txt'
    reading manifest file 'tinycudann.egg-info/SOURCES.txt'
    writing manifest file 'tinycudann.egg-info/SOURCES.txt'
    copying tinycudann/bindings.cpp -> build/lib.linux-x86_64-cpython-310/tinycudann
    running build_ext
    building 'tinycudann_bindings._86_C' extension
    creating /tmp/pip-req-build-10mg9n7j/bindings/torch/dependencies
    creating /tmp/pip-req-build-10mg9n7j/bindings/torch/dependencies/fmt
    creating /tmp/pip-req-build-10mg9n7j/bindings/torch/dependencies/fmt/src
    creating /tmp/pip-req-build-10mg9n7j/bindings/torch/src
    creating /tmp/pip-req-build-10mg9n7j/bindings/torch/build/temp.linux-x86_64-cpython-310
    creating /tmp/pip-req-build-10mg9n7j/bindings/torch/build/temp.linux-x86_64-cpython-310/tinycudann
    Emitting ninja build file /tmp/pip-req-build-10mg9n7j/bindings/torch/build/temp.linux-x86_64-cpython-310/build.ninja...
    Compiling objects...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    [1/10] c++ -MMD -MF /tmp/pip-req-build-10mg9n7j/bindings/torch/dependencies/fmt/src/os.o.d -pthread -B /home/ljn/data/miniconda3/envs/CusNeRF/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/ljn/data/miniconda3/envs/CusNeRF/include -fPIC -O2 -isystem /home/ljn/data/miniconda3/envs/CusNeRF/include -fPIC -I/tmp/pip-req-build-10mg9n7j/include -I/tmp/pip-req-build-10mg9n7j/dependencies -I/tmp/pip-req-build-10mg9n7j/dependencies/cutlass/include -I/tmp/pip-req-build-10mg9n7j/dependencies/cutlass/tools/util/include -I/tmp/pip-req-build-10mg9n7j/dependencies/fmt/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/TH -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/THC -I/home/ljn/data/miniconda3/envs/CusNeRF/include -I/home/ljn/data/miniconda3/envs/CusNeRF/include/python3.10 -c -c /tmp/pip-req-build-10mg9n7j/dependencies/fmt/src/os.cc -o /tmp/pip-req-build-10mg9n7j/bindings/torch/dependencies/fmt/src/os.o -std=c++17 -DTCNN_PARAMS_UNALIGNED -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_86_C -D_GLIBCXX_USE_CXX11_ABI=0
    [2/10] /home/ljn/data/miniconda3/envs/CusNeRF/bin/nvcc -I/tmp/pip-req-build-10mg9n7j/include -I/tmp/pip-req-build-10mg9n7j/dependencies -I/tmp/pip-req-build-10mg9n7j/dependencies/cutlass/include -I/tmp/pip-req-build-10mg9n7j/dependencies/cutlass/tools/util/include -I/tmp/pip-req-build-10mg9n7j/dependencies/fmt/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/TH -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/THC -I/home/ljn/data/miniconda3/envs/CusNeRF/include -I/home/ljn/data/miniconda3/envs/CusNeRF/include/python3.10 -c -c /tmp/pip-req-build-10mg9n7j/src/object.cu -o /tmp/pip-req-build-10mg9n7j/bindings/torch/src/object.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -std=c++17 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -Xcompiler=-Wno-float-conversion -Xcompiler=-fno-strict-aliasing -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -DTCNN_PARAMS_UNALIGNED -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_86_C -D_GLIBCXX_USE_CXX11_ABI=0
    FAILED: /tmp/pip-req-build-10mg9n7j/bindings/torch/src/object.o
    /home/ljn/data/miniconda3/envs/CusNeRF/bin/nvcc -I/tmp/pip-req-build-10mg9n7j/include -I/tmp/pip-req-build-10mg9n7j/dependencies -I/tmp/pip-req-build-10mg9n7j/dependencies/cutlass/include -I/tmp/pip-req-build-10mg9n7j/dependencies/cutlass/tools/util/include -I/tmp/pip-req-build-10mg9n7j/dependencies/fmt/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/TH -I/home/ljn/data/miniconda3/envs/CusNeRF/lib/python3.10/site-packages/torch/include/THC -I/home/ljn/data/miniconda3/envs/CusNeRF/include -I/home/ljn/data/miniconda3/envs/CusNeRF/include/python3.10 -c -c /tmp/pip-req-build-10mg9n7j/src/object.cu -o /tmp/pip-req-build-10mg9n7j/bindings/torch/src/object.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -std=c++17 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -Xcompiler=-Wno-float-conversion -Xcompiler=-fno-strict-aliasing -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -DTCNN_PARAMS_UNALIGNED -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_86_C -D_GLIBCXX_USE_CXX11_ABI=0
    /tmp/pip-req-build-10mg9n7j/include/tiny-cuda-nn/common_device.h(94): error: identifier "__hmax" is undefined

    /tmp/pip-req-build-10mg9n7j/dependencies/fmt/include/fmt/core.h(288): warning: unrecognized GCC pragma

    /tmp/pip-req-build-10mg9n7j/include/tiny-cuda-nn/cuda_graph.h(137): error: identifier "cudaGraphExecUpdateResult" is undefined

    /tmp/pip-req-build-10mg9n7j/include/tiny-cuda-nn/cuda_graph.h(139): error: identifier "cudaGraphExecUpdate" is undefined

    /tmp/pip-req-build-10mg9n7j/include/tiny-cuda-nn/cuda_graph.h(142): error: identifier "cudaGraphExecUpdateSuccess" is undefined

    /tmp/pip-req-build-10mg9n7j/include/tiny-cuda-nn/gpu_memory.h(689): error: identifier "CUmemGenericAllocationHandle" is undefined

    /usr/include/c++/8/bits/ptr_traits.h(114): error: static assertion failed with "pointer type defines element_type or is like SomePointer<T, Args>"
              detected during:
```

Pages of that error follow. I get the same result if I compile the downloaded code directly; it also begins with `__hmax` is undefined.

I tried gcc/g++ 9, but that didn't help.
If I have made an operational mistake somewhere, please let me know. Best regards.
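For anyone hitting the same wall: a quick sanity check is to see *which* nvcc the build picks up, since a conda-provided nvcc shadowing the system toolkit turned out to be the cause here. This is a hedged sketch (the path patterns and messages are illustrative, not part of any official tooling):

```shell
# Report which nvcc is first on PATH and flag a conda-provided one,
# which is the failure mode described in this thread.
NVCC_PATH=$(command -v nvcc || true)
echo "nvcc: ${NVCC_PATH:-not found}"
case "$NVCC_PATH" in
  *conda*) echo "warning: nvcc comes from a conda environment" ;;
esac
echo "CUDA_HOME=${CUDA_HOME:-unset}"
```

If the reported nvcc lives under something like `.../miniconda3/envs/<env>/bin`, that matches the broken setup in this report.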

Author

MrHandsomeljn commented Aug 8, 2024

(screenshot of the successful build)
Done! Check your paths; mine look like this:

  • CUDA_HOME=/usr/local/cuda-11.8
  • GCC_HOME=/usr/bin/gcc-9
  • LD_LIBRARY_PATH=${GCC_HOME}/lib64:${CUDA_HOME}/lib64:${CUDA_HOME}/extras/CUPTI/lib64

NOTE: the cuda-nvcc package installed by conda is not supported.
I recommend noting this in the README; it would save people a lot of time.
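The path settings above can be sketched as a small shell setup. The specific locations (`/usr/local/cuda-11.8`, `/usr/bin/gcc-9`) are the ones from this comment; adjust them to your own installs, and note that prepending `CUDA_HOME/bin` to `PATH` is an assumption about how to make the system nvcc win over conda's:

```shell
# Point the build at a system CUDA toolkit instead of conda's cuda-nvcc.
export CUDA_HOME=/usr/local/cuda-11.8
export GCC_HOME=/usr/bin/gcc-9
# Ensure the system nvcc is found before any conda-provided one.
export PATH="${CUDA_HOME}/bin:${PATH}"
export LD_LIBRARY_PATH="${GCC_HOME}/lib64:${CUDA_HOME}/lib64:${CUDA_HOME}/extras/CUPTI/lib64"
```

After setting these, re-run `pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch` in the same shell.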
