# 3D Point Cloud Kernels

PyTorch CPU and CUDA kernels for spatial search and interpolation on 3D point clouds.


## Installation

Update: we now provide precompiled conda packages for the latest PyTorch/CUDA combinations (PyTorch >= 1.10.0). To install with conda:

```bash
conda install -c torch-points3d torch-points-kernels
```
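For reference, a full environment setup might look like the following sketch; the Python, PyTorch, and CUDA toolkit versions here are illustrative assumptions and should be matched to your driver and to the combinations published on the torch-points3d channel:

```bash
# Illustrative environment setup -- adjust versions to your own system.
conda create -n points3d python=3.8
conda activate points3d
conda install pytorch=1.10.0 cudatoolkit=11.3 -c pytorch
conda install -c torch-points3d torch-points-kernels
```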

Or, you can compile the wheel yourself for any PyTorch/CUDA combination; you must have a matching installation of the CUDA toolkit:

```bash
pip install torch-points-kernels
```

To force a CUDA installation (for example in Docker builds), use the `FORCE_CUDA` flag:

```bash
FORCE_CUDA=1 pip install torch-points-kernels
```

## Usage

```python
import torch
import torch_points_kernels.points_cuda
```
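As a rough illustration, a spatial search call might look like the sketch below. The `ball_query` name, argument order, and return format are assumptions, not confirmed by this README; consult the package's own documentation for the exact API.

```python
import torch
import torch_points_kernels as tpk

# Hypothetical sketch: a batch of 2 clouds with 1024 support points and
# 256 query points, 3 coordinates each. The ball_query name, signature and
# return values are assumptions -- verify against the package documentation.
device = "cuda" if torch.cuda.is_available() else "cpu"
support = torch.rand(2, 1024, 3, device=device)
queries = torch.rand(2, 256, 3, device=device)

# Gather up to 16 neighbour indices within a 0.2 radius of each query point.
neighbours = tpk.ball_query(0.2, 16, support, queries)
print(neighbours)
```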

## Build and test

```bash
python setup.py build_ext --inplace
python -m unittest
```

## Troubleshooting

### Compilation issues

Ensure that at least PyTorch 1.4.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, e.g.:

```bash
$ python -c "import torch; print(torch.__version__)"
>>> 1.4.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...
```
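If they are missing, a typical fix (assuming the CUDA toolkit lives under `/usr/local/cuda`, which may differ on your system) is:

```bash
# Add the CUDA toolkit binaries and headers to the search paths.
export PATH=/usr/local/cuda/bin:$PATH
export CPATH=/usr/local/cuda/include:$CPATH
```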

During compilation, if you see the error `error: cannot call member function 'void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable()'`, it means that your `nvcc` version is too old; it must be at least 10.1.168. To check the version:

```bash
nvcc --version
>>> V10.1.168
```

### Windows compilation

On Windows you may encounter this error when compiling:

```
error: member "torch::jit::detail::ModulePolicy::all_slots" may not be initialized
error: member "torch::jit::detail::ParameterPolicy::all_slots" may not be initialized
error: member "torch::jit::detail::BufferPolicy::all_slots" may not be initialized
error: member "torch::jit::detail::AttributePolicy::all_slots" may not be initialized
```

This requires you to edit some of your PyTorch header files; use this script as a guide.

### CUDA kernel failed: no kernel image is available for execution on the device

This can happen when trying to run the code on a different GPU than the one used to compile the `torch-points-kernels` library. Uninstall `torch-points-kernels`, clear the pip cache, and reinstall after setting the `TORCH_CUDA_ARCH_LIST` environment variable. For example, to compile with a Tesla T4 (Turing, compute capability 7.5) and run the code on a Tesla V100 (Volta, 7.0), use:

```bash
export TORCH_CUDA_ARCH_LIST="7.0;7.5"
```
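Putting the whole procedure together, an illustrative reinstall sequence (the exact flags are a suggestion; `pip cache purge` requires pip >= 20.1) might be:

```bash
# Remove the previously compiled package and any cached build.
pip uninstall -y torch-points-kernels
pip cache purge

# Rebuild for both target architectures and reinstall.
export TORCH_CUDA_ARCH_LIST="7.0;7.5"
pip install --no-cache-dir torch-points-kernels
```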

See this useful chart for more on CUDA architecture compatibility.

## Projects using these kernels

- Pytorch Point Cloud Benchmark

## Credit