
Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion (AAAI 2021)


Dockerization

This is a dockerization of JS3C-Net. As detailed below, the project depends on Ubuntu 16.04, PyTorch 1.3.1, and several third-party libraries. The Dockerfile installs all of the dependencies, including a few hot-fixes. However, there is an incompatibility between the requirements and spconv v1.0. As installed, the tests bundled in the Docker image at /spconv/tests/test_conv.py will not all pass on the GPU, but will pass on the CPU. It is unclear whether this affects the results of JS3C-Net when run on the GPU. To disable the GPU, pass --gpus -1 when running the test scripts.
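For example, an evaluation run inside the container with the GPU disabled might look like the sketch below. The image tag js3c-net is an arbitrary placeholder, the script name is taken from the evaluation sections further down this README, and you should check whether the script expects --gpus (as in the note above) or --gpu (as in the commands later in this README).

# Hedged sketch: run a test script inside the container with the GPU disabled.
# "js3c-net" is a placeholder image tag, not something defined by this repository.
$ docker run --rm -it js3c-net \
    python test_kitti_segment.py --log_dir JS3C-Net-kitti --gpus -1 --dataset val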

Building the Docker image

  1. Because the image needs access to CUDA while building, first apply the fix described in this issue comment.
  • Edit or create /etc/docker/daemon.json with the following contents:
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
  • Install nvidia-container-runtime
sudo apt-get install nvidia-container-runtime
  • Restart the Docker service: sudo systemctl restart docker.service
  2. In some instances, it appears that the install process cannot determine the GPU architecture. This can be resolved by uncommenting line 51 of the Dockerfile and setting the arch to the appropriate value for your GPU, which you can look up here. For example, 6.1 is the arch for the GTX 10X0 line of GPUs.
RUN TORCH_CUDA_ARCH_LIST="6.1"
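With the daemon configuration in place (and, if needed, the arch override above), building the image is a standard docker build from the repository root; the tag js3c-net below is an arbitrary placeholder, not something defined by the repository.

# Build from the repository root, where the Dockerfile lives.
# "js3c-net" is a placeholder tag; any name works.
$ sudo docker build -t js3c-net .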

JS3C-Net

Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion (AAAI 2021)

This repository is for JS3C-Net, introduced in the following AAAI 2021 paper [arxiv paper]:

Xu Yan, Jiantao Gao, Jie Li, Ruimao Zhang, Zhen Li*, Rui Huang and Shuguang Cui, "Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion".

[Figure: semantic segmentation and semantic scene completion demos]

If you find our work useful in your research, please consider citing:

@inproceedings{yan2021sparse,
  title={Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion},
  author={Yan, Xu and Gao, Jiantao and Li, Jie and Zhang, Ruimao and Li, Zhen and Huang, Rui and Cui, Shuguang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={4},
  pages={3101--3109},
  year={2021}
}

Getting Started

Set up

Clone the repository:

git clone https://github.com/yanx27/JS3C-Net.git

Installation instructions for Ubuntu 16.04:

  • Make sure CUDA and cuDNN are installed. Only the following configuration has been tested:
    • Python 3.6.9, PyTorch 1.3.1, CUDA 10.1;
  • Compile the customized operators by running sh complile.sh in /lib.
  • Install spconv v1.0 in /lib/spconv. We use the same version as PointGroup, so you can install it according to their instructions. Newer versions of spconv may cause issues. A consolidated sketch of these steps follows.
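Put together, the build steps above might look like the following sketch. The wheel-based spconv install is an assumption based on the usual spconv v1.0 / PointGroup procedure, and the exact wheel filename may differ.

# Hedged sketch of the install steps above, run from the repository root.
$ cd lib
$ sh complile.sh                # compile the customized operators (script name as given above)
$ cd spconv
$ python setup.py bdist_wheel   # assumed: spconv v1.0 builds a wheel, as in PointGroup's instructions
$ pip install dist/spconv-*.whl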

Data Preparation

  • The SemanticKITTI and SemanticPOSS datasets can be found at semantickitti-page and semanticposs-page.
  • Download the files related to semantic segmentation and extract everything into the same folder.
  • Use the voxelizer to generate the semantic scene completion ground truth, using the following parameters. We provide pre-processed SemanticPOSS SSC labels here.
min range: 2.5
max range: 70
future scans: 70
min extent: [0, -25.6, -2]
max extent: [51.2, 25.6,  4.4]
voxel size: 0.2
  • Finally, the dataset folder should be organized as follows (a quick check of this layout is sketched at the end of this section).
SemanticKITTI(POSS)
├── dataset
│   ├── sequences
│   │  ├── 00
│   │  │  ├── labels
│   │  │  ├── velodyne
│   │  │  ├── voxels
│   │  │  ├── [OTHER FILES OR FOLDERS]
│   │  ├── 01
│   │  ├── ... ...

  • Note that the data for the official SemanticKITTI SSC benchmark only contains 1/5 of the whole sequence, and they provide all extracted SSC data for the training set here.
  • (New) This repo uses an old version of SemanticKITTI in which a bug in the SSC data generation introduced a wrong shift along the upward direction (see issue). We therefore add an additional shift here to align with their old version of the dataset; if you use the newest version of the data, you can delete it. You can also check the alignment ratio by using --debug.
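To confirm that your data matches the tree above, listing one sequence directory is enough; the dataset path below is a placeholder.

# Placeholder path; adjust to wherever you extracted the data.
$ ls /path/to/SemanticKITTI/dataset/sequences/00
labels  velodyne  voxels  ...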

SemanticKITTI

Training

Run the following command to start training. Output (logs) will be written to ./logs/JS3C-Net-kitti/. You can skip this step if you want to use our pretrained model in ./logs/JS3C-Net-kitti/.

$ python train.py --gpu 0 --log_dir JS3C-Net-kitti --config opt/JS3C_default_kitti.yaml

Evaluation of Semantic Segmentation

Run the following command to evaluate the model on the validation or test set:

$ python test_kitti_segment.py --log_dir JS3C-Net-kitti --gpu 0 --dataset [val/test]

Evaluation of Semantic Scene Completion

Run the following command to evaluate the model on the validation or test set:

$ python test_kitti_ssc.py --log_dir JS3C-Net-kitti --gpu 0 --dataset [val/test]

SemanticPOSS

Results on SemanticPOSS can be obtained by running:

$ python train.py --gpu 0 --log_dir JS3C-Net-POSS --config opt/JS3C_default_POSS.yaml
$ python test_poss_segment.py --gpu 0 --log_dir JS3C-Net-POSS

Pretrained Model

We trained our model on a single NVIDIA Tesla V100 GPU with batch size 6. If you want to train on a TITAN GPU, use a batch size of 2. Please modify dataset_dir in args.txt to your path (a hedged example is sketched after the table below).

Model    | #Param | Segmentation | Completion | Checkpoint
JS3C-Net | 2.69M  | 66.0         | 56.6       | 18.5MB
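To run the pretrained checkpoint against your own copy of the data, updating dataset_dir might look like the sketch below. This assumes args.txt stores the option as a simple dataset_dir: <path> line, which should be verified against the actual file; the dataset path is a placeholder.

# Hedged sketch: inspect and update dataset_dir in the pretrained run's args.txt.
# Assumes a "dataset_dir: <path>" line; check the file first and adjust if the format differs.
$ grep dataset_dir logs/JS3C-Net-kitti/args.txt
$ sed -i 's|^dataset_dir.*|dataset_dir: /path/to/SemanticKITTI/dataset|' logs/JS3C-Net-kitti/args.txt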

Results on SemanticKITTI Benchmark

Quantitative results on the SemanticKITTI benchmark at submission time.

Acknowledgement

This project would not be possible without multiple great open-sourced codebases.

License

This repository is released under the MIT License (see the LICENSE file for details).
