# SpectralGV

This repository is the official open-source implementation of the paper SpectralGV, RA-L 2023:

**Spectral Geometric Verification: Re-Ranking Point Cloud Retrieval for Metric Localization**
Kavisha Vidanapathirana, Peyman Moghadam, Sridha Sridharan, Clinton Fookes
IEEE Robotics and Automation Letters (RA-L), Volume 8, Issue 5, May 2023. arXiv, IEEEXplore

This repository contains the code for:

- A quick demo of SpectralGV (without the need for datasets or architecture dependencies).
- Integration of SpectralGV re-ranking with 3 open-source metric localization architectures (EgoNN, LCDNet and LoGG3D-Net).
- Evaluation of place recognition and metric localization with and without re-ranking on 5 open-source datasets (MulRan, Apollo-Southbay, KITTI, ALITA and KITTI-360).

## Method overview

SpectralGV is an efficient spectral method for geometric verification based re-ranking of point clouds. SpectralGV allows integration with any metric-localization architecture that extracts both local and global features for a given point cloud, without modifying the architectures and without further training of learning-based methods.
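For intuition, the sketch below shows the core spectral-scoring step for a single query/candidate pair: build a pairwise geometric-consistency matrix from putative local-feature correspondences, then measure the strength of its dominant consistent cluster via the leading eigenvector. This is a minimal illustration with made-up names and parameters, not the repository's implementation (see `demo_spectral_gv.py` for the real code):

```python
# Minimal sketch of the spectral-scoring idea (after spectral matching).
# Illustrative only -- names, shapes and sigma are not the repo's API.
import numpy as np

def spectral_score(pts_query, pts_cand, sigma=0.3):
    """Geometric-consistency score for one query/candidate pair.

    pts_query, pts_cand: (N, 3) arrays of keypoint coordinates for N
    putative correspondences obtained by matching local features.
    """
    # Intra-cloud pairwise distances: a rigid transform preserves them,
    # so mutually consistent correspondences have similar distances.
    dq = np.linalg.norm(pts_query[:, None] - pts_query[None, :], axis=-1)
    dc = np.linalg.norm(pts_cand[:, None] - pts_cand[None, :], axis=-1)
    # Pairwise compatibility matrix: high where the distances agree.
    M = np.exp(-((dq - dc) ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(M, 0.0)
    # Leading eigenvector via power iteration; its Rayleigh quotient
    # measures the strength of the dominant consistent cluster.
    v = np.ones(len(M)) / np.sqrt(len(M))
    for _ in range(20):
        v = M @ v
        v /= np.linalg.norm(v) + 1e-12
    return float(v @ M @ v)

# Re-ranking: compute this score for each top-k retrieved candidate
# and sort the candidates by score.
```

Because each candidate is scored independently from a small compatibility matrix, the top-k candidates can be verified in parallel, which is what keeps SpectralGV's runtime nearly constant in the demo below.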

## Demo

### Set up base environment

- Create the conda environment:

```bash
conda create --name sgv_env python=3.9.4
conda activate sgv_env
```

- Install PyTorch with a cudatoolkit version that matches your system (see https://pytorch.org for the correct command):

```bash
pip3 install torch torchvision torchaudio
# Make sure the PyTorch CUDA version matches the output of 'nvcc --version'
pip install open3d
```

- Test installation using:

```bash
python -c "import torch ; import open3d as o3d ; print(torch.cuda.is_available())"
```

Now you can run the quick demo:

### Demo

This demo re-creates the results of Tab. 2 and Fig. 3 of our paper on the KITTI-360 09 dataset. It outputs place recognition results with and without re-ranking for SpectralGV, RANSAC-GV and alpha-QE.

- Download the demo data (~75 MB) from Dropbox:

```bash
cd demo
wget --output-document demo_pickles.zip https://dl.dropboxusercontent.com/s/4elvyix9pp36469/demo_pickles.zip?dl=0
unzip demo_pickles.zip
```

Run the demo:

- SpectralGV:

```bash
python demo_spectral_gv.py --n_topk 2
python demo_spectral_gv.py --n_topk 20
python demo_spectral_gv.py --n_topk 200
```

- RANSAC-GV:

```bash
python demo_ransac_gv.py --n_topk 2
python demo_ransac_gv.py --n_topk 20
python demo_ransac_gv.py --n_topk 200
```

- alpha-QE:

```bash
python demo_alpha_qe.py --n_topk 2
python demo_alpha_qe.py --n_topk 20
python demo_alpha_qe.py --n_topk 200
```

Observations:

- The two geometric verification methods (SpectralGV and RANSAC-GV) show increasing performance with increasing `n_topk`.
- alpha-QE shows decreasing performance with increasing `n_topk` and is therefore not suitable for point cloud re-ranking.
- Of the two geometric verification methods, RANSAC-GV is inefficient at high `n_topk` values, which limits its practical use; SpectralGV maintains a near-constant runtime.

## Prerequisites

### Environment dependencies

This project has been tested on a system with Ubuntu 22.04 and an RTX 2080. Set up the requirements as follows:

- Please set up the base conda environment and test the quick demo above.
- To recreate other results in the paper, the following dependencies are required.

**Add dependencies for LoGG3D-Net:**

```bash
pip install torchpack
```

- Install torchsparse-1.4.0:

```bash
sudo apt-get install libsparsehash-dev
pip install --upgrade git+https://github.com/mit-han-lab/[email protected]
conda install mpi4py
conda install openmpi
```

- Test installation using:

```bash
python -c "import torch ; import torchsparse ; print('OK')"
```

Note: If stuck, see https://github.com/csiro-robotics/LoGG3D-Net for more details.

**Add dependencies for EgoNN:**

```bash
conda install openblas-devel -c anaconda
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"
```

- Install other dependencies:

```bash
pip install pytorch_metric_learning python-lzf wandb
```

- Test installation using:

```bash
python -c "import torch ; import MinkowskiEngine as ME ; print('OK')"
```

Note: If stuck, see https://github.com/jac99/Egonn for more details.

**Add dependencies for LCDNet:**

Note: You will need to create a separate conda environment for LCDNet. See https://github.com/robot-learning-freiburg/LCDNet for details.


## Datasets

This project supports the following 5 open-source LiDAR datasets:

### MulRan

We use the sequences Sejong and DCC.

- Download the MulRan dataset: ground truth data (`*.csv`) and LiDAR point clouds (`Ouster.zip`).
### Apollo Southbay

The SunnyvaleBigLoop trajectory is used for evaluation; the other 5 trajectories (BaylandsToSeafood, ColumbiaPark, Highway237, MathildaAVE, SanJoseDowntown) are used for training.

- Download the Apollo Southbay dataset.
### KITTI odometry

We use the improved ground truth poses provided with the SemanticKITTI dataset.

- Download the SemanticKITTI dataset (velodyne point clouds and calibration data for poses).
### ALITA

We evaluate on the data released for the ICRA 2022 UGV Challenge and use validation sequence 5.

- Download the ALITA dataset.
### KITTI-360

- Download the KITTI-360 dataset (raw velodyne scans, calibrations and vehicle poses).

## Generate test sets

Our test set for each dataset is defined in pickle format.

Generate pickles for the 5 datasets:

We follow the pickle generation convention of EgoNN.

```bash
cd datasets/mulran
# For Sejong:
python generate_evaluation_sets.py --dataset_root <mulran_dataset_root_path>
# For DCC:
python generate_evaluation_sets.py --dataset_root <mulran_dataset_root_path>

cd ../southbay
python generate_evaluation_sets.py --dataset_root <apollo_southbay_dataset_root_path>

cd ../kitti
python generate_evaluation_sets.py --dataset_root <kitti_dataset_root_path>

cd ../alita
python generate_evaluation_sets.py --dataset_root <alita_dataset_root_path>

cd ../kitti360
python generate_evaluation_sets.py --dataset_root <kitti360_dataset_root_path>
```
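To sanity-check a generated test set, you can load it like any Python pickle. The sketch below is a minimal illustration; the file name is a placeholder, and the exact layout follows EgoNN's pickle convention:

```python
# Minimal sanity check for a generated evaluation pickle.
# 'evaluation_set.pickle' is a placeholder name; use the file produced
# by the generation script for your dataset.
import pickle

with open('evaluation_set.pickle', 'rb') as f:
    eval_set = pickle.load(f)

# Printing the top-level type confirms the file was written correctly;
# the internal structure follows EgoNN's convention.
print(type(eval_set))
```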

## Evaluation

This section re-creates the results of Tab. 3, Tab. 4 and Tab. 5 in our paper.

We integrate SpectralGV with the following architectures:

### EgoNN + SpectralGV

- Clone the EgoNN codebase into `evaluation/EgoNN/`:

```bash
cd evaluation/EgoNN
git clone https://github.com/jac99/Egonn.git
```

- Copy our re-ranking eval script into the EgoNN code base:

```bash
cp -r SGV_EgoNN/ Egonn/eval/
cd Egonn/eval/SGV_EgoNN/
```

- Evaluate place recognition and metric localization with and without SpectralGV re-ranking:

```bash
python eval_egonn_sgv.py --dataset_type <dataset_eg_'kitti'> --dataset_root <dataset_root_path>
```
### LoGG3D-Net + SpectralGV

- Download the pre-trained model (~103 MB) from Dropbox:

```bash
cd evaluation/LoGG3D-Net/
wget --output-document logg3d.pth https://dl.dropboxusercontent.com/s/2mghsmkbz8p7ntx/logg3d.pth?dl=0
```

- Evaluate place recognition and metric localization with and without SpectralGV re-ranking:

```bash
python eval_logg3d_sgv.py --dataset_type <dataset_eg_'kitti'> --dataset_root <dataset_root_path>
```

Note: LoGG3D-Net does not support the parallel implementation of SpectralGV because it outputs a varying number of local features/points for each point cloud.
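To illustrate the constraint (with made-up shapes, not the repository's tensors): fixed-size local feature sets can be stacked into a single batch and verified in parallel, whereas variable-size outputs force per-candidate processing.

```python
import torch

# Fixed-size local feature sets (e.g. 128 keypoints x 32 dims) stack
# into one (k, 128, 32) tensor, so all k candidates can be scored in
# a single batched pass.
fixed = [torch.randn(128, 32) for _ in range(8)]
batch = torch.stack(fixed)  # shape: (8, 128, 32)

# Variable-size sets, as produced by LoGG3D-Net, cannot be stacked:
varying = [torch.randn(n, 32) for n in (100, 137, 91)]
# torch.stack(varying) would raise a size-mismatch error, so each
# candidate must be verified sequentially instead.
```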

### LCDNet + SpectralGV

- Clone the LCDNet codebase into `evaluation/LCDNet/`:

```bash
cd evaluation/LCDNet
git clone https://github.com/robot-learning-freiburg/LCDNet.git
```

- Copy our re-ranking eval script into the LCDNet code base:

```bash
cp -r SGV_LCDNet/ LCDNet/evaluation/
cd LCDNet/
```

- Download the pre-trained model (~142 MB) from Dropbox:

```bash
wget --output-document lcdnet.tar https://dl.dropboxusercontent.com/s/52sis2grvxias7u/lcdnet.tar?dl=0
mkdir checkpoints && mv lcdnet.tar ./checkpoints/
```

- Evaluate place recognition and metric localization with and without SpectralGV re-ranking:

```bash
python evaluation/SGV_LCDNet/eval_lcdnet_sgv.py --dataset_type <dataset_eg_'kitti'> --dataset_root <dataset_root_path>
```


## Citation

If you find this work useful in your research, please cite:

```bibtex
@article{vidanapathirana2023sgv,
  author={Vidanapathirana, Kavisha and Moghadam, Peyman and Sridharan, Sridha and Fookes, Clinton},
  journal={IEEE Robotics and Automation Letters},
  title={Spectral Geometric Verification: Re-Ranking Point Cloud Retrieval for Metric Localization},
  year={2023},
  publisher={IEEE},
  volume={8},
  number={5},
  pages={2494-2501},
  doi={10.1109/LRA.2023.3255560}}
```

## Acknowledgement

Third-party functions are acknowledged at their respective function definitions or in the corresponding README files.

SpectralGV was mainly inspired by SpectralMatching, PointDSC and SC2_PCR. The evaluation scripts used in this project are adapted from EgoNN.

## Contact

For questions/feedback, please contact the authors.