BEV-MAE: Bird's Eye View Masked Autoencoders for Point Cloud Pre-training in Autonomous Driving Scenarios

This is the official implementation of BEV-MAE.

Introduction

Existing LiDAR-based 3D object detection methods for autonomous driving scenarios mainly adopt the training-from-scratch paradigm. Unfortunately, this paradigm relies heavily on large-scale labeled data, whose collection can be expensive and time-consuming. Self-supervised pre-training is an effective and desirable way to alleviate this dependence on extensive annotated data. In this work, we present BEV-MAE, an efficient masked autoencoder pre-training framework for LiDAR-based 3D object detection in autonomous driving. Specifically, we propose a bird's eye view (BEV) guided masking strategy that guides the 3D encoder to learn feature representations from a BEV perspective and avoids complex decoder designs during pre-training. Furthermore, we introduce a learnable point token so that, on masked point cloud inputs, the 3D encoder maintains a receptive field size consistent with fine-tuning. Finally, exploiting a property of outdoor point clouds in autonomous driving scenarios, i.e., that the point clouds of distant objects are sparser, we propose point density prediction to enable the 3D encoder to learn location information, which is essential for object detection.
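
To make these components concrete, here is a minimal PyTorch-style sketch of BEV-guided masking and a simple point-density target. All names, shapes, and hyperparameters (BEV grid size, mask ratio, density normalization) are illustrative assumptions, not the repository's actual API.

```python
# Illustrative sketch only: names, shapes, and hyperparameters are assumptions,
# not the repository's API.
import torch

def bev_guided_mask(voxel_coords, bev_size=(188, 188), mask_ratio=0.7):
    """Sample the mask on the bird's eye view (x-y) grid so that every voxel
    in a masked BEV cell is dropped, regardless of height.

    voxel_coords: (N, 3) integer voxel indices ordered (z, y, x), following
    OpenPCDet's convention. Returns a boolean keep-mask of shape (N,).
    """
    num_cells = bev_size[0] * bev_size[1]
    # Randomly mask a fixed fraction of BEV cells.
    masked = torch.zeros(num_cells, dtype=torch.bool)
    masked[torch.randperm(num_cells)[:int(num_cells * mask_ratio)]] = True
    # Drop a voxel when its (y, x) BEV cell is masked.
    cell_id = (voxel_coords[:, 1] * bev_size[1] + voxel_coords[:, 2]).long()
    return ~masked[cell_id]

def point_density_target(points_per_voxel, max_points=5):
    """Normalized point count per voxel, a simple stand-in for the density
    regression target (distant objects yield sparser voxels)."""
    return (points_per_voxel.float() / max_points).clamp(max=1.0)
```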

Model

We release the pre-trained weights of VoxelNet on the Waymo dataset.

| Pre-trained 3D backbone | Dataset | Weights |
|---|---|---|
| VoxelNet | Waymo (20% data) | Google_drive |
| VoxelNet | Waymo (full data) | Google_drive |

Our code is based on OpenPCDet (version 0.5). To use our pre-trained weights, please refer to INSTALL.md for installation and follow the instructions in GETTING_STARTED.md to train the model.
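
As a hedged illustration, the snippet below shows one way to initialize an OpenPCDet detector from the released backbone weights before fine-tuning. The 'model_state' key follows OpenPCDet's checkpoint convention; the exact keys inside the released files are an assumption.

```python
# Sketch of partially loading pre-trained backbone weights into a detector.
# Checkpoint key names are assumptions based on OpenPCDet's convention.
import torch

def load_pretrained_backbone(model, ckpt_path):
    ckpt = torch.load(ckpt_path, map_location='cpu')
    state = ckpt.get('model_state', ckpt)  # OpenPCDet-style checkpoint
    # Keep only entries whose names and shapes match the current detector
    # (e.g. the VoxelNet 3D backbone); detection heads start from scratch.
    model_state = model.state_dict()
    matched = {k: v for k, v in state.items()
               if k in model_state and v.shape == model_state[k].shape}
    model.load_state_dict(matched, strict=False)
    return model
```

OpenPCDet's training script also exposes a --pretrained_model argument for initializing from such weights.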

Training

See the scripts in tools/run.sh.

Acknowledgements

BEV-MAE is based on OpenPCDet. It is also greatly inspired by the open-source code of Occupancy-MAE.

Citation

If BEV-MAE is useful or relevant to your research, please recognize our contributions by citing our paper:

@inproceedings{lin2024bevmae,
  title={BEV-MAE: Bird's Eye View Masked Autoencoders for Point Cloud Pre-training in Autonomous Driving Scenarios},
  author={Lin, Zhiwei and Wang, Yongtao and Qi, Shengxiang and Dong, Nan and Yang, Ming-Hsuan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2024}
}

Contact Us

If you have any problems with this work, please feel free to reach out to us at [email protected].

This project is free for academic research purposes only; commercial use requires authorization. For commercial licensing, please contact [email protected].
