This repository contains the code for the paper "Semi-Supervised Online Continual Learning for 3D Object Detection in Mobile Robotics."
Continual learning addresses the challenge of acquiring and retaining knowledge over time across multiple tasks and environments. Previous research primarily focuses on offline settings, where models learn from an increasing number of tasks whose samples are paired with ground-truth annotations. In this work, we focus on an unsolved, challenging, yet practical scenario: semi-supervised online continual learning for autonomous driving and mobile robotics. In our setting, models must learn new distributions from streaming unlabeled samples and perform 3D object detection as soon as each LiDAR point cloud arrives. We conduct experiments on the KITTI dataset, our newly built IUSL dataset, and the Canadian Adverse Driving Conditions (CADC) dataset. The results indicate that our method strikes a balance between rapid adaptation and knowledge retention, demonstrating its effectiveness in the dynamic and complex environments of autonomous driving and mobile robotics.
- The collected IUSL dataset can be obtained via Baidu Netdisk (extraction code: iusl).
We use the following implementations:
- PointPillars from mmdetection3d v1.0.0rc6.
- The streaming learning classifier AMF (Aggregated Mondrian Forest), implemented in river (a minimal usage sketch follows this list).
- Patchwork++ to remove ground points (repository).
- The pretrained YOLOv8 as the image detector (repository); a minimal usage sketch follows this list.
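
For reference, the snippet below is a minimal sketch of how river's `AMFClassifier` (Aggregated Mondrian Forest) learns from a data stream one sample at a time. The feature names and values are hypothetical placeholders for illustration, not the actual object features extracted in OCL3D.

```python
# Minimal sketch: streaming classification with river's AMFClassifier.
# The feature names/values below are hypothetical placeholders.
from river import forest

model = forest.AMFClassifier(n_estimators=10, seed=42)

# In online learning, samples arrive as a stream; here we fake a
# tiny stream of (features, label) pairs.
stream = [
    ({"height": 1.6, "length": 3.9, "width": 1.7}, "car"),
    ({"height": 1.7, "length": 0.6, "width": 0.7}, "pedestrian"),
]

for x, y in stream:
    y_pred = model.predict_one(x)  # predict first (test-then-train) ...
    model.learn_one(x, y)          # ... then update incrementally
```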
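
Likewise, here is a minimal sketch of running a pretrained YOLOv8 detector through the ultralytics package; the weights file and image path are placeholders, and this is not necessarily how OCL3D invokes the detector internally.

```python
# Minimal sketch: 2D detection with a pretrained YOLOv8 model via ultralytics.
# "yolov8n.pt" and "image.jpg" are placeholder names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")    # loads (downloading if needed) pretrained weights
results = model("image.jpg")  # inference on a single image

for box in results[0].boxes:
    # Pixel-space xyxy coordinates, confidence score, and class id
    print(box.xyxy, box.conf, box.cls)
```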
- Clone the repository:

  ```bash
  git clone https://github.com/npu-ius-lab/OCL3D.git catkin_ws/src
  ```
- Navigate to the workspace and build:

  ```bash
  cd catkin_ws
  catkin_make
  ```
We provide two scripts to run the OCL3D method:
- Run OCL3D on the KITTI tracking dataset using PointNet features and PointPillars:

  ```bash
  ./run_kitti_pillars_pointnet.sh
  ```
- Run OCL3D on the IUSL dataset using handcrafted features:

  ```bash
  ./run_iusl_hand.sh
  ```
- Before running the scripts, please carefully check the ROS workspace path for each package.
- Patchwork++ cannot be compiled with OCL3D in the same workspace. Please place them in two separate workspaces.