Runsong Zhu, Shi Qiu, Qianyi Wu, Ka-Hei Hui, Pheng-Ann Heng, Chi-Wing Fu
TL;DR: Our paper presents a novel "probabilistic" fusion method to lift 2D predictions to 3D for effective and robust instance segmentation, achieving state-of-the-art performance.
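For intuition only, below is a minimal, hypothetical sketch of what a probabilistic contrastive objective over Gaussian pixel embeddings could look like. The function names, the diagonal-Gaussian parameterization, and the Bhattacharyya distance are illustrative assumptions and do not reproduce the exact formulation used in the paper or this codebase.

```python
import torch

def bhattacharyya_dist(mu1, var1, mu2, var2):
    """Bhattacharyya distance between diagonal Gaussians N(mu1, var1) and N(mu2, var2)."""
    var = 0.5 * (var1 + var2)
    mean_term = 0.125 * ((mu1 - mu2) ** 2 / var).sum(-1)
    det_term = 0.5 * (torch.log(var).sum(-1)
                      - 0.5 * (torch.log(var1).sum(-1) + torch.log(var2).sum(-1)))
    return mean_term + det_term

def probabilistic_contrastive_loss(mu, var, instance_ids, margin=1.0):
    """Toy distribution-aware contrastive loss (illustrative assumption, not the paper's loss).

    mu, var: (N, D) means and diagonal variances of per-pixel Gaussian embeddings.
    instance_ids: (N,) integer instance labels from the 2D predictions.
    """
    # Pairwise distances between all N Gaussian embeddings: (N, N).
    d = bhattacharyya_dist(mu[:, None], var[:, None], mu[None, :], var[None, :])
    same = instance_ids[:, None] == instance_ids[None, :]
    pull = d[same].mean()                                  # attract same-instance pairs
    push = torch.clamp(margin - d[~same], min=0.0).mean()  # repel different-instance pairs
    return pull + push
```

As a rough usage sketch, calling probabilistic_contrastive_loss(mu, var, ids) with mu and var of shape (N, 7) would mirror the --feature_dimension 7 setting used in the evaluation command below.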
You can download the Messy Rooms (MOS) dataset from here. For all other datasets, refer to the instructions provided in Panoptic-Lifting.
We provide pretrained checkpoints for the MOS dataset, which you can download from here.
Download the pretrained checkpoints and place them in ./code. Then, run the following command to evaluate the pretrained models:
cd code && python inference_test/MOS_covariance/covariance_001_clamp/bash_inference_training_view_official_v2_learned_covariance_v1.py --output_dir PCF_res --feature_dimension 7 --export_table_name PCF_res
To train the model yourself, run:
cd code && bash train.sh
If you find this work useful in your research, please cite our paper:
@inproceedings{zhu2025pcf,
title={PCF-Lift: Panoptic Lifting by Probabilistic Contrastive Fusion},
author={Zhu, Runsong and Qiu, Shi and Wu, Qianyi and Hui, Ka-Hei and Heng, Pheng-Ann and Fu, Chi-Wing},
booktitle={European Conference on Computer Vision},
pages={92--108},
year={2025},
organization={Springer}
}
This code is based on the Contrastive Lift, Panoptic-Lifting, and TensoRF codebases. We thank the authors for releasing their code.