Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast
Guofan Fan, Zekun Qi, Wenkai Shi and Kaisheng Ma
This repository contains the implementation of the paper Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast.
For those interested, we have released a pre-release version of the code for the preprint paper. Use it with care: it has not been fully cleaned up and may be modified in the final version.
- 🎉 July, 2024: Point-GCC has been accepted by ACM MM 2024. We will release the official version soon; please check this homepage for updates.
- 🔥 Oct, 2023: For those interested, we have released a pre-release version of the code for the preprint paper.
- 💥 Jun, 2023: Check out our previous works Language-Assisted 3D, ACT, and ReCon, which were accepted by AAAI, ICLR, and ICML 2023, respectively.
If you find our work helpful for your research, please consider citing our paper.
@article{point-gcc,
title={{Point-GCC:} Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast},
author={Fan, Guofan and Qi, Zekun and Shi, Wenkai and Ma, Kaisheng},
journal={arXiv preprint arXiv:2305.19623},
year={2023}
}
Our code is based on MMDetection3D. Thanks for their wonderful work!
Point-GCC is released under the MIT License. See the LICENSE file for more details.