Vinayak Gupta1, Rahul Goel2, Dhawal Sirikonda2, P. J. Narayanan2
1Indian Institute of Technology Madras, 2International Institute of Information Technology, Hyderabad
This repository is built on top of GNT's official repository.
- News! GSN is accepted at AAAI 2024 🎉.
Traditional Radiance Field (RF) representations capture details of a specific scene and must be trained afresh on each scene. Semantic feature fields have been added to RFs to facilitate several segmentation tasks. Generalised RF representations learn the principles of view interpolation: a generalised RF can render new views of an unknown and untrained scene, given a few views. We present a way to distil feature fields into the generalised GNT representation. Our GSN representation generates new views of unseen scenes on the fly, along with consistent per-pixel semantic features. This enables multi-view segmentation of arbitrary new scenes. We show different semantic features being distilled into generalised RFs. Our multi-view segmentation results are on par with methods that use traditional RFs. GSN closes the gap between standard and generalisable RF methods significantly.
Clone this repository:
git clone https://github.com/Vinayak-VG/GSN.git
cd GSN/
The code is tested with Python 3.8, CUDA 11.1, and PyTorch 1.10.1. Additional dependencies (installable with pip, as shown after the list) include:
torchvision
ConfigArgParse
imageio
matplotlib
numpy
opencv_contrib_python
Pillow
scipy
imageio-ffmpeg
lpips
scikit-image
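A minimal way to install the dependencies above in one step, assuming no pinned requirements file is used (package names taken verbatim from the list; add version pins if you hit incompatibilities):

pip install torchvision ConfigArgParse imageio matplotlib numpy opencv_contrib_python Pillow scipy imageio-ffmpeg lpips scikit-image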
We reuse the training and evaluation datasets from IBRNet. All datasets must be downloaded to a data/ directory within the project folder and must follow the organization below.
├── data/
│   ├── ibrnet_collected_1/
│   ├── ibrnet_collected_2/
│   ├── real_iconic_noface/
│   ├── nerf_llff_data/
Please refer to IBRNet's repository for instructions on downloading and preparing the data. For convenience, we consolidate the instructions below:
mkdir data
cd data/
# IBRNet captures
gdown https://drive.google.com/uc?id=1rkzl3ecL3H0Xxf5WTyc2Swv30RIyr1R_
unzip ibrnet_collected.zip
# LLFF
gdown https://drive.google.com/uc?id=1ThgjloNt58ZdnEuiCeRf9tATJ-HI0b01
unzip real_iconic_noface.zip
## [IMPORTANT] remove scenes that appear in the test set
cd real_iconic_noface/
rm -rf data2_fernvlsb data2_hugetrike data2_trexsanta data3_orchid data5_leafscene data5_lotr data5_redflower
cd ../
## move the held-out evaluation scenes into test_data/ (do not delete these in the step above)
mkdir test_data/
mv real_iconic_noface/data2_chesstable test_data/
mv real_iconic_noface/data2_colorfountain test_data/
mv real_iconic_noface/data4_shoerack test_data/
mv real_iconic_noface/data4_stove test_data/
# LLFF dataset (eval)
gdown https://drive.google.com/uc?id=16VnMcF1KJYxN9QId6TClMsZRahHNMW5g
unzip nerf_llff_data.zip
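As a quick sanity check before moving on (the commands above leave you inside data/), the final layout should match the tree shown earlier plus the test_data/ folder created above:

# still inside data/: verify the final layout
ls
# expected directories: ibrnet_collected_1  ibrnet_collected_2  real_iconic_noface  nerf_llff_data  test_data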
The code for feature extraction is taken from N3F; thanks to the original authors for providing it. Follow the instructions below to prepare the features:
- To download the DINO checkpoint, run the following commands:

cd feature_extractor
bash download_dino.sh
- To extract the DINO features and place them in the correct directory, run the following command. Note that we use images downscaled by a factor of 8.

bash extract.sh
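Optionally, you can confirm that the extraction wrote features to disk before training. A hedged check (the DiNOFeats directory name is inferred from the --folder_name flag used in the training and evaluation commands below, so adjust the pattern if your layout differs):

# look for the extracted feature directory (name assumed from --folder_name below)
find . -type d -iname "*dinofeats*"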
# Stage I: train the generalisable RF backbone
# (set --rootdir to the path of your own clone)
CUDA_VISIBLE_DEVICES=0 python3 train.py --config configs/transibr_full.txt --expname gsn --rootdir /home/vinayak/GSN/
# Stage II: distil the DINO feature field on top of the Stage I model
CUDA_VISIBLE_DEVICES=0 python3 train.py --config configs/transibr_full.txt --expname gsn --rootdir /home/vinayak/GSN/ --dinofield --dino_dim 64 --folder_name DiNOFeats
You can also download our pre-trained weights for direct model evaluation from (google drive).
python3 -W ignore eval_transibr.py --config configs/transibr_full.txt --expname gsn --run_val --chunk_size 500 --folder_name DiNOFeats --dinofield --eval_dataset test_data --eval_scenes data2_chesstable --render_stride 2 --llffhold 4
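To evaluate the other held-out scenes as well, the same command can be looped over the four scenes moved into test_data/ earlier (a sketch assuming the same flags apply to every scene):

for scene in data2_chesstable data2_colorfountain data4_shoerack data4_stove; do
    python3 -W ignore eval_transibr.py --config configs/transibr_full.txt --expname gsn --run_val --chunk_size 500 --folder_name DiNOFeats --dinofield --eval_dataset test_data --eval_scenes "$scene" --render_stride 2 --llffhold 4
done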
The code was recently tidied up for release and may still contain minor bugs. Please feel free to open an issue.
If you find our work or code useful in your research, please cite our paper:
@article{gupta2024gsn,
  title={GSN: Generalisable Segmentation in Neural Radiance Field},
  author={Gupta, Vinayak and Goel, Rahul and Sirikonda, Dhawal and Narayanan, P. J.},
  journal={arXiv preprint arXiv:2402.04632},
  year={2024}
}