
GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views (ECCV 2024)

Vinayak Gupta1*, Rongali Girish1*, Mukund Varma T2*, Ayush Tewari3, Kaushik Mitra1

1 Indian Institute of Technology Madras, 2 University of California, San Diego, 3 Massachusetts Institute of Technology, Cambridge

* Equal Contributions

Project Page | Paper

This repository is built on top of GNT's official repository.

  • News! GAURA is accepted at ECCV 2024 🎉.

Introduction

Neural rendering methods can achieve near-photorealistic image synthesis of scenes from posed input images. However, when the images are imperfect, e.g., captured in very low-light conditions, state-of-the-art methods fail to reconstruct high-quality 3D scenes. Recent approaches have tried to address this limitation by modeling various degradation processes in the image formation model; however, this limits them to specific image degradations. In this paper, we propose a generalizable neural rendering method that can perform high-fidelity novel view synthesis under several degradations. Our method, GAURA, is learning-based and does not require any test-time scene-specific optimization. It is trained on a synthetic dataset that includes several degradation types. GAURA outperforms state-of-the-art methods on several benchmarks for low-light enhancement, dehazing, and deraining, and performs on par for motion deblurring. Further, our model can be efficiently fine-tuned to any new incoming degradation using minimal data. We thus demonstrate adaptation results on two unseen degradations, desnowing and removing defocus blur.


Installation

Clone this repository:

git clone https://github.com/Vinayak-VG/GAURA.git
cd GAURA/
git submodule update --init --recursive
cd data_generation/MiDAS/weights
wget https://github.com/isl-org/MiDaS/releases/download/v3_1/dpt_beit_large_512.pt
wget https://github.com/isl-org/MiDaS/releases/download/v3_1/dpt_swin2_large_384.pt

The code is tested with Python 3.8, CUDA 11.1, and PyTorch 1.10.1. Additional dependencies are listed below; an install sketch follows the list.

torchvision
ConfigArgParse
imageio
matplotlib
numpy
opencv_contrib_python
Pillow
scipy
imageio-ffmpeg
lpips
scikit-image
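
One way to set up the environment, as a minimal sketch (the torchvision pin and wheel index are assumptions based on the versions above; adjust to your CUDA setup):

pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install ConfigArgParse imageio imageio-ffmpeg matplotlib numpy opencv-contrib-python Pillow scipy lpips scikit-image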

Datasets

Training Dataset (Synthetic Scenes)

We reuse the training and evaluation datasets from IBRNet. All datasets must be downloaded to a data/ directory within the project folder and must follow the organization below.

├──data/
    ├──ibrnet_collected_1/
    ├──ibrnet_collected_2/
    ├──real_iconic_noface/
    ├──nerf_llff_data/

Refer to IBRNet's repository for instructions on downloading and preparing the data. For convenience, we consolidate them below:

mkdir data
cd data/

# IBRNet captures
gdown https://drive.google.com/uc?id=1rkzl3ecL3H0Xxf5WTyc2Swv30RIyr1R_
unzip ibrnet_collected.zip

# LLFF
gdown https://drive.google.com/uc?id=1ThgjloNt58ZdnEuiCeRf9tATJ-HI0b01
unzip real_iconic_noface.zip

## [IMPORTANT] remove scenes that appear in the test set
cd real_iconic_noface/
rm -rf data2_fernvlsb data2_hugetrike data2_trexsanta data3_orchid data5_leafscene data5_lotr data5_redflower
cd ../
mkdir test_data/
mv real_iconic_noface/data2_chesstable test_data/
mv real_iconic_noface/data2_colorfountain test_data/
mv real_iconic_noface/data4_shoerack test_data/
mv real_iconic_noface/data4_stove test_data/

# LLFF dataset (eval)
gdown https://drive.google.com/uc?id=16VnMcF1KJYxN9QId6TClMsZRahHNMW5g
unzip nerf_llff_data.zip

Evaluation Dataset (Real Scenes)

# Low-Light Enhancement
## Aleth-NeRF Dataset
gdown https://drive.google.com/uc?id=1orgKEGApjwCm6G8xaupwHKxMbT2s9IAG
tar -xvzf LOM_full.tar.gz

## LLNeRF Dataset
gdown https://drive.google.com/uc?id=1RfdBe7xJbNnyOvq_B6_cBBbvNRLumtFu
tar -xvzf normal-light-scenes.tar.gz

# Haze
## REVIDE-HAZE Dataset
gdown https://drive.google.com/uc?id=1MYaVMUtcfqXeZpnbsfoJ2JBcpZUUlXGg
Note: we only use the scenes J005, L004, L008, and W002 for evaluation; a pruning sketch follows.
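
A minimal sketch for dropping the other scenes, assuming the archive extracts to a REVIDE/ directory containing one folder per scene:

cd REVIDE/
for scene in */; do
  case "$scene" in
    J005/|L004/|L008/|W002/) ;;  # keep the four evaluation scenes
    *) rm -rf "$scene" ;;        # remove everything else
  esac
done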

# Motion Blur
## Deblur-NeRF Dataset
https://drive.google.com/drive/folders/1X-NfxsZXWH5c4vjaKVjlFnEQU-l54ag_?usp=sharing

# Defocus Blur
## Deblur-NeRF Dataset
https://drive.google.com/drive/folders/1qXSgGWUbgIfKdNK16AytEHvxO0lRZ0K5?usp=drive_link
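
The two Deblur-NeRF links above point to Google Drive folders. One way to fetch them from the command line, as a sketch assuming a gdown version with folder support (>= 4.x):

gdown --folder "https://drive.google.com/drive/folders/1X-NfxsZXWH5c4vjaKVjlFnEQU-l54ag_?usp=sharing"
gdown --folder "https://drive.google.com/drive/folders/1qXSgGWUbgIfKdNK16AytEHvxO0lRZ0K5?usp=drive_link"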

For Rain and Snow, we manually collected videos from YouTube and ran COLMAP to obtain the poses. These scenes do not have corresponding ground truth. A sketch of this frame-extraction and pose-estimation pipeline is given below.
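
A minimal sketch of that pipeline (the video name, frame rate, and output paths are placeholders; assumes ffmpeg and COLMAP are installed):

# Extract frames from a collected video
mkdir -p frames colmap_out
ffmpeg -i rain_scene.mp4 -vf fps=2 frames/%04d.png

# Estimate camera poses with COLMAP's automatic pipeline
colmap automatic_reconstructor --workspace_path colmap_out/ --image_path frames/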

Data Preparation

For the Haze and Defocus degradations, we precompute and save depth maps (using the MiDaS weights downloaded during installation) to save time during training.

python3 data_generation/generate_depths.py --data_dir data/

Usage

Training

If you wish to start training from GNT's pre-trained weights, create a folder under out/ named after your experiment and place the pre-trained weights inside it.
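
As a minimal sketch (the checkpoint filename is a placeholder; the folder name must match --expname in the training command below):

mkdir -p out/generalise_expt
cp /path/to/gnt_pretrained.pth out/generalise_expt/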

python3 -W ignore train.py --config configs/transibr_bigger_full.txt --expname generalise_expt --n_iters 400000 --N_rand 512 --i_img 10000 --i_weights 10000 --typeofmodel yesstrgth_dyndeg_emb_wgt_strenc --pretrained_allweights --ft_corrup gen --train_dataset llff_dyn+ibrnet_collected_dyn --eval_dataset llff_test_dyn --sample_mode center

Evaluation

You can also download our pre-trained weights for direct model evaluation from (google drive).

To evaluate Low-Light Enhancement Results on Real Data

# On Aleth-NeRF Dataset
bash eval_scripts/eval_aleth.sh 

# On LLNeRF Dataset
bash eval_scripts/eval_llnerf.sh

To evaluate Motion Blur Removal Results on Deblur-NeRF Real Dataset

bash eval_scripts/eval_real_motion.sh

To evaluate Haze Removal Results on REVIDE-Haze Real Dataset

bash eval_scripts/eval_revidehaze.sh 

The code was recently tidied up for release and may still contain minor bugs. Please feel free to open an issue.

Cite this work

If you find our work or code useful for your own research, please cite our paper.

@article{gupta2024gaura,
  title={GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views},
  author={Gupta, Vinayak and Girish, Rongali and Varma, Mukund T and Tewari, Ayush and Mitra, Kaushik},
  journal={arXiv preprint arXiv:2402.04632},
  year={2024}
}
