Official PyTorch implementation of "Multi-Stage Raw Video Denoising with Adversarial Loss and Gradient Mask" Project | Paper
This codebase was developed and tested on Ubuntu with Python 3.8, PyTorch 1.7.1, and CUDA 10.2. To install PyTorch:
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.2 -c pytorch
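To verify that the GPU build was picked up, a quick generic check (not part of the repository) is to print the installed version and CUDA availability:

import torch

print(torch.__version__)          # expected: 1.7.1
print(torch.cuda.is_available())  # expected: True if CUDA 10.2 is set up correctly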
Set the dataset path and run:
python train.py --dir path/to/dataset
Run the following command for help and a list of additional options such as batch size and sequence length:
python train.py --h
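The exact option names and defaults are defined in train.py; the sketch below only illustrates how such options are typically wired with argparse. Here --dir matches the command above, while the batch-size and sequence-length flag names and defaults are illustrative assumptions:

import argparse

# Illustrative sketch only; run `python train.py --h` for the real options and defaults.
parser = argparse.ArgumentParser(description="Multi-stage raw video denoising training")
parser.add_argument("--dir", type=str, required=True, help="path to the training dataset")
parser.add_argument("--batch_size", type=int, default=1, help="training batch size (assumed flag name)")
parser.add_argument("--seq_len", type=int, default=5, help="frames per training sequence (assumed flag name)")
args = parser.parse_args()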
To get visualization of the training, you can run tensorboard from the project directory using the command:
tensorboard --logdir logs --port 6007
and then go to http://localhost:6007.
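TensorBoard reads the event files written under logs during training; the snippet below is a minimal sketch of how scalars end up in that directory via torch.utils.tensorboard (the tag name and value are illustrative, not the exact ones written by train.py):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="logs")                 # same directory passed to --logdir
writer.add_scalar("loss/train", 0.123, global_step=0)  # illustrative tag and value
writer.close()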
The evaluation scripts can be used to generate denoised videos on the CRVD dataset and our Synthetic Test Set. You can also download our CRVD results.
Set the CRVD indoor dataset path and run:
python test_indoor.py
Set the CRVD outdoor dataset path and run:
python test_outdoor.py
Set the synthetic test set path and run:
python test_synthetic.py
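To sanity-check denoised outputs against clean references, a simple PSNR computation such as the one below can be used. This is a generic sketch assuming frames are loaded as NumPy arrays normalized to [0, 1]; it is not part of the released evaluation scripts:

import numpy as np

def psnr(denoised, clean, peak=1.0):
    # Peak signal-to-noise ratio between two frames in the range [0, peak].
    mse = np.mean((denoised.astype(np.float64) - clean.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)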
The synthetic test dataset was collected from YouTube channels Video Library - No copyright Footage, Le Monde en Vidéo and Underway, all under Creative Commons (CC) license.
@InProceedings{paliwal2021maskdenosing,
author={Paliwal, Avinash and Zeng, Libing and Kalantari, Nima Khademi},
booktitle={2021 IEEE International Conference on Computational Photography (ICCP)},
title={Multi-Stage Raw Video Denoising with Adversarial Loss and Gradient Mask},
year={2021},
pages={1-10}
}
Parts of the training code are adapted from SPADE, RAFT, UPI and RViDeNet.