Blind Image Decomposition
Junlin Han, Weihao Li, Pengfei Fang, Chunyi Sun, Jie Hong, Ali Armin, Lars Petersson, Hongdong Li
DATA61-CSIRO and Australian National University
European Conference on Computer Vision (ECCV), 2022
arXiv, Project page, Paper, Video, Slide, Poster

The Blind Image Decomposition (BID) task requires separating a superimposed image into its constituent underlying images in a blind setting, that is, both the source components involved in the mixing and the mixing mechanism are unknown.
We invite our community to explore the novel BID task, including discovering interesting areas of application, developing novel methods, extending the BID setting, and constructing benchmark datasets.
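For intuition, the superimposition underlying BID can be sketched as a toy linear blend of two source components (an illustration only; the mixing mechanisms considered in the paper are more general, and neither the components nor the mechanism are known to the model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical source components (e.g., a clean scene and a rain-streak layer).
x1 = rng.random((64, 64, 3))
x2 = rng.random((64, 64, 3))

# An arbitrary mixing mechanism -- here, linear blending with a random weight.
alpha = rng.uniform(0.3, 0.7)
mixed = alpha * x1 + (1 - alpha) * x2

# A BID method observes only `mixed`; the components (x1, x2) and the
# mixing (alpha) must both be recovered blindly.
```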
Deraining (rain streak, snow, haze, raindrop):
Rows 1-6 present six cases of the same scene: (1) rain streak, (2) rain streak + snow, (3) rain streak + light haze, (4) rain streak + heavy haze, (5) rain streak + moderate haze + raindrop, (6) rain streak + snow + moderate haze + raindrop.
Joint shadow/reflection/watermark removal:
Python 3.7 or above.
For packages, see requirements.txt.
- Clone this repo:
git clone https://github.com/JunlinHan/BID.git
- Install PyTorch 1.7 or above and other dependencies (e.g., torchvision, visdom, dominate, gputil).
For pip users, run:
pip install -r requirements.txt
For Conda users (recommended), create a new Conda environment with:
conda env create -f environment.yml
We tested our code on both Windows and Ubuntu.
- Download the BID datasets: https://drive.google.com/drive/folders/1wUUKTiRAGVvelarhsjmZZ_1iBdBaM6Ka?usp=sharing
Unzip the downloaded datasets and put them inside ./datasets/.
- To use our datasets in your own method/project, please refer to ./models for detailed usage (biden2-8_model for Task I, rain_model for Task II, jointremoval_model for Task III). The code can be easily transferred.
- Detailed instructions are provided in ./models/.
- To view training results and loss plots, run:
python -m visdom.server
and click the URL http://localhost:8097.
Task I: Mixed image decomposition across multiple domains:
Train (biden n, where n is the maximum number of source components):
python train.py --dataroot ./datasets/image_decom --name biden2 --model biden2 --dataset_mode unaligned2
python train.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3
...
python train.py --dataroot ./datasets/image_decom --name biden8 --model biden8 --dataset_mode unaligned8
Test a single case (use n = 3 as an example):
python test.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3 --test_input A
python test.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3 --test_input AB
... and so on for other cases; change --test_input to the case you want.
Test all cases:
python test2.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3
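Assuming --test_input encodes which source domains are present in the mixed input (as in the A and AB examples above), the full set of cases for a given n is the set of non-empty domain combinations. A quick way to enumerate them (an illustrative sketch, not part of the repo):

```python
from itertools import combinations


def all_cases(n):
    """Enumerate every non-empty combination of the first n source domains (A, B, C, ...)."""
    domains = [chr(ord("A") + i) for i in range(n)]
    return ["".join(c) for r in range(1, n + 1) for c in combinations(domains, r)]


cases = all_cases(3)
# cases == ['A', 'B', 'C', 'AB', 'AC', 'BC', 'ABC']
```

For n = 3 this gives 7 cases; in general there are 2^n - 1.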
Task II.A: Real-scenario deraining in driving:
Train:
python train.py --dataroot ./datasets/raina --name task2a --model raina --dataset_mode raina
Task II.B: Real-scenario deraining in general:
Train:
python train.py --dataroot ./datasets/rainb --name task2b --model rainb --dataset_mode rainb
Task III: Joint shadow/reflection/watermark removal:
Train:
python train.py --dataroot ./datasets/jointremoval_v1 --name task3_v1 --model jointremoval --dataset_mode jointremoval
or
python train.py --dataroot ./datasets/jointremoval_v2 --name task3_v2 --model jointremoval --dataset_mode jointremoval
The test results will be saved to an HTML file in ./results/.
We provide our pre-trained BIDeN models at: https://drive.google.com/drive/folders/1UBmdKZXYewJVXHT4dRaat4g8xZ61OyDF?usp=sharing
Download the pre-trained models, unzip them, and put them inside ./checkpoints.
Example usage: download the dataset for Task II.A (rain in driving) and the pre-trained model for Task II.A, then test the rain streak case:
python test.py --dataroot ./datasets/raina --name task2a --model raina --dataset_mode raina --test_input B
For FID score, use pytorch-fid.
For PSNR/SSIM/RMSE/NIQE/BRISQUE, see ./metrics/.
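The scripts in ./metrics/ compute these scores. As a reference point, PSNR between a result and its ground truth follows the standard definition below (a minimal sketch with a hypothetical helper, not the repo's exact implementation):

```python
import numpy as np


def psnr(result, target, max_val=255.0):
    """Peak signal-to-noise ratio between two images of equal shape."""
    mse = np.mean((result.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better; identical images give infinite PSNR, and a uniform error of 1 gray level against a 255 peak gives roughly 48 dB.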
See ./raindrop/.
If you use our code or our results, please consider citing our paper. Thanks in advance!
@inproceedings{han2022bid,
title={Blind Image Decomposition},
author={Junlin Han and Weihao Li and Pengfei Fang and Chunyi Sun and Jie Hong and Mohammad Ali Armin and Lars Petersson and Hongdong Li},
booktitle={European Conference on Computer Vision (ECCV)},
year={2022}
}
[email protected] or [email protected]
Our code is developed based on DCLGAN and CUT. We thank the authors of MPRNet, perceptual-reflection-removal, Double-DIP, and Deep-adversarial-decomposition for sharing their source code. We thank exposure-fusion-shadow-removal and ghost-free-shadow-removal for providing source code and results. We thank pytorch-fid for FID computation.