```shell
# Single-GPU launch (--nproc_per_node=1) on an NVIDIA A100
python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/train.py -opt options/train/train_RGT_x4.yml --launcher pytorch
```

RGT_x4 results:
- One epoch trains in roughly 9 minutes.
- Major drawback: memory consumption. Upscaling a 600x600 input at x4 requires ~31 GB.
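If inference memory is the bottleneck, the test configs described later in this document expose a `use_chop` option that processes the image in patches. A minimal sketch of switching it on, assuming the YML still contains the default `use_chop: False` entry (GNU sed syntax):

```shell
# Enable chopped (patch-wise) inference in the x4 test config to lower peak memory.
# Assumes the config contains the default "use_chop: False" line noted in the testing section.
sed -i 's/use_chop: False/use_chop: True/' options/test/test_RGT_x4.yml
```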
- Python 3.8
- PyTorch 1.9.0
- NVIDIA GPU + CUDA
```shell
# Clone the github repo and go to the default directory 'RGT'.
git clone https://github.com/zhengchen1999/RGT.git
cd RGT
conda create -n RGT python=3.8
conda activate RGT
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install six
python setup.py develop
```
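An optional sanity check that the installed PyTorch build can see the GPU before training:

```shell
# Print the PyTorch version and whether a CUDA device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```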
The training and testing sets used can be downloaded as follows:
Training Set | Testing Set | Visual Results |
---|---|---|
DIV2K (800 training images, 100 validation images) + Flickr2K (2650 images) [complete training dataset DF2K: Google Drive / Baidu Disk] | Set5 + Set14 + BSD100 + Urban100 + Manga109 [complete testing dataset: Google Drive / Baidu Disk] | Google Drive / Baidu Disk |
Download the training and testing datasets and put them into the corresponding folders of `datasets/`. See `datasets/` for details of the directory structure.
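An illustrative download-and-unpack sequence (the archive names below are hypothetical; use the actual files from the Google Drive / Baidu Disk links in the table above):

```shell
# Unpack the downloaded archives into datasets/ (file names are placeholders)
mkdir -p datasets
unzip DF2K.zip -d datasets/        # training set (DIV2K + Flickr2K)
unzip benchmark.zip -d datasets/   # Set5 / Set14 / BSD100 / Urban100 / Manga109
```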
Method | Params (M) | FLOPs (G) | PSNR (dB) | SSIM | Model Zoo | Visual Results |
---|---|---|---|---|---|---|
RGT-S | 10.20 | 193.08 | 27.89 | 0.8347 | Google Drive / Baidu Disk | Google Drive / Baidu Disk |
RGT | 13.37 | 251.07 | 27.98 | 0.8369 | Google Drive / Baidu Disk | Google Drive / Baidu Disk |
Performance is reported on Urban100 (x4). FLOPs are measured at an output size of 3×512×512.
- Download the training (DF2K, already processed) and testing (Set5, Set14, BSD100, Urban100, Manga109, already processed) datasets and place them in `datasets/`.
- Run the following scripts. The training configuration is in `options/train/`. A variant for a different GPU budget is sketched after this list.

  ```shell
  # RGT-S, input=64x64, 4 GPUs
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/train_RGT_S_x2.yml --launcher pytorch
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/train_RGT_S_x3.yml --launcher pytorch
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/train_RGT_S_x4.yml --launcher pytorch

  # RGT, input=64x64, 4 GPUs
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/train_RGT_x2.yml --launcher pytorch
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/train_RGT_x3.yml --launcher pytorch
  python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/train_RGT_x4.yml --launcher pytorch
  ```
- The training experiment is in `experiments/`.
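The 4-GPU commands above can be adapted to fewer GPUs, as in the single-A100 command at the top of this file. A sketch for restricting training to specific devices (GPU IDs and port are illustrative):

```shell
# Run RGT x4 training on two selected GPUs; --nproc_per_node must match the number of visible devices
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=4322 \
    basicsr/train.py -opt options/train/train_RGT_x4.yml --launcher pytorch
```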
- Download the pre-trained models and place them in `experiments/pretrained_models/`. We provide pre-trained models for image SR: RGT-S and RGT (x2, x3, x4).
- Download the testing datasets (Set5, Set14, BSD100, Urban100, Manga109) and place them in `datasets/`.
- Run the following scripts. The testing configuration is in `options/test/` (e.g., test_RGT_x2.yml). A quick way to check which checkpoint a config loads is sketched after this list.

  Note 1: You can set `use_chop: True` (default: False) in the YML to chop the image for testing.

  ```shell
  # No self-ensemble
  # RGT-S, reproduces results in Table 2 of the main paper
  python basicsr/test.py -opt options/test/test_RGT_S_x2.yml
  python basicsr/test.py -opt options/test/test_RGT_S_x3.yml
  python basicsr/test.py -opt options/test/test_RGT_S_x4.yml

  # RGT, reproduces results in Table 2 of the main paper
  python basicsr/test.py -opt options/test/test_RGT_x2.yml
  python basicsr/test.py -opt options/test/test_RGT_x3.yml
  python basicsr/test.py -opt options/test/test_RGT_x4.yml
  ```
- The output is in `results/`.
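Each test YML loads a checkpoint from `experiments/pretrained_models/`. Before running, you can confirm which file it expects; the `pretrain_network` key name below is assumed from BasicSR conventions and may differ in this repository:

```shell
# Show the pretrained-model path referenced by the x4 test config
grep -n "pretrain_network" options/test/test_RGT_x4.yml
```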
- Download the pre-trained models and place them in `experiments/pretrained_models/`. We provide pre-trained models for image SR: RGT-S and RGT (x2, x3, x4).
- Put your dataset (single LR images) in `datasets/single`. Some test images are already in this folder.
- Run the following scripts. The testing configuration is in `options/test/` (e.g., test_single_x2.yml). An end-to-end example is sketched after this list.

  Note 1: The default model is RGT. You can use other models like RGT-S by modifying the YML.

  Note 2: You can set `use_chop: True` (default: False) in the YML to chop the image for testing.

  ```shell
  # Test on your dataset
  python basicsr/test.py -opt options/test/test_single_x2.yml
  python basicsr/test.py -opt options/test/test_single_x3.yml
  python basicsr/test.py -opt options/test/test_single_x4.yml
  ```
- The output is in `results/`.
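A minimal end-to-end sketch for upscaling your own images with the x4 model (the source image path is hypothetical):

```shell
# Copy LR inputs into the folder read by the single-image config, run the x4 test,
# then look for the upscaled outputs under results/
cp /path/to/my_lr_image.png datasets/single/
python basicsr/test.py -opt options/test/test_single_x4.yml
ls results/
```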
We achieved state-of-the-art performance. Detailed results can be found in the paper.
Visual Comparison
- results in Figure 6 of the main paper
- results in Figure 4 of the supplementary material
- results in Figure 5 of the supplementary material
If you find the code helpful in your research or work, please cite the following paper(s).
```
@inproceedings{chen2024recursive,
    title={Recursive Generalization Transformer for Image Super-Resolution},
    author={Chen, Zheng and Zhang, Yulun and Gu, Jinjin and Kong, Linghe and Yang, Xiaokang},
    booktitle={ICLR},
    year={2024}
}
```
This code is built on BasicSR.