Training Setting #16

Open
Mmmofan opened this issue Jul 22, 2019 · 1 comment


Mmmofan commented Jul 22, 2019

Hi, your work is excellent, and I'm trying to reimplement it so I can understand it more deeply.

I have one question. Your paper says "Learning rate decreases half for every 200 epochs", but it also says that training RDN "takes 1 day with a Titan Xp GPU for 200 epochs". Does that mean you never actually halved the learning rate during training? As far as I understand it, you only train the network for 200 epochs in total.
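For reference, a minimal PyTorch sketch of the quoted schedule (halving every 200 epochs) using a standard step scheduler; the model and optimizer below are placeholders, not the authors' code:

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for RDN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Halve the learning rate every 200 epochs, as the paper states.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

for epoch in range(200):
    # ... one epoch of training over minibatches ...
    scheduler.step()

# Trained for exactly 200 epochs, the halving fires only at the very
# end and never affects training, which is the point of the question.
print(optimizer.param_groups[0]["lr"])  # 5e-05
```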

By the way, do you think data augmentation is really necessary for SR tasks? If the input is always a patch, the 800 images of DIV2K can already produce plenty of patches for 200 epochs of 1,000 steps each.
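A rough back-of-the-envelope check of that claim (32×32 LR patches and minibatch 16 as in the RDN paper; the DIV2K image dimensions below are approximate assumptions):

```python
# Patches consumed over the whole training run.
epochs, steps_per_epoch, batch = 200, 1000, 16
patches_needed = epochs * steps_per_epoch * batch   # 3,200,000

# Distinct non-overlapping 32x32 crops in one x2 LR DIV2K image,
# assuming a rough LR size of 1020x675 (HR images are ~2040x1350).
crops_per_image = (1020 // 32) * (675 // 32)        # 31 * 21 = 651
distinct_patches = 800 * crops_per_image            # 520,800

print(patches_needed, distinct_patches)
# Random crops overlap freely, so the effective pool is far larger
# than the non-overlapping count; augmentation mainly adds geometric
# variety rather than raw patch count.
```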

Looking forward to your reply!

@softcdzy

I have the same question. The paper says "We randomly augment the patches by flipping horizontally or vertically and rotating 90°." Counting the original 800 images plus the three augmented versions, that is 800 × 3 + 800 = 3,200 images in total. With a minibatch of 16, an epoch should then contain 3,200 / 16 = 200 iterations, yet the paper also states that "1,000 iterations of back-propagation constitute an epoch."
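For what it's worth, this kind of augmentation is usually applied on the fly per patch rather than by materializing extra images; a minimal NumPy sketch (function and array names are my own, assuming HWC arrays and a paired LR/HR patch):

```python
import random
import numpy as np

def augment(lr_patch, hr_patch):
    """Randomly flip horizontally/vertically and rotate 90 degrees,
    applying the same transform to the LR and HR patch pair."""
    if random.random() < 0.5:                          # horizontal flip
        lr_patch, hr_patch = lr_patch[:, ::-1], hr_patch[:, ::-1]
    if random.random() < 0.5:                          # vertical flip
        lr_patch, hr_patch = lr_patch[::-1, :], hr_patch[::-1, :]
    if random.random() < 0.5:                          # 90-degree rotation
        lr_patch, hr_patch = np.rot90(lr_patch), np.rot90(hr_patch)
    return lr_patch, hr_patch
```

Applied this way, each crop is transformed independently, so the dataset is not a fixed pool of 3,200 images, which may be why counting images does not reproduce the paper's epoch definition.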
Later, I saw a paper named "Dual-Path Recurrent Network for Image Super-Resolution", which says in its Implementation Details: "We randomly augment the patches by flipping horizontally or vertically and rotating 90°. 200 iterations of back-propagation constitute an epoch."
[Screenshot: Implementation Details excerpt from the Dual-Path Recurrent Network paper]
