
Batch norm in WGAN discriminator #6

Open
KhrystynaFaryna opened this issue Jun 18, 2020 · 1 comment

@KhrystynaFaryna

Hi, I was wondering why you decided to use batch norm in the critic of WGAN-GP. The paper on improved training of WGANs (the one where the gradient penalty is proposed) advises against it.
Thanks in advance!

@cyclomon
Owner

cyclomon commented Jun 23, 2020

Hi,
Although we are aware that batch norm is not a recommended choice in the discriminator of WGAN-GP, we found that it stabilized training.
We also experimented with instance norm, layer norm, and no normalization, but none of these produced a better model.
We have since moved on to another project, so we still don't know exactly why batch norm works here.
Further experiments and analysis would be needed to figure it out.
Thanks!
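For anyone reproducing this comparison: the experiment described above can be sketched as a critic whose normalization layer is swappable. This is a hypothetical PyTorch sketch, not the repository's actual code; the module names and sizes are illustrative. The reason batch norm is discouraged in the WGAN-GP paper is that the gradient penalty is defined per sample, while batch norm couples samples within a batch.

```python
# Hypothetical sketch (not this repo's code): a small WGAN-GP critic
# whose normalization can be swapped between the variants discussed
# above (batch / instance / layer / none), plus the standard gradient
# penalty for reference.
import torch
import torch.nn as nn

def make_norm(kind, channels):
    # Batch norm mixes statistics across the batch, which conflicts
    # with the per-sample gradient penalty below; the other options
    # normalize each sample independently.
    if kind == "batch":
        return nn.BatchNorm2d(channels)
    if kind == "instance":
        return nn.InstanceNorm2d(channels)
    if kind == "layer":
        return nn.GroupNorm(1, channels)  # GroupNorm(1, C) normalizes over C, H, W
    return nn.Identity()

class Critic(nn.Module):
    def __init__(self, norm="batch", in_channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, width, 4, 2, 1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, 2, 1),
            make_norm(norm, width * 2),
            nn.LeakyReLU(0.2),
            nn.Conv2d(width * 2, 1, 4, 1, 0),
        )

    def forward(self, x):
        # One scalar critic score per image.
        return self.net(x).mean(dim=(1, 2, 3))

def gradient_penalty(critic, real, fake):
    # Standard WGAN-GP penalty: push the gradient norm of the critic,
    # evaluated at random interpolates, toward 1 for each sample.
    eps = torch.rand(real.size(0), 1, 1, 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(scores.sum(), interp, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```

Swapping `norm="batch"` for `norm="layer"` or `norm=None` reproduces the ablation the authors describe; layer norm is what the WGAN-GP paper itself suggests as the drop-in replacement.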
