Code Reference: a page I've referred to a lot. Implementations of several networks in PyTorch, written while studying each paper. Any advice is welcome with open arms.
Dataset | Image source | Download link |
---|---|---|
CelebA | Ziwei Liu et al. ICCV 2015 | link |
Set 5 | Bevilacqua et al. BMVC 2012 | link |
Set 14 | Zeyde et al. LNCS 2010 | link |
Urban 100 | Huang et al. CVPR 2015 | link |
BSD 100 | Martin et al. ICCV 2001 | link |
Sun-Hays 80 | Sun and Hays ICCP 2012 | link |
- Generative Adversarial Network
- Authors
- [Ian J. Goodfellow | Jean Pouget-Abadie | Mehdi Mirza | Bing Xu | David Warde-Farley | Sherjil Ozair | Aaron Courville | Yoshua Bengio]
- [Paper] | [Code]
- Deep Convolutional Generative Adversarial Networks
- Authors
- [Alec Radford | Luke Metz | Soumith Chintala]
- Conditional Generative Adversarial Nets
- Authors
- [Mehdi Mirza | Simon Osindero]
[The cGAN's scheme]
- The original GAN can only map one input distribution to one output distribution. The authors therefore introduce the conditional GAN, a conditional probabilistic generative model: the condition y is concatenated to both the data x and the noise z (see the sketch below).
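A minimal sketch of this conditioning, assuming one-hot labels, MLP networks, and illustrative layer sizes (`z_dim`, `y_dim`, and the hidden width are assumptions, not the repo's actual architecture):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, y_dim=10, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + y_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        # Condition y is concatenated to the noise z before generation.
        return self.net(torch.cat([z, y], dim=1))

class Discriminator(nn.Module):
    def __init__(self, y_dim=10, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + y_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        # Condition y is concatenated to the (flattened) data x.
        return self.net(torch.cat([x, y], dim=1))
```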
- Least Squares Generative Adversarial Networks
- Authors
- [Xudong Mao | Qing Li | Haoran Xie | Raymond Y.K. Lau | Zhen Wang | Stephen Paul Smolley]
[The Vanilla GAN's Loss Function]
[The LSGANs Loss Function]
- The authors claim that the vanilla GAN is unstable because of its loss function: briefly, minimizing that objective suffers from vanishing gradients, which makes the generator hard to train. To resolve this, they argue that "the least squares loss function will penalize the fake samples and pull them toward the decision boundary even though they are correctly classified. Based on this property, LSGANs are able to generate samples that are closer to real data." (See the sketch after this entry.)
- [Paper] | [Code]
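A minimal sketch of the LSGAN objective with labels a=0, b=1, c=1, assuming a discriminator D that outputs raw scores (no sigmoid):

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

def d_loss(D, real, fake):
    real_scores = D(real)
    fake_scores = D(fake.detach())
    # Pull real scores toward b=1 and fake scores toward a=0.
    return 0.5 * (mse(real_scores, torch.ones_like(real_scores))
                  + mse(fake_scores, torch.zeros_like(fake_scores)))

def g_loss(D, fake):
    fake_scores = D(fake)
    # Pull fake scores toward c=1, penalizing samples far from the
    # decision boundary even when they are already classified as real.
    return 0.5 * mse(fake_scores, torch.ones_like(fake_scores))
```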
- Super-Resolution Generative Adversarial Networks
- Authors
- [Christian Ledig | Lucas Theis | Ferenc Huszár | Jose Caballero | Andrew Cunningham | Alejandro Acosta | Andrew Aitken | Alykhan Tejani | Johannes Totz | Zehan Wang | Wenzhe Shi]
[The SRGAN's scheme]
- The optimization target of supervised SR algorithms is commonly the minimization of the MSE between the recovered HR image and the ground truth, so texture detail is typically absent from the reconstructed SR images. To resolve this, the authors apply a novel perceptual loss based on high-level feature maps of the VGG network. The total SRGAN loss is a weighted sum of the content loss and the adversarial loss (see the sketch after this entry).
Low Resolution | High Resolution | Generated Image |
---|---|---|
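A minimal sketch of the perceptual (content) loss, assuming torchvision's pretrained VGG19 for the high-level feature maps and a 1e-3 weight on the adversarial term; the feature-layer cut-off and the omitted input normalization are simplifications:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class ContentLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # High-level feature maps from a frozen, pretrained VGG19
        # (the exact truncation index is an assumption in this sketch).
        self.features = vgg19(pretrained=True).features[:36].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        # MSE in VGG feature space instead of pixel space.
        return self.mse(self.features(sr), self.features(hr))

def total_g_loss(content_loss, d_fake_scores):
    # Total loss = content (VGG) loss + 1e-3 * adversarial loss.
    adversarial = -torch.log(torch.sigmoid(d_fake_scores) + 1e-8).mean()
    return content_loss + 1e-3 * adversarial
```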
- Wasserstein Generative Adversarial Networks
- Authors
- [Martin Arjovsky | Soumith Chintala | Léon Bottou]
[The Wasserstein Loss]
[Wasserstein GAN Training Algorithm]
- The authors show, via Theorem 2, that the Earth-Mover (EM) distance has nicer properties when optimized than the Jensen-Shannon (JS) divergence (a minimal sketch of the resulting loss follows below).
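A minimal sketch of the WGAN critic/generator losses and weight clipping, assuming a critic without a sigmoid output and the paper's clip value c=0.01:

```python
import torch

def critic_loss(critic, real, fake):
    # Approximate the EM distance: maximize E[f(real)] - E[f(fake)],
    # i.e. minimize its negative.
    return -(critic(real).mean() - critic(fake.detach()).mean())

def generator_loss(critic, fake):
    return -critic(fake).mean()

def clip_weights(critic, c=0.01):
    # Enforce the Lipschitz constraint by clipping every parameter to [-c, c].
    for p in critic.parameters():
        p.data.clamp_(-c, c)
```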
- Wasserstein Generative Adversarial Networks with Gradient Penalty
- Authors
- [Ishaan Gulrajani | Faruk Ahmed | Martin Arjovsky | Vincent Dumoulin | Aaron Courville]
[The WGAN-GP's Training Algorithm]
- The authors claim that WGAN made great progress toward stable training of GANs, but it can still generate poor samples or fail to converge, often because of the weight clipping it uses. So they propose penalizing the norm of the gradient of the critic with respect to its input.
- The only difference from the original WGAN is the use of a gradient penalty instead of clipping the critic's weights (see the sketch below).
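A minimal sketch of the gradient penalty, assuming 4-D image batches, a penalty coefficient lambda=10, and interpolation between real and fake samples as in the WGAN-GP algorithm:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Random interpolation point between each real and fake sample.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interpolates)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grads = grads.view(grads.size(0), -1)
    # Penalize the critic when the gradient norm deviates from 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```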