Update report Issue #84 #162
base: master
Conversation
Hi, please find below a review submitted by one of the reviewers:

Score: 6

This reproducibility work focuses mostly on understanding the theoretical contributions of the original paper, which is understandable given that the aim of the original paper is to better understand the role of 0-GP in GAN optimization. This work therefore provides a clear introduction and review of the preliminary material and of one of the main propositions in the original paper. While these sections are interesting, they are mostly summarizations and do not provide any further insight into the paper.

As the original paper contains a very detailed outline of network architectures as well as exact hyperparameter settings, it makes for easy testing for reproducibility purposes. That said, this reproducibility work clearly lacks one of the main experiments of the original work, on a mixture of 8 Gaussians. As this is a synthetic toy experiment often used in many other GAN papers, it is unclear why it was omitted from this reproducibility effort.

Further, with regard to the MNIST experiments, the results for 0-GP reported in the original paper are qualitatively much better than those in this reproducibility work. There is no attempt to explain why this is the case, nor any ablation studies with different hyperparameter settings to better understand the cause of such differences.

Finally, this work chooses to tackle CIFAR-10 as its third image dataset, one which was not considered by the original authors. The authors explain their choice of CIFAR-10 over ImageNet as due to a lack of computational resources, which this reviewer finds credible. However, the number of experiments and the depth of analysis on CIFAR-10 are again minimal. It would have been interesting to see how robust 0-GP is compared to other GANs under different choices of hyperparameters and settings. In particular, questions such as how robust training is to the D and G learning rates in a TTUR schedule, or to the number of D updates per G update, are interesting to consider, as these are typical design decisions when training many other GANs.

Confidence: 4
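For context, the omitted toy experiment samples data from a mixture of 8 Gaussians arranged on a ring. A minimal sampler is sketched below; the radius and standard deviation are common defaults from the GAN literature, not values confirmed against the original paper:

```python
import numpy as np

def sample_8_gaussians(n, radius=2.0, std=0.02, seed=0):
    """Sample n 2-D points from a mixture of 8 Gaussians on a circle.

    Note: radius/std are illustrative defaults, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    # Place the 8 mode centers evenly on a circle of the given radius.
    angles = 2 * np.pi * np.arange(8) / 8
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    modes = rng.integers(0, 8, size=n)          # pick a mode per sample
    noise = rng.normal(scale=std, size=(n, 2))  # isotropic Gaussian noise
    return centers[modes] + noise

samples = sample_8_gaussians(1000)  # shape (1000, 2)
```

On this dataset, mode collapse is easy to diagnose visually (the generator covers fewer than 8 modes), which is why it is a standard sanity check for gradient-penalty variants.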
Hi, please find below a review submitted by one of the reviewers: Score: 6
Hi, please find below a review submitted by one of the reviewers:

Score: 8

The report is well written, with emphasis on both the theoretical and experimental parts of the reviewed paper. The authors explain the generalization problems with GANs and the various gradient penalty schemes proposed to solve them. The motivation for using a gradient penalty is clearly explained. The authors also run the experiments on new datasets, which is a good way to verify the results.

NB: This TA review has been provided by the institution directly, and the authors have communicated with the reviewers regarding changes/updates.

Confidence: 4
Issue #84
Title: Improving Generalization and Stability of Generative Adversarial Networks
Site: https://epfml17.hotcrp.com/paper/491
Link to code: https://github.com/wanhaozhou/Machine-Learning/blob/master/project2/Improving_Generalization_And_Stability_Of_GAN.ipynb