
# Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

## Abstract

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
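
The architectural constraints mentioned above (an all-convolutional generator, batch normalization, and no fully connected hidden layers) can be illustrated with a short sketch. The following is a minimal PyTorch sketch of a DCGAN-style 64x64 generator, not the exact network defined by the configs in this repository; the class name `DCGANGenerator` and parameters such as `noise_dim` and `base_channels` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DCGANGenerator(nn.Module):
    """Illustrative DCGAN-style generator producing 64x64 images.

    A stack of transposed convolutions, each followed by batch norm
    and ReLU, with a tanh on the output layer (as in the paper).
    """

    def __init__(self, noise_dim=100, base_channels=64, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # 1x1 -> 4x4
            nn.ConvTranspose2d(noise_dim, base_channels * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base_channels * 8),
            nn.ReLU(inplace=True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(base_channels * 8, base_channels * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels * 4),
            nn.ReLU(inplace=True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels * 2),
            nn.ReLU(inplace=True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_channels),
            nn.ReLU(inplace=True),
            # 32x32 -> 64x64, tanh maps pixels to [-1, 1]
            nn.ConvTranspose2d(base_channels, out_channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, noise):
        # noise: (N, noise_dim) -> (N, noise_dim, 1, 1) before the conv stack
        return self.net(noise.view(noise.size(0), -1, 1, 1))


if __name__ == "__main__":
    generator = DCGANGenerator()
    fake = generator(torch.randn(2, 100))
    print(fake.shape)  # torch.Size([2, 3, 64, 64])
```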

## Results and models

*Sample grid: DCGAN 64x64, CelebA-Cropped*
| Models | Dataset | SWD | MS-SSIM | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: |
| DCGAN 64x64 | MNIST (64x64) | 21.16, 4.4, 8.41/11.32 | 0.1395 | config | model \| log |
| DCGAN 64x64 | CelebA-Cropped | 8.93, 10.53, 50.32/23.26 | 0.2899 | config | model \| log |
| DCGAN 64x64 | LSUN-Bedroom | 42.79, 34.55, 98.46/58.6 | 0.2095 | config | model \| log |

## Citation

```bibtex
@article{radford2015unsupervised,
  title={Unsupervised representation learning with deep convolutional generative adversarial networks},
  author={Radford, Alec and Metz, Luke and Chintala, Soumith},
  journal={arXiv preprint arXiv:1511.06434},
  year={2015},
  url={https://arxiv.org/abs/1511.06434},
}
```