# PADAMOptimizer

Implementation of the Padam optimizer, based on the research paper arXiv:1806.06763v1 [cs.LG]: https://arxiv.org/pdf/1806.06763v1.pdf

Adaptive gradient methods, which use historical gradient information to automatically adjust the learning rate, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum when training deep neural networks. How to close this generalization gap of adaptive gradient methods remains an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes "over-adapted". We design a new algorithm, called the Partially adaptive momentum estimation method (Padam), which unifies Adam/Amsgrad with SGD to achieve the best of both worlds. Experiments on standard benchmarks show that Padam can maintain the fast convergence rate of Adam/Amsgrad while generalizing as well as SGD when training deep neural networks. These results suggest that practitioners can pick up adaptive gradient methods once again for faster training of deep neural networks.
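Padam's core idea is to raise the second-moment denominator to a partial power p ∈ (0, 1/2] instead of the fixed square root used by Adam/Amsgrad, so that p = 1/2 recovers Amsgrad and p → 0 approaches SGD with momentum. Below is a minimal NumPy sketch of a single Padam update under those definitions; the function name `padam_step`, its signature, and the default hyperparameters are illustrative and not necessarily this repository's API.

```python
import numpy as np

def padam_step(theta, grad, m, v, v_hat, lr=0.1,
               beta1=0.9, beta2=0.999, p=0.125, eps=1e-8):
    """One Padam parameter update (illustrative sketch).

    p in (0, 1/2]: p = 1/2 recovers Amsgrad; p -> 0 approaches
    SGD with momentum (up to a constant scaling of the step size).
    """
    m = beta1 * m + (1 - beta1) * grad           # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    v_hat = np.maximum(v_hat, v)                 # Amsgrad-style running max
    theta = theta - lr * m / (v_hat ** p + eps)  # partially adaptive denominator v_hat^p
    return theta, m, v, v_hat

# Usage: initialize m, v, v_hat to zeros with the parameter's shape,
# then call padam_step once per gradient evaluation.
theta = np.zeros(3)
m = v = v_hat = np.zeros(3)
for grad in [np.array([0.1, -0.2, 0.3])] * 5:
    theta, m, v, v_hat = padam_step(theta, grad, m, v, v_hat)
```

The paper's experiments use a small partial power (p = 1/8), which is why the learning rate can be set much larger than Adam's typical default without the update blowing up.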