WAME Optimizer

Implementation of the WAME optimization algorithm described in the paper Training Convolutional Networks with Weight-wise Adaptive Learning Rates by Mosca and Magoulas, provided as an optimizer class for TensorFlow 2.0 or higher.

Paper Abstract

Current state-of-the-art Deep Learning classification with Convolutional Neural Networks achieves very impressive results, which are, in some cases, close to human level performance. However, training these methods to their optimal performance requires very long training periods, usually by applying the Stochastic Gradient Descent method. We show that by applying more modern methods, which involve adapting a different learning rate for each weight rather than using a single, global, learning rate for the entire network, we are able to reach close to state-of-the-art performance on the same architectures, and improve the training time and accuracy.
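
The core idea is a per-weight step-size factor in place of a single global learning rate. As a rough, illustrative sketch only, and not the exact update rule from the paper, a sign-based per-weight scheme in this family can be written as follows (the names and defaults simply mirror the parameters documented below):

import numpy as np

# Illustrative sketch of a generic sign-based, per-weight step-size
# adaptation with clipping and a moving average, in the same family as
# WAME. Consult the paper for the exact WAME update rule.
def per_weight_step(w, grad, prev_grad, zeta, z_avg,
                    lr=0.001, alpha=0.9, eta_plus=1.2, eta_minus=0.1,
                    zeta_min=0.01, zeta_max=100.0):
    sign_change = grad * prev_grad
    # Grow the per-weight factor while the gradient keeps its sign,
    # shrink it when the sign flips, and clip to avoid runaway values.
    zeta = np.where(sign_change > 0, zeta * eta_plus, zeta)
    zeta = np.where(sign_change < 0, zeta * eta_minus, zeta)
    zeta = np.clip(zeta, zeta_min, zeta_max)
    # Smooth the factor with a moving average before scaling the step.
    z_avg = alpha * z_avg + (1 - alpha) * zeta
    w = w - lr * z_avg * grad
    return w, zeta, z_avg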

Usage

The optimizer class is only compatible with tensorflow>=2.0.
Pass WAME as the optimizer when compiling a TensorFlow model, as in the example below using the Keras API:

from tensorflow import keras
from wame import WAME

model = keras.models.Sequential([
  keras.layers.Flatten(input_shape=(28, 28)),
  keras.layers.Dense(128, activation='relu'),
  keras.layers.Dropout(0.2),
  keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer=WAME(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
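
The compiled model then trains with the usual Keras calls; the MNIST data below is purely illustrative and not part of this repository:

# Illustrative only: train and evaluate the compiled model on MNIST.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)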

Parameters

WAME(
  learning_rate=0.001, 
  alpha=0.9, 
  eta_plus=1.2, 
  eta_minus=0.1, 
  zeta_min=0.01, 
  zeta_max=100, 
  name='WAME', 
  **kwargs
) 

learning_rate

Initial learning rate.

float, default=0.001

alpha

Decay rate for the moving average estimator.

float, default=0.9

eta_plus

Eta plus value; factor by which the per-weight acceleration factor is increased.

float, default=1.2

eta_minus

Eta minus value; factor by which the per-weight acceleration factor is decreased.

float, default=0.1

zeta_min

Minimum clipping value for the per-weight acceleration factor; zeta is clipped so it cannot fall below this value, avoiding runaway effects.

float, default=0.01

zeta_max

Maximum clipping value for the per-weight acceleration factor; zeta is clipped so it cannot exceed this value, avoiding runaway effects.

float, default=100

name

Optional name prefix for operations created when applying gradients.

str, default='WAME'
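
The defaults can also be overridden when constructing the optimizer, reusing the model from the usage example above; the values below are illustrative rather than recommended settings:

# Illustrative only: override the default hyperparameters.
optimizer = WAME(
    learning_rate=0.0005,
    alpha=0.95,
    eta_plus=1.2,
    eta_minus=0.1,
    zeta_min=0.01,
    zeta_max=100
)

model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])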
