This is an example of a simple one-dimensional Variational Autoencoder (VAE) model, using MNIST as a training dataset. The Variational Autoencoder is based on the paper "Auto-Encoding Variational Bayes" by Kingma and Welling. It should produce output similar to the following:
To begin, you'll need the latest version of Swift for TensorFlow installed. Make sure you've added the correct version of `swift` to your path.
To train the model, run:

```sh
swift run -c release VariationalAutoencoder1D
```
- The reparameterization trick is implemented internally in the VAE model.
- The VAE model returns an `Array` of `Tensor<Float>` tensors, which is inherently `Differentiable` via an extension. (Reference: S4TF API Docs)
- The loss function combines the `sigmoidCrossEntropy` of the output with the KL divergence between the intermediate representations.
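The reparameterization trick mentioned above rewrites a sample from N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1), so that gradients can flow through mu and sigma while the randomness stays in eps. A minimal scalar sketch in plain Swift (the function name and signature are illustrative, not the model's actual API):

```swift
import Foundation

// z = mu + sigma * eps, with eps drawn outside the function, so the
// mapping from (mu, logVar) to z is deterministic and differentiable.
func reparameterize(mu: Double, logVar: Double, eps: Double) -> Double {
    let sigma = exp(0.5 * logVar)  // logVar encodes log(sigma^2)
    return mu + sigma * eps
}

// Draw eps ~ N(0, 1) via the Box-Muller transform, then sample a latent.
let u1 = Double.random(in: Double.ulpOfOne..<1)
let u2 = Double.random(in: 0..<1)
let eps = (-2 * log(u1)).squareRoot() * cos(2 * .pi * u2)
let z = reparameterize(mu: 0.5, logVar: 0.0, eps: eps)
print(z)
```

In the real model the same idea is applied elementwise to tensors of means and log-variances rather than to scalars.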
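For the loss, the standard VAE objective adds the reconstruction term (here, sigmoid cross entropy against the input pixels) to the KL divergence between the encoder's Gaussian N(mu, sigma^2) and the prior N(0, 1), which has the closed form -0.5 * Σ(1 + logVar - mu² - exp(logVar)). A scalar sketch of both terms, assuming hypothetical helper names (the actual model uses the S4TF `sigmoidCrossEntropy` op on tensors):

```swift
import Foundation

// Closed-form KL divergence between N(mu, sigma^2) and N(0, 1),
// summed over the latent dimensions.
func klDivergence(mu: [Double], logVar: [Double]) -> Double {
    zip(mu, logVar).reduce(0) { acc, pair in
        let (m, lv) = pair
        return acc - 0.5 * (1 + lv - m * m - exp(lv))
    }
}

// Sigmoid cross entropy between logits and binary targets, using the
// numerically stable form max(x, 0) - x*y + log(1 + exp(-|x|)).
func sigmoidCrossEntropy(logits: [Double], labels: [Double]) -> Double {
    zip(logits, labels).reduce(0) { acc, pair in
        let (x, y) = pair
        return acc + max(x, 0) - x * y + log(1 + exp(-abs(x)))
    }
}

let mu = [0.0, 0.5], logVar = [0.0, -0.2]
let logits = [0.3, -1.2], labels = [1.0, 0.0]
let loss = sigmoidCrossEntropy(logits: logits, labels: labels)
         + klDivergence(mu: mu, logVar: logVar)
print(loss)
```

Note that when mu = 0 and logVar = 0 the KL term vanishes, which matches the intuition that the encoder already equals the prior.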