
Adding GPU automatic mixed precision training #384

Open · wants to merge 2 commits into master

Conversation


@vinhngx vinhngx commented Aug 16, 2019

Automatic Mixed Precision (AMP) training on GPU was recently introduced for TensorFlow:

https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540

Automatic mixed precision training uses FP32 and FP16 precision where each is appropriate, keeping FP32 where numerical range matters and FP16 elsewhere. FP16 operations can leverage the Tensor Cores on NVIDIA GPUs (Volta, Turing, or newer architectures) for higher throughput, and because FP16 halves the memory footprint of activations, mixed precision training often allows larger batch sizes as well.

This PR adds GPU automatic mixed precision training to tensorflow-wavenet, enabled by passing the flag --auto_mixed_precision=True:

python train.py --data_dir=/path/to/data/ --auto_mixed_precision=True
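For context, here is a minimal sketch of how such a flag can be wired into a TF 1.x training script, assuming TensorFlow 1.14+ and the optimizer-wrapping API described in the blog post linked above. The flag parsing and optimizer below are illustrative placeholders, not the actual code this PR changes:

import argparse
import tensorflow as tf

parser = argparse.ArgumentParser()
# Parse "--auto_mixed_precision=True" as a boolean; a bare type=bool would
# treat any non-empty string, including "False", as True.
parser.add_argument('--auto_mixed_precision',
                    type=lambda s: s.lower() in ('true', '1'),
                    default=False,
                    help='Enable automatic mixed precision training on GPU.')
args = parser.parse_args()

# Placeholder optimizer; the real train.py builds its optimizer from the
# WaveNet model's own hyperparameters.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)

if args.auto_mixed_precision:
    # Wrap the optimizer so TensorFlow's graph rewriter casts eligible ops
    # to FP16 and applies dynamic loss scaling automatically.
    optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)

The same rewrite can also be enabled globally by setting the environment variable TF_ENABLE_AUTO_MIXED_PRECISION=1 before launching training, as described in the NVIDIA documentation.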

To learn more about mixed precision training and how it works, see:

Overview of Automatic Mixed Precision for Deep Learning
NVIDIA Mixed Precision Training Documentation
NVIDIA Deep Learning Performance Guide

@vinhngx vinhngx requested a review from ibab February 7, 2020 03:12