
Commit

Update docs/source/usage_guides/accelerate_training.mdx
Co-authored-by: regisss <[email protected]>
jwieczorekhabana and regisss authored Sep 19, 2023
1 parent 6f25128 commit d668dfb
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/source/usage_guides/accelerate_training.mdx
@@ -57,7 +57,7 @@ To not take them into account in the computation of the throughput at the end of
## Mixed-Precision Training

Mixed-precision training makes it possible to compute some operations using lighter data types to accelerate training.
-Optimum Habana enables mixed precision training in a similar fasion as 🤗 Transofrmers:
+Optimum Habana enables mixed precision training in a similar fashion as 🤗 Transformers:
- argument `--bf16` enables usage of PyTorch autocast
- argument `--half_precision_backend [hpu_amp, cpu_amp]` is used to specify a device on which mixed precision operations should be performed

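For context, the two flags mentioned in this hunk correspond to the 🤗 Transformers-style training arguments exposed by Optimum Habana. Below is a minimal sketch (not part of this commit) assuming the usual `GaudiTrainingArguments` interface; the output directory and the other HPU settings shown are illustrative only.

```python
# Minimal sketch (not from the diff): how the --bf16 and
# --half_precision_backend CLI flags map to Optimum Habana's Python API.
# Assumes GaudiTrainingArguments, which extends transformers.TrainingArguments;
# values below are illustrative.
from optimum.habana import GaudiTrainingArguments

training_args = GaudiTrainingArguments(
    output_dir="./output",             # illustrative path
    use_habana=True,                   # run training on HPU
    use_lazy_mode=True,                # standard HPU execution mode
    bf16=True,                         # --bf16: run eligible ops in bf16 via PyTorch autocast
    half_precision_backend="hpu_amp",  # --half_precision_backend: device for mixed-precision ops
)
```

The same flags can equally be passed on the command line to the example training scripts, as described in the bullets above.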
