Commit

Update notebooks/whisper-quantized-example.ipynb
Co-authored-by: JaynieP <[email protected]>
jimypbr and jayniep-gc authored Jul 27, 2023
1 parent e936ee8 commit 77afdd1
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion notebooks/whisper-quantized-example.ipynb
@@ -397,7 +397,7 @@
"To access the multilingual models, remove the `.en` from the checkpoint name. Note however that the multilingual models are slightly less accurate for this English transcription task but they can be used for transcribing other languages or for translating to English. The largest model `whisper-large` has 1550M parameters and requires a 4-IPUs pipeline.\n",
"You can try it by setting `select_whisper_config(\"large\")`\n",
"\n",
"You can also try using beam search by setting the `num_beams` parameter in the calls `parallelize` and `generate` above. `whisper-small` will fit on 1 IPU with `num_beams=5`.\n",
"You can also try using beam search by setting `num_beams>1` in the calls to `parallelize` and `generate` above. `whisper-small` will fit on 1 IPU with `num_beams=5`.\n",
"\n",
"For `whisper-medium` with `num_beams>1` the model will need 4 IPUs to fit. For `whisper-large` with `num_beams>1` you will need more than the IPU-POD4. On Paperspace, this is available using either an IPU-POD16 or a Bow Pod16 machine. Please contact Graphcore if you need assistance running these larger models.\n"
]
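For context, the `num_beams` argument referenced in the changed cell is the standard Hugging Face `generate()` beam-search setting. Below is a minimal sketch of beam-search decoding with `whisper-small` using the plain `transformers` API on CPU/GPU; it does not use the notebook's IPU-specific `parallelize` or `select_whisper_config` helpers, and the dataset and checkpoint names are illustrative choices, not taken from this commit.

```python
# Minimal beam-search sketch with plain Hugging Face transformers.
# NOTE: this is not the IPU-pipelined path from the notebook; parallelize()
# and select_whisper_config() are the notebook's own helpers.
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en")

# A small public ASR sample, used here purely for illustration.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]

input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

# num_beams > 1 enables beam search; num_beams=5 matches the value quoted above.
predicted_ids = model.generate(input_features, num_beams=5, max_new_tokens=128)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```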
