I am trying to use `refine` with a faster-whisper model. My code:

```python
import stable_whisper

model = stable_whisper.load_faster_whisper('large-v2', device="cuda", compute_type="int8_float16")
result = model.transcribe_stable(movie_path, **kwargs)
model.refine(movie_path, result)
```

Am I missing something? It appears that the `refine` call fails with an error. Thank you!
Replies: 1 comment
That error is expected because `refine` is not implemented for the faster-whisper models. For faster refinement, an alternative is to use `refine` with a smaller model than the one used for the transcription (e.g. `medium` if the transcription was done with `large`).
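A minimal sketch of that workaround, assuming `movie_path` points to the same audio used for transcription and that a CUDA device is available (adjust `device`/`compute_type` as needed):

```python
import stable_whisper

# Transcribe with the fast model, as in the original code.
fw_model = stable_whisper.load_faster_whisper('large-v2', device="cuda", compute_type="int8_float16")
result = fw_model.transcribe_stable(movie_path)

# refine() is only available on the regular Whisper models, so load a
# smaller one just for refinement to keep it reasonably fast.
refine_model = stable_whisper.load_model('medium', device="cuda")
refine_model.refine(movie_path, result)

# Export the refined result, e.g. as subtitles.
result.to_srt_vtt('movie.srt')
```

The smaller model only re-scores word timings against the audio, so it is cheaper than redoing the full transcription with `large`.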