Added file-saving option for live transcription. #18
I asked gpt-4 for this feature =)
It seems like you would like the transcriptions to be written to the output file as they are produced, rather than only at the end of the transcription process. Below is a modification of your script that adds this feature: the output file is opened in append mode (`'a'`) and each line of transcription is written to it as soon as it is produced.
In this code, the output file is opened in append mode before the main loop starts, if an output path is specified. Each transcription is then written to the output file immediately as it is generated. It's important to call `output_file.flush()` after writing each transcription so that it actually reaches the file right away, since Python's file output is buffered by default; the buffer is only emptied to the file when `flush()` is called. Finally, the output file is closed after the main loop finishes.

Please note that the output file is written to every time a new transcription is generated, so it will be updated quite frequently, which could slow things down slightly if transcriptions are produced very quickly.
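A minimal sketch of that pattern, stripped of the rest of the script, is below. The `stream_transcriptions()` generator and the `--output` flag are placeholders assumed for illustration; only the append-mode open, the per-transcription write, and the `flush()` call reflect the change described above.

```python
import argparse

def stream_transcriptions():
    """Placeholder generator standing in for the real live-transcription loop."""
    yield from ["first transcribed line", "second transcribed line"]

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--output", help="optional path to save transcriptions to")
    args = parser.parse_args()

    # Open the output file in append mode before the main loop, if requested.
    output_file = open(args.output, "a") if args.output else None

    try:
        for text in stream_transcriptions():
            print(text)
            if output_file:
                # Write each transcription as soon as it is produced...
                output_file.write(text + "\n")
                # ...and flush so it reaches the file immediately instead of
                # sitting in Python's write buffer.
                output_file.flush()
    finally:
        # Close the file once the main loop finishes.
        if output_file:
            output_file.close()

if __name__ == "__main__":
    main()
```

In practice, one `flush()` per transcription is cheap relative to how often live speech produces new text, so the performance concern above mostly matters if the loop emits many lines per second.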