When preparing a phone-based lang directory, I see that generate_unique_lexicon.py is used in almost every Chinese ASR recipe (e.g., aishell-*), but not in the English ASR recipes (e.g., gigaspeech, librispeech). What is the reason for this difference?

I want to use k2.ctc_loss to handle multi-pronunciation transcriptions in Chinese ASR, the same way the English recipes do, where no special step is taken to make the lexicon unique. Would that be more accurate than using the unique lexicon?
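For context, here is a minimal sketch of what I understand the "unique lexicon" step to do: when a word has several pronunciations, only the first one listed is kept, so the word-to-phones mapping becomes one-to-one. This is an illustrative stand-in, not the actual generate_unique_lexicon.py implementation; the function name and toy data are my own.

```python
def make_unique_lexicon(lexicon):
    """Keep only the first pronunciation seen for each word.

    lexicon: list of (word, phones) pairs, where the same word may
    appear multiple times with different pronunciations.
    Returns a new list with exactly one entry per word.
    """
    seen = set()
    unique = []
    for word, phones in lexicon:
        if word not in seen:
            seen.add(word)
            unique.append((word, phones))
    return unique


# Toy example: the character 行 has two readings (xíng / háng).
lexicon = [
    ("行", ["x", "ing2"]),
    ("行", ["h", "ang2"]),
    ("好", ["h", "ao3"]),
]
print(make_unique_lexicon(lexicon))
# The second pronunciation of 行 is dropped.
```

My question is essentially whether dropping the alternative pronunciations like this loses accuracy compared to keeping all of them and letting the CTC loss marginalize over the alternatives.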