Combination of fast_rnnt and fast_emit #12
Comments
Sorry for the late reply. We have a discussion in the k2 repository (k2-fsa/k2#955) and we are doing experiments; eventually we will add something to make symbols emit earlier.
@Butterfly-c The delay_penalty in k2-fsa/k2#955 has been merged, so you can try it. It behaves as well as fast_emit. If you really want fast_emit, you can try k2-fsa/k2#1069 (by installing k2 or modifying fast_rnnt according to this PR).
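For reference, here is a minimal sketch of where the penalty plugs in, assuming a k2 version that already contains the merged k2-fsa/k2#955. The delay_penalty keyword comes from that PR; the other argument names and tensor shapes follow k2's usual RNN-T loss API and should be double-checked against the installed version.

```python
# Hedged sketch: pass delay_penalty to k2's RNN-T loss so that symbols are
# emitted earlier (similar in spirit to FastEmit).  Shapes follow the usual
# convention: am is the encoder output, lm is the decoder (prediction) output.
import torch
import k2

B, T, S, C = 2, 50, 10, 500           # batch, frames, symbols, vocab size
blank = 0                             # termination (blank) symbol id

am = torch.randn(B, T, C)             # acoustic / encoder output
lm = torch.randn(B, S + 1, C)         # label / decoder output
symbols = torch.randint(1, C, (B, S), dtype=torch.int64)

# boundary[:, 2] = number of symbols, boundary[:, 3] = number of frames
boundary = torch.zeros(B, 4, dtype=torch.int64)
boundary[:, 2] = S
boundary[:, 3] = T

loss = k2.rnnt_loss_smoothed(
    lm=lm,
    am=am,
    symbols=symbols,
    termination_symbol=blank,
    lm_only_scale=0.25,
    am_only_scale=0.0,
    boundary=boundary,
    delay_penalty=0.1,                # > 0 penalizes late emissions; 0.1 is only illustrative
    reduction="sum",
)
```

After that PR the same keyword should also be accepted by the pruned loss (k2.rnnt_loss_pruned), so it can be used with the pruned RNN-T training pipeline as well.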
help help help
Please post all of the logs. The screenshot contains little information, and it is hard for us to figure out what went wrong.
Please first run pip install cmake and then re-try.
Please post all of the logs. The screenshot contains little information, and it is hard for us to figure out what went wrong.
From the error log, you have not installed cuDNN yet. Also, your installed CUDA is 9.2, which is quite old; I am not sure whether it will work.
You have network connection problems with github.com. Please re-try.
From the above logs, I suggest installing a CUDA version of PyTorch; otherwise, training on the CPU will be slow.
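For anyone hitting the same issue, a quick way to check what the installed PyTorch build provides (these are standard PyTorch APIs, nothing specific to fast_rnnt):

```python
import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA version PyTorch was built with; None for CPU-only wheels
print(torch.backends.cudnn.version())  # cuDNN version, or None if cuDNN is not available
print(torch.cuda.is_available())       # True only if a CUDA build, driver and GPU are all usable
```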
From the output of cmake
How did you compile fast_rnnt? Do you have the compilation logs? The error logs show that you have compiled a CPU version of fast_rnnt.
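As a rough smoke test (not from this thread), one can try running a tiny loss on CUDA tensors: a CPU-only build of fast_rnnt will fail here, while a GPU build should return a finite value. The function and argument names below follow the examples in the fast_rnnt README and may need adjusting for the installed version.

```python
# Hypothetical GPU smoke test for a fast_rnnt installation; assumes
# torch.cuda.is_available() is True and that fast_rnnt exposes
# rnnt_loss_simple as shown in its README.
import torch
import fast_rnnt

device = torch.device("cuda")
B, T, S, C = 1, 8, 3, 10               # tiny batch, frames, symbols, vocab size

am = torch.randn(B, T, C, device=device)        # encoder output
lm = torch.randn(B, S + 1, C, device=device)    # decoder output
symbols = torch.randint(1, C, (B, S), dtype=torch.int64, device=device)
boundary = torch.tensor([[0, 0, S, T]], dtype=torch.int64, device=device)

loss = fast_rnnt.rnnt_loss_simple(
    lm=lm,
    am=am,
    symbols=symbols,
    termination_symbol=0,              # blank id
    boundary=boundary,
    reduction="sum",
)
print(loss)
```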
Is there any version that takes advantage of fast_emit?