Why shouldn't ONNX be set to True? #478
Original issue

When I set ONNX to False, it works fine. However, when I set it to True, the following error appears:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int32)) , expected: (tensor(int64))
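A hedged sketch of where this dtype mismatch can come from, assuming the ONNX model is fed directly through onnxruntime. The input names ('input', 'sr', 'h', 'c') are taken from the error messages quoted later in this thread, and the shapes are placeholders; the point is that the sampling-rate input must be int64, so feeding it as int32 produces exactly this InvalidArgument error.

```python
# Rough sketch of driving the silero-vad ONNX model directly via onnxruntime.
# Input names come from the errors in this thread; shapes are placeholders
# and may differ between model versions.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("silero_vad.onnx",  # assumed local model path
                            providers=["CPUExecutionProvider"])

chunk = np.zeros((1, 1536), dtype=np.float32)  # one audio chunk (placeholder size)
h = np.zeros((2, 1, 64), dtype=np.float32)     # recurrent states (placeholder shapes)
c = np.zeros((2, 1, 64), dtype=np.float32)

# The model expects the sampling rate as tensor(int64); an int32 value
# reproduces the error above, so cast explicitly.
sr = np.array(16000, dtype=np.int64)

outputs = sess.run(None, {"input": chunk, "sr": sr, "h": h, "c": c})
```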
Comments

Thanks for reporting this bug!
I verified that the code is working fine. Thanks for your help. I have another question: when I change 'repo_or_dir' to 'snakers4/silero-vad:v4.0', I get the following error. Is this a bug?

ValueError: Required inputs (['state']) are missing from input feed (['input', 'h', 'c', 'sr']).
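Not part of the original exchange, but a generic onnxruntime check that helps diagnose this kind of mismatch: list the inputs the downloaded .onnx file actually declares, which shows whether it expects the older 'h'/'c' states or a newer 'state' input (the model path below is an assumption).

```python
# Print the input names/types/shapes the downloaded model actually requires,
# to see which model version was fetched.
import onnxruntime as ort

sess = ort.InferenceSession("silero_vad.onnx",  # assumed local model path
                            providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.type, inp.shape)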
It is related to the following issue:
Ok. I'm asking because I'm getting an error while doing several tests. I'm currently working on Windows 10, and when I use GPU and ONNX at the same time, I get the following error:

ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
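The message itself spells out the remedy: since ORT 1.9, the providers argument must be passed explicitly when the build ships GPU providers. A minimal sketch, assuming a local model file and a CPU-only session (which also sidesteps the Windows 10 restrictions mentioned below):

```python
import onnxruntime as ort

# Explicitly list the execution providers instead of relying on the old default.
# CPU-only keeps the VAD off the GPU, matching the advice later in this thread.
sess = ort.InferenceSession(
    "silero_vad.onnx",                    # assumed local model path
    providers=["CPUExecutionProvider"],
)
```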
The ONNX model has some restrictions on Windows 10.
(Referenced code: lines 20 to 23 at commit a395853)
I believe we added these lines because the latest VAD versions were not compatible with ONNX GPU, right?
In any case, the VAD is not supposed to be run on GPU.
Let this issue remain as a reminder of how to set different execution providers for ONNX, but I believe the VAD by design should not be run on GPU. In any case, if running on GPU is imperative for some reason, just fork the above lines.
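The referenced lines are not shown in this thread; assuming they pin the ONNX session to the CPU provider, a forked variant that opts into GPU might look roughly like this sketch (the path, thread settings, and provider choices are illustrative, not the library's actual code).

```python
import onnxruntime as ort

opts = ort.SessionOptions()
opts.inter_op_num_threads = 1   # modest threading, typical for a small real-time model
opts.intra_op_num_threads = 1

# Forked session construction: request CUDA first, fall back to CPU.
sess = ort.InferenceSession(
    "silero_vad.onnx",          # assumed local model path
    sess_options=opts,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```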