Why does onnx model give floating point exception on GPU and not on CPU? #15263
Unanswered
Avenge-PRC777 asked this question in Other Q&A
Replies: 1 comment
-
Please file this as an issue and follow the issue template when doing so.
-
I have an ONNX model created from a PyTorch-based model. When I run it on a black box system:
I create an InferenceSession using the CUDAExecutionProvider.
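The actual session-creation snippet did not survive in the question. A minimal sketch of what such code typically looks like (hypothetical names; `model.onnx` and the input name `"input"` are assumptions, not taken from the thread) is:

```python
# Hypothetical sketch of the session setup described above; not the
# author's actual code. Listing CPUExecutionProvider after
# CUDAExecutionProvider lets onnxruntime fall back to CPU when CUDA
# is unavailable.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]

def make_session(model_path: str):
    # Deferred import so this sketch parses even without onnxruntime installed.
    import onnxruntime as ort
    return ort.InferenceSession(model_path, providers=providers)

# Usage (assumes a real model file and a matching input array):
#   sess = make_session("model.onnx")
#   outputs = sess.run(None, {"input": input_array})
```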
I am seeing the following results:
If the black box system is CPU based (32 GB memory), the code works fine.
If the black box system is GPU based (V100/T4; 16 GB memory), the code fails on certain inputs with a floating point exception at the inference call.
What is even more curious to me is that even with a high-level try/except block, the floating point exception is not captured and the code still fails.
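A likely explanation for the uncatchable failure (an assumption, not confirmed in the thread): the floating point exception is the operating-system signal SIGFPE raised inside native CUDA/onnxruntime code. Fatal signals terminate the process before the interpreter can convert them into a Python exception, so try/except never sees them. Python's standard `faulthandler` module can at least dump the Python stack to stderr when such a signal arrives, which helps locate the failing call:

```python
import faulthandler

# faulthandler installs handlers for fatal signals (SIGSEGV, SIGFPE,
# SIGABRT, SIGBUS, SIGILL). It cannot prevent the crash, but it prints
# the Python traceback to stderr at the moment the signal is delivered,
# showing which sess.run() call (and which input) triggered it.
faulthandler.enable()
```

This does not handle the error; to truly recover you would need to isolate each inference in a subprocess and check its exit status.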
I use the following code to convert model to onnx:
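The conversion snippet itself is missing from the question. For context, a typical PyTorch-to-ONNX export (hypothetical names throughout; `dummy_input`, the opset version, and the axis names are assumptions, not the author's settings) looks like:

```python
# Hypothetical sketch of a PyTorch -> ONNX export; the author's actual
# conversion code was not included in the question.
def export_to_onnx(model, dummy_input, path="model.onnx"):
    # Deferred import so this sketch parses even without torch installed.
    import torch
    model.eval()  # exporting in eval mode avoids dropout/batchnorm surprises
    torch.onnx.export(
        model,
        dummy_input,               # example tensor defining input shapes
        path,
        input_names=["input"],
        output_names=["output"],
        # Marking the batch axis dynamic avoids baking a fixed batch size
        # into the graph, a common source of shape-dependent runtime errors.
        dynamic_axes={"input": {0: "batch"}},
        opset_version=13,
    )
```

If certain inputs crash only on GPU, comparing the exported graph's dynamic axes against the shapes of the failing inputs is a reasonable first check.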
How can I debug this or handle the FPE error?