I encountered a strange problem: comfyanonymous/ComfyUI#2823
I was unable to install onnxruntime in a way that runs without errors. Execution with these errors (see the link above for details) is significantly slower, for example with the "rembg" node from the "WAS node suite":
- CPU (no errors in console): 2.51 seconds
- `['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying`: 7.5 seconds
- `Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32`: >40 seconds
I no longer know what I can do to get rid of these errors. And is it even worth pursuing? Will everything run faster once the errors are gone?
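In case it helps others, here is a minimal sketch of the workaround as I understand it: pass an explicit `providers` list when the onnxruntime session is created, so `TensorrtExecutionProvider` is never attempted and no fallback occurs. The model path `"u2net.onnx"` is a placeholder for illustration; in practice the session is created inside rembg/WAS, not by my own code.

```python
import onnxruntime as ort

# Check which execution providers this onnxruntime build actually offers.
print(ort.get_available_providers())

# Pin the provider list so onnxruntime never attempts
# TensorrtExecutionProvider (and so never has to fall back).
# "u2net.onnx" is a placeholder model path for illustration.
session = ort.InferenceSession(
    "u2net.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```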
python_embeded info:
Python 3.11.8, Torch 2.2.1+cu121, CUDA 12.1
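A small snippet to confirm these versions from inside python_embeded (assuming torch and onnxruntime both import cleanly there):

```python
import sys

import onnxruntime as ort
import torch

# Confirm the interpreter, Torch, and CUDA versions in python_embeded.
print("Python:", sys.version.split()[0])        # e.g. 3.11.8
print("Torch:", torch.__version__)              # e.g. 2.2.1+cu121
print("CUDA (Torch build):", torch.version.cuda)
print("onnxruntime:", ort.__version__)
print("Providers:", ort.get_available_providers())
```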