Onnxruntime memory(RAM) usage for CUDAExecutionProvider seems higher than CPUExecutionProvider #18934
Unanswered
durgaivelselvan-mn-17532 asked this question in Other Q&A
If I create the `InferenceSession` for my ONNX model with `CPUExecutionProvider`, the program's memory usage seems minimal, but if I choose `CUDAExecutionProvider` as the provider, it uses more RAM than `CPUExecutionProvider`. Why is that? Are there any particular reasons? Here is the code I've used to profile memory usage:
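The profiling code referenced above did not survive extraction. A minimal sketch of how such peak-RSS profiling might look, using only the standard-library `resource` module; the onnxruntime calls in the comments (model path, provider list) are assumptions, not the author's actual script:

```python
import resource
import sys

def peak_rss_mb():
    """Peak resident set size of this process in MiB."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    return rss / 1024 if sys.platform != "darwin" else rss / (1024 * 1024)

def profile(label, fn):
    """Run fn() and report how much the process's peak RSS grew."""
    before = peak_rss_mb()
    result = fn()
    after = peak_rss_mb()
    print(f"{label}: peak RSS grew by {after - before:.1f} MiB")
    return result

# Hypothetical usage (model path is a placeholder):
# import onnxruntime as ort
# cpu_sess = profile("CPU session", lambda: ort.InferenceSession(
#     "model.onnx", providers=["CPUExecutionProvider"]))
# gpu_sess = profile("CUDA session", lambda: ort.InferenceSession(
#     "model.onnx", providers=["CUDAExecutionProvider"]))
```

Note that `ru_maxrss` only reports host RAM, so GPU device memory never appears here; any growth seen for the CUDA session is allocation happening on the host side.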
So does that mean that, for `CUDAExecutionProvider`, a copy of the CUDA-assigned nodes is kept in RAM alongside the nodes assigned to `CPUExecutionProvider`?

Executed on Ubuntu 22.07 x86_64 with onnxruntime-gpu 1.13.1 and torch 2.0.0+cu117.