how to reduce memory usage? #107

Open

dedoogong opened this issue Nov 20, 2024 · 1 comment
dedoogong commented Nov 20, 2024

Hello!
I converted Segmenter (ViT-Tiny) and Depth Anything (ViT-Small), and both the ONNX and TRT files are under 30 MB. I built the compiled TRT engines with the onnx_ptq code.
When I load the compiled small TRT engine, GPU memory usage climbs to almost 24 GB, while the original torch model uses only about 2 GB of GPU memory.
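
To see where that memory is going, here is a diagnostic sketch I'd try (my own snippet, not part of onnx_ptq; `model.engine` is a placeholder path). It compares the activation memory the engine itself requests (`engine.device_memory_size`) against the device-wide usage reported by NVML:

```python
import pynvml
import tensorrt as trt

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def used_mib() -> float:
    # Device-wide usage; fine on a box where this process is the only GPU user.
    return pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**20

before = used_mib()

# Deserialize the compiled engine and create an execution context.
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

print(f"activation memory the engine needs: {engine.device_memory_size / 2**20:.1f} MiB")
print(f"device memory actually consumed:    {used_mib() - before:.1f} MiB")
```

If `device_memory_size` is small while the measured delta is tens of GB, the allocation is not the engine's activations but something else running in the same process.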

Also, I often can't run int8 PTQ calibration (entropy or minmax) with 512x512 images; I always have to reduce the image size to 224x224 or 256x256 to avoid OOM during calibration. This seems related!
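
For the calibration OOM, one workaround worth trying is feeding a small, explicitly sized calibration array. A sketch, assuming `modelopt.onnx.quantization.quantize()` accepts a `calibration_data` array keyed by input name and a `calibration_method` argument (check the signature in your ModelOpt version; `"input"` and the file names are placeholders):

```python
import numpy as np
from modelopt.onnx.quantization import quantize

# A handful of 512x512 samples instead of the full calibration set;
# shape and input name are placeholders for my model's single image input.
calib = np.random.rand(8, 3, 512, 512).astype(np.float32)

quantize(
    onnx_path="model.onnx",
    calibration_data={"input": calib},  # keyed by the ONNX graph input name
    calibration_method="entropy",       # or "minmax"
    output_path="model.int8.onnx",
)
```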

Why does this happen?
How can I avoid it?
Inference speed improved 3-4x and accuracy dropped only slightly, so the extremely high memory usage is the only remaining problem.
If anyone knows how to handle this, please help!

Thank you!

dedoogong commented Nov 25, 2024

I found that just running `from modelopt.torch._deploy._runtime import RuntimeRegistry` takes 20 GB of GPU memory!
I debugged further: the 20 GB is allocated suddenly, right after the debugger steps past `AWQClipHelper()` in `int4.py` under `onnx.quantization`.
Why? How can I solve it?
I'm even using int8 PTQ, not int4!
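
To confirm the import itself is what allocates, here is a minimal repro sketch that measures device memory around the bare import, using NVML so that allocations outside PyTorch's caching allocator are counted too:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
before = pynvml.nvmlDeviceGetMemoryInfo(handle).used

# Module-level code runs on import, so any GPU allocation it performs
# (e.g. in the int4/AWQ helpers pulled in transitively) shows up here.
from modelopt.torch._deploy._runtime import RuntimeRegistry  # noqa: E402, F401

after = pynvml.nvmlDeviceGetMemoryInfo(handle).used
print(f"GPU memory taken by the import: {(after - before) / 2**30:.2f} GiB")
```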
