Question: Why multi-threading in inference code? #3
Hi, congratulations on taking 1st place in the LPCV challenge.
I reviewed your solution and noticed that you used multi-threading when post-processing the prediction results in the inference code.
I would guess this has no effect on the inference time, because the time is only measured for model inference, but maybe I am wrong.
If you have a specific reason for using multi-threading, could you please explain it?
Thanks,
Keondo

Comments
Thank you for your question; it's a valuable one. While the code waits on file I/O, the GPU can sit idle, which causes it to throttle down. Performing the file I/O asynchronously keeps the GPU busy at all times, preventing throttling and ultimately maintaining a higher GPU inference speed.
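To make the pattern concrete, here is a minimal sketch (not the actual competition code; the model, inputs, and output path are dummy placeholders) of overlapping GPU inference with file I/O by pushing finished predictions onto a queue that a background writer thread consumes:

```python
# Minimal sketch of the idea described above (not the actual competition code):
# post-processing / file writes are handed to a background thread via a queue,
# so the main loop can keep feeding the GPU instead of blocking on disk I/O.
# The model, inputs, and save path below are dummy placeholders.
import queue
import threading

import torch

model = torch.nn.Conv2d(3, 14, 3, padding=1).cuda().eval()   # dummy stand-in model
inputs = [torch.randn(1, 3, 512, 512) for _ in range(16)]    # dummy "files"
io_queue = queue.Queue(maxsize=8)   # bounded so inference cannot run far ahead

def io_worker():
    """Write finished predictions to disk without blocking the inference loop."""
    while True:
        item = io_queue.get()
        if item is None:            # sentinel: all work submitted
            io_queue.task_done()
            break
        idx, pred = item
        torch.save(pred, f"/tmp/pred_{idx}.pt")   # placeholder for the real output format
        io_queue.task_done()

worker = threading.Thread(target=io_worker, daemon=True)
worker.start()

with torch.no_grad():
    for idx, frame in enumerate(inputs):
        pred = model(frame.cuda())                # GPU inference stays on the main thread
        # Hand the CPU copy to the worker; the next iteration can start
        # inference immediately instead of waiting for the file write.
        io_queue.put((idx, pred.argmax(1).byte().cpu()))

io_queue.put(None)   # tell the worker to finish
io_queue.join()
```

The bounded queue is the main design choice here: it lets inference run a few items ahead of the disk writes without letting pending results pile up in the limited RAM of the board.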
@modricwang Thank you for the response!!
Thank you very much.
Hi @KeondoPark, thanks for your follow-up questions!
For Question 1:
For Question 2:
I hope this provides some clarity. If you have any more questions, please feel free to ask.
Thank you for your valuable response.
This problem is not common; we can normally update using jtop. I suspect it might be due to insufficient RAM on the Nano; we are using the 4GB version of the board. Perhaps you could try increasing the swap space?
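As a quick way to check the memory hypothesis, a small helper like the following (an illustrative sketch, not part of the repository) can be run on the board to watch RAM and swap while inference is running; enlarging swap itself is normally done with the standard Linux tools (fallocate, mkswap, swapon).

```python
# Illustrative helper (not from the repository): report RAM and swap on the board,
# e.g. to check whether a 2GB Nano is running out of memory during inference.
def meminfo_mb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0]) // 1024  # kB -> MB
    return info

m = meminfo_mb()
print(f"RAM  total/available: {m['MemTotal']} / {m['MemAvailable']} MB")
print(f"Swap total/free:      {m['SwapTotal']} / {m['SwapFree']} MB")
```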
Aha, we tested on the Jetson Nano 2GB, which is the platform recommended by the organizer. Yes, I agree with you that this is associated with insufficient RAM. Maybe I need to test on the Jetson Nano 4GB as well. Thank you for your kind responses; I learned a lot from this discussion.
Happy to help. If you need to discuss anything further, feel free to reopen this issue.