Memory usage of TensorFlow models #2173
@singhniraj08 Any pointers on the above?
Hey @singhniraj08, can you point us to any workarounds in the meantime?
@ndeepesh, can you try the Memory profiling tool and see if that helps? Since the models run online on C++ model servers, tracking memory usage is difficult in that setup. I am raising this issue internally for a solution, and we will update this thread. Thank you!
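For reference, a minimal sketch of capturing a profile from a running model server with TensorFlow's profiler client; the Memory Profile tab in TensorBoard is populated from the captured trace. The gRPC address and log directory below are assumptions for illustration, and the server must have been started with profiling support:

```python
# Minimal sketch: capture a profile from a running TF Serving instance.
# Assumes the server's gRPC endpoint is localhost:8500; adjust as needed.
import tensorflow as tf

logdir = "/tmp/tfserving_profile"  # hypothetical log directory

# Sample the server for 2 seconds; results land in logdir and can be
# inspected in TensorBoard under the Profile plugin's memory_profile tab.
tf.profiler.experimental.client.trace(
    service_addr="grpc://localhost:8500",
    logdir=logdir,
    duration_ms=2000,
)
```

Afterwards, run `tensorboard --logdir /tmp/tfserving_profile` and open the Profile plugin to inspect the trace.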
Thanks @singhniraj08. Without a good estimate, this is causing regressions on our hosts: we hit OOM easily, with no way to tell which model is responsible.
@singhniraj08 Does the memory profile tool get populated when profiling on CPUs? I haven't seen it populated in CPU profiles.
@ndeepesh, the Memory profiling tool monitors the memory usage of your device during the profiling interval. If you are looking for CPU usage while running TF Serving, Profiling inference requests with TensorBoard may help you achieve that. Thanks.
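For the profiler to have inference work to record, a request must be in flight during the trace window. A sketch of sending one Predict request over gRPC; the model name, signature name, and input tensor name below are placeholders, not values from this issue:

```python
# Sketch: fire a Predict request while the profiler trace is running,
# so op-level timings show up in TensorBoard.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"           # hypothetical model name
request.model_spec.signature_name = "serving_default"
request.inputs["x"].CopyFrom(                  # hypothetical input name
    tf.make_tensor_proto([[1.0, 2.0, 3.0]], dtype=tf.float32)
)

response = stub.Predict(request, timeout=10.0)
print(response.outputs)
```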
@singhniraj08 That only tells us how much time each TensorFlow op takes, right? Not how much memory it occupies on the CPU.
We want to monitor the memory usage of the TensorFlow Serving runtime on a per-model basis. Currently we can get the total memory used by TensorFlow, but we have no way to break that down per model. We would like a solution that works for both CPU and GPU memory.
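As a rough workaround rather than a real per-model metric, one could estimate a model's footprint by measuring the server process's resident set size before and after that model is loaded, assuming models can be loaded one at a time (e.g. via the model config file the server polls) and that `psutil` is available on the host:

```python
# Rough workaround sketch: estimate a model's memory footprint as the
# RSS delta of the tensorflow_model_server process across a model load.
# This is an approximation (allocator caching, shared pages, and
# concurrent loads all distort it), not a true per-model metric.
import time
import psutil

def server_rss(pid: int) -> int:
    """Resident set size of the model server process, in bytes."""
    return psutil.Process(pid).memory_info().rss

pid = 12345  # hypothetical PID of tensorflow_model_server

before = server_rss(pid)
# ... trigger the model load here, e.g. by updating the model config
#     file that the server polls ...
time.sleep(30)  # placeholder for "wait until the model reports AVAILABLE"
after = server_rss(pid)

print(f"approximate model footprint: {(after - before) / 2**20:.1f} MiB")
```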
No alternatives are available that we know of.
This is similar to #2156, where I wanted to follow up on whether there are any solutions for CPUs.