Monitor memory usage of TensorFlow models #2156
Labels
stale
stat:awaiting response
type:feature
We want to monitor memory usage of the TensorFlow Serving runtime on a per-model basis. Currently we can get the total memory used by TensorFlow, but we don't have a way to break this information down per model. We would like a solution that works for both CPU and GPU memory.
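For context, a minimal sketch of how memory can be observed today at the whole-process level, assuming the PID of a running tensorflow_model_server is known and using psutil and pynvml as illustrative tools (neither is part of TF Serving). This shows exactly the limitation above: the numbers cover the entire serving process, with no per-model breakdown.

```python
# Sketch: process-level memory for a TF Serving instance (no per-model split).
import psutil
import pynvml


def total_memory_for_pid(pid: int) -> dict:
    """Return process-wide CPU RSS and GPU memory attributed to the given PID."""
    # CPU: resident set size of the serving process.
    cpu_rss_bytes = psutil.Process(pid).memory_info().rss

    # GPU: sum memory attributed to this PID across all visible devices.
    pynvml.nvmlInit()
    gpu_bytes = 0
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
                if proc.pid == pid and proc.usedGpuMemory is not None:
                    gpu_bytes += proc.usedGpuMemory
    finally:
        pynvml.nvmlShutdown()

    return {"cpu_rss_bytes": cpu_rss_bytes, "gpu_bytes": gpu_bytes}


if __name__ == "__main__":
    # Hypothetical PID of a running tensorflow_model_server instance.
    print(total_memory_for_pid(12345))
```

Per-model attribution would instead need support inside the serving runtime itself, since all loaded models share the same process and device allocators.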
No alternatives that we know of are currently available.