slow inference speed #1147
Replies: 1 comment 2 replies
Hi @akshitasax,

One more optimization you could try is setting the filters rate to 1.5 on the centered-instance model to get a solid reduction in that model's parameter count, but there are probably a couple of other sources of performance hits:

1. The tracking step.
2. Data loading and video decoding.
If (1) accounts for most of the slowdown, there might be some optimizations we can do by running the tracking separately -- just let us know and we can work with you on it. If (2) accounts for most of the slowdown, then there's not much to be done if the data is just slow to read, but if it's an issue with video decoding performance, we're currently working on a new backend for video reading which should hopefully be a lot faster.

Let us know what the initial investigation turns up and we can keep troubleshooting from there.

Cheers,
Talmo
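For reference, here is a sketch of where the filters rate lives in a SLEAP `training_config.json` (the field names follow SLEAP's UNet backbone config; the surrounding values shown are illustrative assumptions, not the poster's actual settings):

```json
{
  "model": {
    "backbone": {
      "unet": {
        "filters": 16,
        "filters_rate": 1.5,
        "max_stride": 16,
        "output_stride": 4
      }
    }
  }
}
```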
Hi,
Firstly, thank you for all the help in the past!
Currently, I want to track 2 flies over 12-hour videos using the top-down model pipeline. However, inference takes about 5 hours for one 12-hour video (640x480 resolution, roughly 646,000 frames), and the prediction rate is about 30 FPS. I remember seeing higher speeds in the SLEAP Nature paper. Our GPU is an NVIDIA RTX A4000. What do you think I can change to improve inference speed?
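As a sanity check on the numbers above, a small back-of-the-envelope script (pure arithmetic, no SLEAP dependency; the frame count and wall time are the figures from this post):

```python
# Back-of-the-envelope throughput check for the numbers in this post.
frames = 646_000          # ~12 h video at 640x480
wall_time_h = 5.0         # observed inference time

fps = frames / (wall_time_h * 3600)
print(f"effective throughput: {fps:.1f} FPS")   # ~35.9 FPS

# How long the same video would take at a hypothetical target throughput:
for target_fps in (60, 100, 200):
    hours = frames / target_fps / 3600
    print(f"at {target_fps} FPS: {hours:.1f} h")
```

This puts the effective end-to-end rate closer to ~36 FPS than 30, which suggests the bottleneck may include per-video overhead (tracking, I/O) rather than raw model inference alone.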
I reviewed previous discussion threads about similar issues (#1099, #1126), but the suggested settings for max stride, output stride, and filters are already in my model configuration. I am also not using the Kalman filter (KF) tracker. I'm attaching screenshots of the configuration below.
Could it be an issue with my GPU? I am not entirely sure how to diagnose this, but I'm attaching a couple of screenshots from the CLI that I thought might help.
Thanks,
Akshita