As AI engineers, we love data, and we love graphs and numbers! So why not project the inference data onto some platform to understand the inference better? When a model is deployed on the edge for some kind of monitoring, it takes a rigorous amount of frontend and backend development on top of the deep learning effort, from getting the live data to displaying the correct output. So I wanted to replicate a small-scale video analytics tool and understand which features would be useful for such a tool and what its limitations could be.
Demo video: `dashboard_1_local_video.mp4`
For detailed insights, do check out my Medium blog.
- Choose the input source - local video, RTSP stream, or webcam (see the sketch after this list)
- Set the class confidence threshold for detection
- Set FPS drop warning threshold
- Option to save inference video
- Set the class confidence threshold for drift detection
- Option to save poor performing frames
- Display objects in the current frame
- Display the total objects detected so far
- Display system stats - RAM, CPU, and GPU usage
- Display the poorest-performing class
- Display minimum and maximum FPS recorded during inference
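To make the features above concrete, here is a minimal sketch of how the input-source selector, the FPS-drop warning, and the system-stats readout could be wired together in Streamlit. This is an illustration, not the repo's actual `app.py`: the widget labels, the default threshold, and the `psutil`-based stats are assumptions, and GPU usage would need an extra library such as `GPUtil` or `pynvml`.

```python
import time

import cv2
import psutil
import streamlit as st

# Illustrative dashboard loop (not the repo's actual app.py): widget labels,
# defaults, and the psutil-based stats are assumptions.
source = st.sidebar.radio("Input source", ["Local video", "RTSP", "Webcam"])
if source == "Local video":
    cap = cv2.VideoCapture("input.mp4")  # assumed file next to app.py
elif source == "RTSP":
    cap = cv2.VideoCapture(st.sidebar.text_input("RTSP URL"))
else:
    cap = cv2.VideoCapture(0)  # default webcam

fps_warn = st.sidebar.number_input("FPS drop warning threshold", value=8.0)
frame_slot = st.empty()
stats_slot, warn_slot = st.sidebar.empty(), st.sidebar.empty()
fps_min, fps_max = float("inf"), 0.0

prev = time.time()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run YOLOv5 + DeepSort on `frame` here ...
    now = time.time()
    fps = 1.0 / max(now - prev, 1e-6)
    prev = now
    fps_min, fps_max = min(fps_min, fps), max(fps_max, fps)
    if fps < fps_warn:
        warn_slot.warning(f"FPS dropped to {fps:.1f}")
    frame_slot.image(frame, channels="BGR")
    stats_slot.text(
        f"CPU {psutil.cpu_percent():.0f}% | "
        f"RAM {psutil.virtual_memory().percent:.0f}% | "
        f"FPS min/max {fps_min:.1f}/{fps_max:.1f}"
    )
cap.release()
```

Using `st.empty()` placeholders lets the frame and the stats update in place on every loop iteration instead of appending new elements to the page.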
- Clone this repo
- Install all the dependencies
- Download the DeepSort checkpoint file and place it in `deep_sort_pytorch/deep_sort/deep/checkpoint`
- Run `streamlit run app.py`
- Updated the yolov5s weight file name in `detect()` in `app.py`
- Added a Drive link to download the DeepSort checkpoint file (45 MB)
Do check out the Medium article and give this repo a ⭐
The input video should be in the same folder as `app.py`. If you want to deploy the app to the cloud and use it as a web app, download the user-uploaded video to a temporary folder and pass the path and the video name to the respective function in `app.py`. This is a Streamlit bug; check Stack Overflow.
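A minimal sketch of that workaround, assuming `tempfile` plus OpenCV; the uploader label and accepted file types are illustrative:

```python
import tempfile

import cv2
import streamlit as st

# Streamlit keeps uploads in memory, while cv2.VideoCapture needs a real
# file path, so write the upload out to a temporary file first.
uploaded = st.file_uploader("Upload a video", type=["mp4", "avi", "mov"])
if uploaded is not None:
    tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".mp4")
    tmp.write(uploaded.read())
    tmp.close()
    # Pass tmp.name (the path on disk) to the inference function in app.py
    # instead of the in-memory upload.
    cap = cv2.VideoCapture(tmp.name)
```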