DroniMal is an intelligent video analytics pipeline powered by NVIDIA's Jetson Nano and DeepStream. This application aims to help those who work day in and day out to protect animals from poaching. Every year, an estimated 38 million animals are poached, pushing many species to the brink of extinction or onto endangered lists.
African elephants are among the most heavily poached animals and are listed as Endangered on the IUCN (International Union for Conservation of Nature) Red List. Giraffes, too, are now classified as Vulnerable and at high risk of extinction by the IUCN. DroniMal therefore helps keep an eye on elephants and giraffes, monitoring their movement and migration in the wild to protect them in every way possible. DroniMal is best suited for use in the wild, where we expect only animals and no humans or human-made objects; it can also detect vehicles, flagging potential danger to the wildlife.
Applications like this can help wildlife enthusiasts keep an eye on, and protect, the animals they love in the wild.
The current demo fetches 4 input streams locally and runs detection on them in parallel, averaging 4 FPS per stream. The model is YOLOv5s (release v6), converted to a TensorRT engine, which is then used for inference with NVIDIA DeepStream. The device used for the demo is an NVIDIA Jetson Nano B01.
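As a rough sketch, a multi-stream setup like the one in the demo is described in the DeepStream app config. The group and property names below follow DeepStream 5.1 conventions; the file paths are placeholders, not the actual files from this repo:

```ini
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

# one [sourceN] group per input stream; the demo runs 4 of these
[source0]
enable=1
type=3                        # 3 = local file source
uri=file:///path/to/clip0.mp4
num-sources=1

[streammux]
batch-size=4                  # batch all 4 streams into one inference call
width=1280
height=720

[primary-gie]
enable=1
config-file=config_infer_primary_yoloV5.txt
```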
Please check out the references for a link to the dataset.
Demo Link : YouTube
We will be using JetPack 4.5 and NOT JetPack 4.6 (which ships TensorRT 8), since JetPack 4.5 comes with TensorRT 7, the version supported by the current implementation of the YOLOv5 models.
- Set up your Jetson by following the steps here: Installing JetPack
- Increase the swap size: Video
- To install DeepStream, follow this documentation
- Now, clone this repo and copy it into the DeepStream sources directory: cp -r ./deepstream_yolo_wildlife /opt/nvidia/deepstream/deepstream-5.1/sources/
- Run this to compile the custom lib: CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
- Now, from inside deepstream_yolo_wildlife, run: deepstream-app -c deepstream_app_config.txt
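For reference, the nvinfer config that [primary-gie] points to is what ties the compiled custom lib and the TensorRT engine together. The snippet below is a sketch using standard DeepStream 5.1 property names; the file names and class count are assumptions for illustration, not taken from this repo:

```ini
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373   # 1/255, standard YOLO preprocessing
model-engine-file=yolov5s.engine         # TensorRT engine built from YOLOv5s
labelfile-path=labels.txt                # e.g. elephant, giraffe, vehicle
batch-size=4
num-detected-classes=3
# the lib compiled with `make -C nvdsinfer_custom_impl_Yolo` above
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseYolo
```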
If you are interested in converting your own custom-trained YOLOv5 model to TensorRT and running it with DeepStream, follow this blog to DIY.
Dataset: Lila - Conservation Drones
Object detection model: YOLOv5
TensorRT: YOLOv5
DeepStream: 5.1
This work can be extended by adding more classes to detect. One can also add a tracking mechanism to track objects across the video streams. Furthermore, a Jetson Nano can be mounted on the drone itself to perform video analytics live!
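One low-effort way to get the tracking suggested above is DeepStream's built-in [tracker] group in the app config. A minimal sketch for DeepStream 5.1, using the KLT tracker library shipped with that release (the width/height values here are illustrative):

```ini
[tracker]
enable=1
tracker-width=640
tracker-height=384
# DeepStream 5.1 ships KLT and NvDCF trackers; KLT is the lightest for a Nano
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
enable-batch-process=1
```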