Snaps-events experiment #559

Draft · wants to merge 1 commit into base: gen3

44 changes: 44 additions & 0 deletions gen3/neural-networks/advanced-examples/snaps-events/README.md
@@ -0,0 +1,44 @@
# Overview
This example shows how to send snap events from any detection model. Whenever a detection's confidence is above a set threshold, the RGB frame containing that detection is uploaded to your Hub account (at most one snap per configured time interval).
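
The snap producer also reads the `DEPTHAI_HUB_URL` environment variable, so you can point the upload at a non-default Hub endpoint if needed (the value below is a placeholder):
```bash
export DEPTHAI_HUB_URL="<your-hub-url>"
```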

# Installation
Running this example requires a **Luxonis device** connected to your computer. You can find more information about the supported devices and the setup instructions in our [Documentation](https://rvc4.docs.luxonis.com/hardware).
Moreover, you need to prepare a **Python 3.10** environment with the following packages installed (a minimal setup sketch is shown below):
- [DepthAI](https://pypi.org/project/depthai/),
- [DepthAI Nodes](https://pypi.org/project/depthai-nodes/).
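
For reference, a minimal environment setup sketch (assuming `venv` is available):
```bash
python3.10 -m venv .venv
source .venv/bin/activate
python3 -m pip install depthai-nodes  # DepthAI itself is installed from the artifact registry below
```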

## DepthAI
As **DepthAI v3** is not officially released yet, you need to install it from our artifact registry:
```bash
python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-release-local/ depthai==3.0.0a10
```

# Usage
The inference is run using a simple CLI call:
```bash
python3 main.py \
    --model_slug ... \
    --device ... \
    --fps_limit ... \
    --media_path ...
```

The relevant arguments:
- **--model_slug**: A unique HubAI identifier of the model;
- **--device** [OPTIONAL]: DeviceID or IP of the camera to connect to.
  By default, the first locally available device is used;
- **--fps_limit** [OPTIONAL]: The upper limit for camera captures in frames per second (FPS).
  The limit is not used when inferring on media.
  By default, the FPS is not limited.
  If using OAK-D Lite, make sure to set it under 28.5;
- **--media_path** [OPTIONAL]: Path to the media file to be used as input.
  Currently, only video files are supported, but we plan to add support for more formats (e.g. images) in the future.
  By default, camera input is used.

## Example
To try it out, let's run a simple YOLOv6 object detection model on your camera input.
```bash
python3 main.py \
--model_slug luxonis/yolov6-nano:r2-coco-512x288
```
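
To run the same model on a video file instead of the camera, pass the media path (the path below is a placeholder):
```bash
python3 main.py \
    --model_slug luxonis/yolov6-nano:r2-coco-512x288 \
    --media_path <path/to/video.mp4>
```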
62 changes: 62 additions & 0 deletions gen3/neural-networks/advanced-examples/snaps-events/main.py
@@ -0,0 +1,62 @@
import time
from pathlib import Path
import depthai as dai
from depthai_nodes import ParsingNeuralNetwork
from utils.snaps_producer import SnapsProducer
from utils.arguments import initialize_argparser

_, args = initialize_argparser()

if args.fps_limit and args.media_path:
args.fps_limit = None
print(
"WARNING: FPS limit is set but media path is provided. FPS limit will be ignored."
)

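# Remote connection for the web visualizer (port 8082); connect to the requested device or the first available one.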
visualizer = dai.RemoteConnection(httpPort=8082)
device = dai.Device(dai.DeviceInfo(args.device)) if args.device else dai.Device()

with dai.Pipeline(device) as pipeline:
print("Creating pipeline...")

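    # Resolve the model for the connected device's platform and download it from the HubAI model zoo.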
model_description = dai.NNModelDescription(args.model_slug)
platform = pipeline.getDefaultDevice().getPlatformAsString()
model_description.platform = platform
nn_archive = dai.NNArchive(dai.getModelFromZoo(model_description))

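    # Media input: replay the video in a loop and resize/convert frames to the model's input size and frame type.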
if args.media_path:
replay = pipeline.create(dai.node.ReplayVideo)
replay.setReplayVideoFile(Path(args.media_path))
replay.setOutFrameType(dai.ImgFrame.Type.NV12)
replay.setLoop(True)
imageManip = pipeline.create(dai.node.ImageManipV2)
imageManip.setMaxOutputFrameSize(
nn_archive.getInputWidth() * nn_archive.getInputHeight() * 3
)
imageManip.initialConfig.addResize(
nn_archive.getInputWidth(), nn_archive.getInputHeight()
)
imageManip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
if platform == "RVC4":
imageManip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888i)
replay.out.link(imageManip.inputImage)

input_node = (
imageManip.out if args.media_path else pipeline.create(dai.node.Camera).build()
)

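    # ParsingNeuralNetwork (from depthai-nodes) runs the model and decodes its raw output into parsed messages (detections).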
nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(
input_node, nn_archive, fps=args.fps_limit
)

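    # Host node that sends a snap event to Hub when a detection passes the confidence threshold (see utils/snaps_producer.py).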
snaps_producer = pipeline.create(SnapsProducer)
nn_with_parser.passthrough.link(snaps_producer.rgb_frame)
nn_with_parser.out.link(snaps_producer.nn_output)

print("Pipeline created.")

pipeline.start()
visualizer.registerPipeline(pipeline)

while pipeline.isRunning():
time.sleep(1 / 30)
50 changes: 50 additions & 0 deletions gen3/neural-networks/advanced-examples/snaps-events/utils/arguments.py
@@ -0,0 +1,50 @@
import argparse
from typing import Tuple


def initialize_argparser() -> Tuple[argparse.ArgumentParser, argparse.Namespace]:
    """Initialize the argument parser for the script."""
parser = argparse.ArgumentParser()
    parser.description = "General example script to run any model available in HubAI on a DepthAI device. \
All you need is the model slug; the script will download the model from HubAI and create \
the whole pipeline with visualizations. You also need a DepthAI device connected to your computer. \
If using OAK-D Lite, please set the FPS limit to 28."

parser.add_argument(
"-m",
"--model_slug",
help="Slug of the model copied from HubAI.",
required=True,
type=str,
)

parser.add_argument(
"-d",
"--device",
help="Optional name, DeviceID or IP of the camera to connect to.",
required=False,
default=None,
type=str,
)

parser.add_argument(
"-fps",
"--fps_limit",
help="FPS limit for the model runtime.",
required=False,
default=None,
type=int,
)

parser.add_argument(
"-media",
"--media_path",
help="Path to the media file you aim to run the model on. If not set, the model will run on the camera input.",
required=False,
default=None,
type=str,
)
args = parser.parse_args()

return parser, args
41 changes: 41 additions & 0 deletions gen3/neural-networks/advanced-examples/snaps-events/utils/snaps_producer.py
@@ -0,0 +1,41 @@
import depthai as dai
import time
import os

class SnapsProducer(dai.node.ThreadedHostNode):
    """Host node that sends a snap event with the RGB frame when a detection is confident enough."""

    def __init__(self):
        super().__init__()

        # Inputs fed from the pipeline: the passthrough RGB frame and the parsed NN output.
        # (ThreadedHostNode inputs are created with createInput(); assumed DepthAI v3 host-node API.)
        self.rgb_frame = self.createInput()
        self.nn_output = self.createInput()
        self.confidence_threshold: float = 0.6
        self.time_interval: float = 60.0
        self.last_update = time.time()

        self.events_manager = dai.EventsManager()
        self.events_manager.setLogResponse(True)
        # Allow overriding the Hub endpoint through the DEPTHAI_HUB_URL environment variable.
        if os.getenv("DEPTHAI_HUB_URL") is not None:
            self.events_manager.setUrl(os.getenv("DEPTHAI_HUB_URL"))

    def build(
        self,
        confidence_threshold: float = 0.6,
        time_interval: float = 60.0,
    ) -> "SnapsProducer":
        self.confidence_threshold = confidence_threshold
        self.time_interval = time_interval
        self.last_update = time.time()

        return self

    def run(self):
        while self.isRunning():
            rgb_frame = self.rgb_frame.get()
            nn_output = self.nn_output.get()

            for det in nn_output.detections:
                # Upload a snap only for detections above the confidence threshold,
                # and at most once per time interval.
                if det.confidence >= self.confidence_threshold and time.time() > self.last_update + self.time_interval:
                    self.last_update = time.time()
                    print("----------------- EVENT SENT -----------------")
                    self.events_manager.sendSnap("rgb", rgb_frame, [], ["demo"], {"model": "cup-models"})
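# Example (values are illustrative) of creating the producer with non-default settings in a pipeline,
# mirroring how main.py links it:
#   snaps_producer = pipeline.create(SnapsProducer).build(
#       confidence_threshold=0.8, time_interval=30.0
#   )
#   nn_with_parser.passthrough.link(snaps_producer.rgb_frame)
#   nn_with_parser.out.link(snaps_producer.nn_output)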