OCR model example using two stage pipeline #563

Draft · wants to merge 6 commits into `gen3`
56 changes: 32 additions & 24 deletions gen3/neural-networks/advanced-examples/ocr/ocr/README.md
@@ -1,31 +1,39 @@
## Text Detection + Optical Character Recognition (OCR) Pipeline
# Overview
This example runs a two-stage text detection and OCR pipeline. It uses the PaddlePaddle [text detection]() and [text recognition (OCR)](https://hub.luxonis.com/ai/models/9ae12b58-3551-49b1-af22-721ba4bcf269?view=page) models from the HubAI Model Zoo. The recognized text is visualized on an adjacent white image at the locations where it was detected. The example showcases how easily a two-stage pipeline can be built with DepthAI.
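
As a rough illustration of the two-stage idea, the pipeline skeleton might look like the sketch below. This is a minimal outline, not the example's actual code: it assumes the DepthAI v3 API and the `ParsingNeuralNetwork` helper from depthai-nodes, and the model slug is a placeholder.

```python
# Minimal two-stage sketch (illustrative only; see main.py for the real pipeline).
import depthai as dai
from depthai_nodes.node import ParsingNeuralNetwork  # assumed import path

with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()

    # Stage 1: text detection on the full frame.
    # The model slug is a placeholder, not necessarily the one the example uses.
    detector = pipeline.create(ParsingNeuralNetwork).build(
        camera, "luxonis/paddle-text-detection"
    )

    # Stage 2 (conceptual): each detected region is cropped (e.g. with an
    # ImageManip node driven by the detections) and fed to the OCR model,
    # so recognition cost grows with the number of detected text regions.

    detections_queue = detector.out.createOutputQueue()
    pipeline.start()
    while pipeline.isRunning():
        detections = detections_queue.get()  # parsed text detections
        # ...crop the detected regions, run OCR on each, visualize the text...
```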

This pipeline implements text detection (EAST) followed by optical character recognition of the detected text.

## Installation
**WARNING:** As of DepthAI alpha10, this example only works on OAK4 devices.

```
python3 -m pip install -r requirements.txt
```

## Usage

Run the application

```
python3 main.py
```
# Installation
Running this example requires a **Luxonis OAK4 device** connected to your computer. You can find more information about the supported devices and the setup instructions in our [Documentation](https://rvc4.docs.luxonis.com/hardware).
Moreover, you need to prepare a **Python 3.10** environment with the [DepthAI](https://pypi.org/project/depthai/) and [DepthAI Nodes](https://pypi.org/project/depthai-nodes/) packages installed. You can do this by running:
> **Contributor:** Where does the py3.10 dependency come from, since depthai and depthai-nodes should both work with 3.8?

> **Author:** I left it the same as we have in the general README.

```bash
pip install -r requirements.txt
```

## Example Results

Once running, point the camera at text of interest to get the detections, the recognized text in the detected areas, and the locations of the text in pixel space.

Note that the more text there is in the frame, the slower the pipeline runs, since OCR is performed on every detected region of text.
You can see this variance in the examples below:


![Text Detection + OCR on DepthAI](https://user-images.githubusercontent.com/32992551/105749743-13febe00-5f01-11eb-8b5f-dca801f5d125.png)

![Text Detection + OCR on DepthAI](https://user-images.githubusercontent.com/32992551/105749667-f6315900-5f00-11eb-92bd-a297590adedc.png)
# Usage
Inference is run with a simple CLI call:
```bash
python3 main.py \
--device ... \
--media ...
```

![Text Detection + OCR on DepthAI](https://user-images.githubusercontent.com/32992551/105749638-eb76c400-5f00-11eb-8e9a-18e550b35ae4.png)
The relevant arguments:
- **--device** [OPTIONAL]: DeviceID or IP of the camera to connect to.
By default, the first locally available device is used.
- **--media** [OPTIONAL]: Path to the media file to be used as input.
Currently, only video files are supported, but we plan to add support for more formats (e.g. images) in the future.
By default, camera input is used.
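
For illustration, a minimal way these flags could be parsed (an assumption for this sketch; main.py's actual argument handling may differ):

```python
import argparse

parser = argparse.ArgumentParser(description="Two-stage text detection + OCR example")
parser.add_argument("-d", "--device", default=None,
                    help="DeviceID or IP of the camera; first available device if omitted")
parser.add_argument("--media", default=None,
                    help="Path to a video file used as input; live camera if omitted")
args = parser.parse_args()
```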

Running the script downloads the models, creates a DepthAI pipeline, runs inference on the camera input or the provided media, and displays the results in the **DepthAI visualizer**.
The visualizer runs in the browser at `http://localhost:8082`.
If you connect from a different client, replace `localhost` with the correct hostname.
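
The visualizer hookup presumably resembles the sketch below (assuming DepthAI v3's `RemoteConnection`; the topic name and the `detector` handle are illustrative):

```python
import depthai as dai

# Serve the DepthAI visualizer frontend on port 8082, as referenced above.
visualizer = dai.RemoteConnection(httpPort=8082)
# ...build the pipeline (camera, detection, OCR)...
# Expose a stream as a topic the browser client can subscribe to:
# visualizer.addTopic("Detections", detector.out)
# visualizer.registerPipeline(pipeline)
```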

## Example
To run the example, simply run the following command:
```bash
python3 main.py \
-d <<device ip / mxid>>
```
