This demo shows how to run a MobileNetV3 Large PaddlePaddle model with OpenVINO Runtime. Instead of exporting the PaddlePaddle model to ONNX and converting it to OpenVINO Intermediate Representation (OpenVINO IR) format with Model Optimizer, you can now read the Paddle model directly, without any conversion step.
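As a minimal sketch of what direct loading looks like (the model path below is hypothetical; point `read_model` at your own `.pdmodel` file):

```python
# A minimal sketch of reading a PaddlePaddle model directly with OpenVINO Runtime.
# The model path is hypothetical; substitute the path to your .pdmodel file.
from openvino.runtime import Core

core = Core()
# No ONNX export or Model Optimizer step: the .pdmodel file is read as-is.
model = core.read_model(model="model/inference.pdmodel")
compiled_model = core.compile_model(model=model, device_name="CPU")
print(compiled_model.inputs, compiled_model.outputs)
```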
This tutorial also covers new features in OpenVINO 2022.1 (each is illustrated with a short sketch after this list), including:
- Preprocessing API
- Directly Loading a PaddlePaddle Model
- Auto-Device Plugin
- AsyncInferQueue Python API
- Performance Hints:
  - LATENCY Mode
  - THROUGHPUT Mode
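The Preprocessing API lets you embed steps such as layout conversion and normalization into the model itself, so the application can feed raw images. A hedged sketch follows; the model path, layouts, and mean/scale values are assumptions for illustration:

```python
# A sketch of the OpenVINO 2022.1 Preprocessing API (values are illustrative).
from openvino.preprocess import PrePostProcessor
from openvino.runtime import Core, Layout, Type

core = Core()
model = core.read_model(model="model/inference.pdmodel")  # hypothetical path

ppp = PrePostProcessor(model)
# Describe the tensor the application will actually provide: uint8 NHWC images.
ppp.input().tensor().set_element_type(Type.u8).set_layout(Layout("NHWC"))
# Declare the layout the model expects, so OpenVINO inserts the transpose.
ppp.input().model().set_layout(Layout("NCHW"))
# Embed normalization into the graph (mean/scale values are assumptions).
ppp.input().preprocess() \
    .convert_element_type(Type.f32) \
    .mean([127.5, 127.5, 127.5]) \
    .scale([127.5, 127.5, 127.5])
model = ppp.build()  # returns the model with preprocessing baked in
```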
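The Auto-Device plugin and performance hints work together: pass `"AUTO"` as the device name and let a `PERFORMANCE_HINT` configure device-specific settings for you. A minimal sketch, again with a hypothetical model path:

```python
# A sketch of the AUTO device plugin combined with performance hints.
from openvino.runtime import Core

core = Core()
model = core.read_model(model="model/inference.pdmodel")  # hypothetical path

# LATENCY optimizes single-request response time; THROUGHPUT maximizes
# inferences per second by running more requests in parallel.
compiled_latency = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})
compiled_throughput = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})

# The hint chooses the low-level settings; query the recommended request count:
print(compiled_throughput.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS"))
```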
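Finally, the `AsyncInferQueue` Python API manages a pool of inference requests and invokes a callback as each one finishes, which pairs naturally with THROUGHPUT mode. In this sketch, the model path, input shape, and random input data are assumptions for illustration:

```python
# A sketch of the AsyncInferQueue Python API (input data is illustrative).
import numpy as np
from openvino.runtime import AsyncInferQueue, Core

core = Core()
model = core.read_model(model="model/inference.pdmodel")  # hypothetical path
compiled_model = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})

# A pool of 4 parallel infer requests.
infer_queue = AsyncInferQueue(compiled_model, 4)

def on_done(request, frame_id):
    # Called when one inference completes; frame_id is the userdata we passed.
    result = request.get_output_tensor(0).data
    print(f"frame {frame_id}: top class {result.argmax()}")

infer_queue.set_callback(on_done)

for i in range(8):
    # Shape is an assumption; match your model's actual input shape.
    image = np.random.rand(1, 3, 224, 224).astype(np.float32)
    infer_queue.start_async({0: image}, i)

infer_queue.wait_all()  # block until every queued request has finished
```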
If you have not installed all required dependencies, follow the Installation Guide.