This document provides a list of all validated models that are supported by OpenVINO™ integration with TensorFlow. This list is continuously evolving as we enable more operators and models.
Model Name | Supported Devices |
---|---|
Inception V3 | CPU, iGPU, MYRIAD, VAD-M |
Inception_V4 | CPU, iGPU, MYRIAD, VAD-M |
Resnet V1 50 | CPU, iGPU, MYRIAD, VAD-M |
Resnet V2 152 | CPU, iGPU, MYRIAD, VAD-M |
Resnet V2 50 | CPU, iGPU, MYRIAD, VAD-M |
VGG 16 | CPU, iGPU, MYRIAD, VAD-M |
VGG 19 | CPU, iGPU, MYRIAD, VAD-M |
MobileNet_v1_1.0_224 | CPU, iGPU, MYRIAD, VAD-M |
MobileNet_v2_1.4_224 | CPU, iGPU, MYRIAD, VAD-M |
CifarNet | CPU, iGPU, MYRIAD, VAD-M |
LeNet | CPU, iGPU, MYRIAD, VAD-M |
The links to the TensorFlow-Slim models include the pre-trained checkpoint files only. Refer to the TensorFlow-Slim instructions page to run inference or freeze these models. (No pre-trained checkpoint files are provided for CifarNet and LeNet.)
Model Name | Supported Devices |
---|---|
faster_rcnn_inception_resnet_v2_atrous_coco | CPU, iGPU, MYRIAD, VAD-M |
faster_rcnn_inception_v2_coco | CPU, iGPU, MYRIAD, VAD-M |
faster_rcnn_resnet50_coco | CPU, iGPU, MYRIAD, VAD-M |
faster_rcnn_resnet101_coco | CPU, iGPU, MYRIAD, VAD-M |
faster_rcnn_resnet50_lowproposals_coco | CPU, iGPU, MYRIAD, VAD-M |
ssd_inception_v2 | CPU, iGPU, MYRIAD, VAD-M |
ssd_mobilenet_v1 | CPU, iGPU, MYRIAD, VAD-M |
ssd_mobilenet_v1_fpn | CPU, iGPU, MYRIAD, VAD-M |
ssd_mobilenet_v2 | CPU, iGPU, MYRIAD, VAD-M |
ssd_resnet_50_fpn | CPU, iGPU, MYRIAD, VAD-M |
ssdlite_mobilenet_v2 | CPU, iGPU, MYRIAD, VAD-M |
mask_rcnn_inception_resnet_v2_atrous_coco | CPU, iGPU, MYRIAD, VAD-M |
mask_rcnn_inception_v2_coco | CPU, iGPU, MYRIAD, VAD-M |
Pre-trained frozen models are provided for these models.
Model Name | Supported Devices |
---|---|
DenseNet121 | CPU, iGPU, MYRIAD, VAD-M |
DenseNet169 | CPU, iGPU, MYRIAD, VAD-M |
DenseNet201 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB0 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB1 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB2 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB3 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB4 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB5 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB6 | CPU, iGPU, MYRIAD, VAD-M |
EfficientnetB7 | CPU, iGPU, MYRIAD, VAD-M |
InceptionV3 | CPU, iGPU, MYRIAD, VAD-M |
NASNetLarge | CPU, iGPU, MYRIAD, VAD-M |
NASNetMobile | CPU, iGPU, MYRIAD, VAD-M |
ResNet50v2 | CPU, iGPU, MYRIAD, VAD-M |
Please follow the instructions on the Keras Applications page for further information about using these pre-trained models.
Pre-trained frozen model files are provided for only some of these models. For the rest, please refer to the links provided.
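As a sketch of the workflow for these models, the following loads a Keras Applications network and runs inference. The `openvino_tensorflow` import and `set_backend` call follow the package's documented usage but are assumptions here; the snippet falls back to stock TensorFlow when the package is absent:

```python
import numpy as np
import tensorflow as tf

# Importing openvino_tensorflow registers the OpenVINO backend with TensorFlow.
# Assumption: the package exposes set_backend() as in its documented usage;
# if it is not installed, this snippet falls back to stock TensorFlow.
try:
    import openvino_tensorflow as ovtf
    ovtf.set_backend("CPU")  # other backends correspond to the device columns above
except ImportError:
    pass

# Load one of the pre-trained Keras Applications models from the table above.
model = tf.keras.applications.ResNet50V2(weights="imagenet")

# Run inference on a dummy image; real inputs should go through the
# matching preprocess_input function, as shown here.
image = np.random.rand(1, 224, 224, 3).astype(np.float32) * 255.0
preds = model.predict(tf.keras.applications.resnet_v2.preprocess_input(image))
print(preds.shape)
```

The same pattern applies to the other Keras Applications models, with the input size and `preprocess_input` function adjusted per model.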
OpenVINO™ integration with TensorFlow now supports INT8 models quantized using Quantization-Aware Training (QAT) tools such as the OpenVINO™ Neural Network Compression Framework (NNCF) and the TensorFlow Model Optimization Toolkit (TFMOT). This support is currently in preview, and performance optimizations are in progress.
Some examples of NNCF usage to produce quantized models can be found here.
Some quantized models perform better when the environment variable `OPENVINO_TF_CONSTANT_FOLDING` is set to 1 before running inference.
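For example, the variable can be set from Python, provided this happens before the first inference call (the variable name comes from the note above):

```python
import os

# Enable constant folding in OpenVINO™ integration with TensorFlow.
# Per the note above, set this before running inference so the runtime
# picks it up.
os.environ["OPENVINO_TF_CONSTANT_FOLDING"] = "1"
```

Equivalently, export `OPENVINO_TF_CONSTANT_FOLDING=1` in the shell before launching the script.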
[Note: The latest supported TensorFlow versions for NNCF and OpenVINO™ integration with TensorFlow may differ. Users are advised to create a separate virtual environment for quantizing models with NNCF to avoid TensorFlow version incompatibility issues. The quantized models can then be run in the environment that is compatible with OpenVINO™ integration with TensorFlow. NNCF compatible with TensorFlow version 2.4.2 is validated with OpenVINO™ integration with TensorFlow compatible with TensorFlow version 2.5.1.]