Collection of Custom Operations using OpenVINO Extensibility Mechanism

This module provides a guide to, and implementations of, several custom operations for the Intel OpenVINO runtime using its Extensibility Mechanism.

Typical use cases where OpenVINO custom operations are applicable:

  • You have an ONNX model that contains an operation not supported by OpenVINO.
  • You have a PyTorch model, convertible to ONNX, with an operation not supported by OpenVINO.
  • You want to replace a subgraph in an ONNX model with a single custom operation that is supported by OpenVINO.

More specifically, here we implement custom OpenVINO operations that add support for the following native PyTorch operations:

The module also contains the conversion extension translate_sentencepiece_tokenizer and the operation extension SentencepieceTokenizer, which add support for the tokenization part of the TensorFlow universal-sentence-encoder-multilingual model. The conversion extension changes the input format of the model, so the custom SentencepieceTokenizer operation expects a 1-D string tensor packed into a bitstream of a specific format. For details on the format, check the SentencepieceTokenizer code.
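As an illustration, here is a minimal sketch of how such a packed 1-D string tensor could be assembled with NumPy. The layout assumed below ([batch size, begin offsets, end offsets, UTF-8 bytes]) and the pack_strings helper are illustrative assumptions; verify the actual layout against the SentencepieceTokenizer source.

import numpy as np

def pack_strings(strings):
    # Assumed layout (verify against the SentencepieceTokenizer code):
    # [batch_size (i32)] [begin offsets (i32 per string)]
    # [end offsets (i32 per string)] [concatenated UTF-8 bytes]
    encoded = [s.encode("utf-8") for s in strings]
    batch = np.array([len(encoded)], dtype=np.int32)
    ends = np.cumsum([len(e) for e in encoded], dtype=np.int32)
    begins = np.concatenate(([0], ends[:-1])).astype(np.int32)
    chars = np.frombuffer(b"".join(encoded), dtype=np.uint8)
    return np.concatenate(
        [batch.view(np.uint8), begins.view(np.uint8), ends.view(np.uint8), chars]
    )

packed = pack_strings(["hello", "world"])  # 1-D u8 tensor fed to the model input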

It also includes other custom operations introduced by third-party frameworks:

You can find more information about how to create and use OpenVINO extensions to map custom operations from a framework model representation to the OpenVINO representation in the OpenVINO Extensibility documentation.

Build custom OpenVINO operation extension library

The C++ code implementing the custom operations is in the user_ie_extensions directory. You'll have to build an "extension library" from this code so that it can be loaded at runtime. The steps below describe the build process:

  1. Install OpenVINO Runtime for C++.

  2. Build the library:

cd user_ie_extensions
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && cmake --build . --parallel 4

If you need to build only some of the operations, specify them with the -DCUSTOM_OPERATIONS option:

cmake .. -DCMAKE_BUILD_TYPE=Release -DCUSTOM_OPERATIONS="complex_mul;fft"
  • Note that an OpenCV installation is required to build the extension for the fft operation. The other extensions can still be built without OpenCV.

You can also build the extension library as part of the OpenVINO build.

Load and use custom OpenVINO operation extension library

You can use the custom OpenVINO operations by loading the extension library into the OpenVINO Core object at runtime and then loading the model from the ONNX file with the read_model() API. Here's how to do that in Python:

from openvino.runtime import Core

# Create Core and register user extension
core = Core()
core.add_extension('/path/to/libuser_ov_extensions.so')

# Load model from .onnx file directly
model = core.read_model('model.onnx')
compiled_model = core.compile_model(model, 'CPU')
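From here, inference works as with any other compiled model. A minimal sketch, assuming a single input of shape (1, 3, 224, 224) (a placeholder; use your model's actual input shape and dtype):

import numpy as np

# Placeholder input; match your model's real input shape and dtype
input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)
results = compiled_model([input_data])  # returns a dict-like map of output tensors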

You can also generate an OpenVINO IR model with Model Optimizer; just use the extra --extension flag to specify the path to the custom extensions:

mo --input_model model.onnx --extension /path/to/libuser_ov_extensions.so
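Note that the resulting IR still references the custom operations, so the extension library must also be registered with the Core whenever that IR is loaded. A minimal sketch (model.xml is the IR produced by the command above):

from openvino.runtime import Core

core = Core()
core.add_extension('/path/to/libuser_ov_extensions.so')

# The IR references the custom operations, so the extension must be loaded first
model = core.read_model('model.xml')
compiled_model = core.compile_model(model, 'CPU')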