Tokenizer fix decode (openvinotoolkit#767)
* Added string tensor implementation with explicit pointer unpack

* Started migrating to extension-only support of string operations, with and without string support in OV core. Moved StringTensorUnpack and reworked it to be aligned with the new approach. Reworked the sentencepiece op and translation code to be compatible with several variants of the string tensor representation and with the plugin wrapping hack.

* Started to merge string/tokenizer-related functionality from a dedicated OV branch into contrib, in a form compatible with both master and the branch with string tensor support. Added CaseFoldUTF8 from that branch.

* Renamed CaseFoldUTF8 to the name from the opset proposal (CaseFold); added NormalizeUnicode

* Added a stub for the RegexNormalization operation and a WA for a CPU bug with empty constants; registered StringTensorPack and StringTensorUnpack as OV operations to be able to read IRs containing them

* Implemented Reshape for decomposed string tensors

* Added RaggedTensorPack, a sophisticated stub for RegexSplit, and an overridden Const translator for TF to intercept string constants

* Fixes for both master and element::string branches of OpenVINO; better conditional compilation based on available features in OpenVINO

* Debug output of indices in RaggedTensorPack

* Implemented a stub for WordpieceTokenizer. Supported conversion of a combination of WordpieceTokenizeWithOffsets and LookupTableFindV2 from TensorFlow

* Disabled debug output

* Define default values for custom operations attributes to make attribute initialization optional (needed for core.make_node)

* Added fast_tokenizer lib to the build. Implemented CaseFold based on fast_tokenizer.

* Removed debug output

* Implemented RaggedToDense always in pad_right=true mode and with boolean mask extra output

* Provided real implementations for NormalizeUnicode, RegexNormalization and RegexSplit based on the paddle fast_tokenizer lib. Limited implementation: not all features of the ops and the translated TF ops are covered.

* Implemented WordpieceTokenizer with fast_tokenizer library

* Renamed behaviours to be verbs instead of adjectives

* Added modified version of HF tokenizer parser from Artur; implemented necessary steps to complete HF bert preprocessing conversion (not validated)

* Renamed apply_tokenizer to connect_tokeniser and removed obsolete handling of model name

* CombineSegments is implemented and used in the HF converter. Stitching of the tokenizer and the main model is partially fixed (still produces a topologically incorrect model)

* Fixed stitching of two models by connecting inputs/outputs by name; now BERT and its tokenizer are connected together correctly

* WA for a CPU bug with scalar inputs, correct truncation and dynamic padding, bug fixes for batch processing

* Fixed conversion of HF tokenizers when some of the outputs are omitted. Disabled debug output

* Add BPE Tokenizer

* Add BytesToChars Node for BBPE

* Delete print

* Clip max value for max_length to int32

* Fix RegexNormalization and Splitter, Add Digits Splitter

* Bug fixes

* Add decoding step, BytesToChars refactoring

Has a bug with internal dimension for VocabNode

* Fix some regex bugs for byte-level splitter

* Fix bug with VocabDecoder shape

* Minor changes for natively supported strings

* Suppressed minor warnings about int32 -> unsigned implicit conversion

* Restructured the sentence_piece directory into a tokenizer directory: split all ops, translators and helpers into individual files. To build, use the tokenizer custom op name in cmake instead of sentence_piece.

* Add regex to detokenizer pipeline, all splitters have 5 inputs

* Add Caching for RegexNormalization

* Add Caching for RegexSplit

* Add Wordpiece Cache

* Add NodeFactory

* Fix regex nodes init

* Fix Wordpiece Cache

* Add BPE Cache

* Fix RegexNormalization

* Refactor CombineSegments and Padding

* Refactoring

* Clean-up commented code

* Sentencepiece Model Encoder from Transformers Tokenizer

* Add tests for tokenizers

* Add detokenizer for Sentencepiece models

* Update README.md

* Update README.md

* Update README.md

* OVTokenizer as python package

* Update README.md

* Add sentencepiece detokenizer test

* Unified interface for fast and sentencepiece tokenizers

* Add Full Pipeline example for Sentencepiece

Move greedy decoding pipeline from detokenizer to model (the greedy loop is illustrated in a sketch after this list)

* Update third-party-programs.txt

* Add Constants

* Add C++ pack_strings/unpack_strings functions (the assumed packed layout is sketched after this list)

Refactor greedy decoding

* Move tests to tokenizer dir

* Fix import

* Fix imports

* Sort Imports

* Add Streaming Sentencepiece Decoder

* Change Authors

* Update modules/custom_operations/user_ie_extensions/tokenizer/utils.cpp

Co-authored-by: Zlobin Vladimir <[email protected]>

* Configure tests

* Skip Java Tests

* Add Regression Test

* Skip traceback

* Add Win64 Fast Tokenizer lib

* Fix WorkingDir

* Return TB

* Fix dependencies install

* Add byte tokens handling for sentencepiece

* Drop black, use ruff format instead

* Temp remove tokenizers from windows CI

* CI check

* Compile fast_tokenizers from source code

* Export pack_strings() and unpack_strings()

* Build tokenizer target on windows

* Add icu4c patch

* Added include dir to nlohmann headers

* Fixed compilation on ubuntu 18.04 arm64

* Fixed Windows

* Supported prebuild Fast Tokenizers on all platforms

* Add tiktoken support WIP

* Unskip java tests

* Fixed compilation with re2 on Windows

* Move unpack_strings(), create separate include dir

* openvino_extensions

* Fixed link stage on Windows

* i64 is default tokenizer output type

* Add support for more tiktoken tokenizers

* Check Azure CI

* Fix Azure Win CI

* Define python version for setupvars.bat

* Add support for tiktoken detokenizers

* Add ChatGLM tokenization support.

* Add ChatGLM detokenization and tests

* Add ChatGLM detokenization and tests

* Fix mac sha256

* Skip Lin Java Tests

* Add Mac Tokenizers Tests and Skip Mac Java Step

* Fix Mac SHA

* Del WA for CPU Bug

* Fix Mac CI Pipeline

* Change Mac CI

* Fixed compilation

* Add setupvars to mac CI

* Change detokenizer output type

* Fix SegFault on AddedTokens For BPE tokenizer

* Add SP Space handling for decoder

* Removed SHA for macOS x86_64

* More fixes

* Fixed macos

* Enabled tests

* Fixed warnings

* Use developer package

* Split build

* Update windows.yml

* Added missed IMPLEMENT_OPENVINO_EXTENSION_API

* Update windows.yml

* Update windows.yml

* Update windows.yml

* Update windows.yml

* Update .ci/azure/windows.yml

removed build of fast tokenizers

* Update windows.yml

* Update windows.yml
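
Illustrative sketches (not the extension's actual code) for two of the items above.

First, a rough guess at the packed string representation that `pack_strings()`/`unpack_strings()` round-trip — a single u8 tensor holding an i32 batch size, i32 begin/end offsets and the concatenated UTF-8 bytes. Treat the layout as an assumption and check `user_ie_extensions/tokenizer` for the authoritative implementation.

```python
import numpy as np

def pack_strings(strings):
    # Assumed layout: [batch_size (i32)] [begins (i32 * N)] [ends (i32 * N)] [utf-8 bytes]
    encoded = [s.encode("utf-8") for s in strings]
    begins, ends, offset = [], [], 0
    for chunk in encoded:
        begins.append(offset)
        offset += len(chunk)
        ends.append(offset)
    header = np.array([len(strings), *begins, *ends], dtype=np.int32)
    return np.concatenate([header.view(np.uint8), np.frombuffer(b"".join(encoded), dtype=np.uint8)])

def unpack_strings(packed):
    # Reverse of the assumed layout above.
    n = int(packed[:4].view(np.int32)[0])
    begins = packed[4:4 + 4 * n].view(np.int32)
    ends = packed[4 + 4 * n:4 + 8 * n].view(np.int32)
    chars = packed[4 + 8 * n:].tobytes()
    return [chars[b:e].decode("utf-8") for b, e in zip(begins, ends)]

assert unpack_strings(pack_strings(["hello", "мир"])) == ["hello", "мир"]
```

Second, a minimal Python mirror of the greedy decoding referenced above (argmax over the last position's logits, appended until EOS); the commit moves this logic into the model graph, and the input/output names and shapes below are assumptions, not the wrapped model's actual interface.

```python
import numpy as np

def greedy_decode(compiled_model, input_ids, eos_token_id, max_new_tokens=32):
    # Conceptual equivalent of the in-graph greedy decoding loop.
    tokens = list(input_ids)
    for _ in range(max_new_tokens):
        results = compiled_model({"input_ids": np.array([tokens], dtype=np.int64)})
        logits = next(iter(results.values()))   # assumed shape: [1, seq_len, vocab_size]
        next_token = int(np.argmax(logits[0, -1]))
        tokens.append(next_token)
        if next_token == eos_token_id:
            break
    return tokens
```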

---------

Co-authored-by: Sergey Lyalin <[email protected]>
Co-authored-by: Artur Paniukov <[email protected]>
Co-authored-by: Artur Paniukov <[email protected]>
Co-authored-by: Zlobin Vladimir <[email protected]>
Co-authored-by: Andrei Kochin <[email protected]>
6 people authored Nov 29, 2023
1 parent ebc2ea2 commit dcc05cb
Showing 66 changed files with 6,385 additions and 359 deletions.
13 changes: 10 additions & 3 deletions .ci/azure/linux.yml
@@ -170,14 +170,21 @@ jobs:
source .env3/bin/activate
python -m pip install --upgrade pip
python -m pip install -r $(REPO_DIR)/modules/custom_operations/tests/requirements.txt
cd ${OPENVINO_REPO_DIR}/tools && python -m pip install mo/
python -m pip install $(INSTALL_DIR)/tools/openvino-*.whl
python -m pip install $(OPENVINO_REPO_DIR)/tools/mo/
python -m pip install $(REPO_DIR)/modules/custom_operations/user_ie_extensions/tokenizer/python/.[all]
workingDirectory: $(WORK_DIR)
displayName: 'Create user custom operations env'
displayName: 'Create virtual env'
- script: |
. $(SETUPVARS)
source $(WORK_DIR)/.env3/bin/activate
# need to enable sparse_conv tests with new Open3D release
python -m pytest -k "not sparse_conv" tests/run_tests.py
workingDirectory: $(REPO_DIR)/modules/custom_operations
displayName: 'Custom user operation tests'
- script: |
source $(WORK_DIR)/.env3/bin/activate
python -m pytest --tb=no tokenizers_test.py
workingDirectory: $(REPO_DIR)/modules/custom_operations/user_ie_extensions/tokenizer/python/tests/
displayName: 'Tokenizers extension regression test'
30 changes: 29 additions & 1 deletion .ci/azure/mac.yml
@@ -39,7 +39,7 @@ jobs:
# arm64:
# CMAKE_OSX_ARCHITECTURES: arm64
# About 200% of total time (performance of Mac hosts is unstable)
timeoutInMinutes: 180
timeoutInMinutes: 240

pool:
vmImage: 'macOS-11'
@@ -57,8 +57,18 @@
BIN_DIR: $(OPENVINO_REPO_DIR)/bin/intel64/$(BUILD_TYPE)
INSTALL_DIR: $(WORK_DIR)/install_pkg
SETUPVARS: $(INSTALL_DIR)/setupvars.sh
CUSTOM_OP_LIB: $(BIN_DIR)/libuser_ov_extensions.dylib

steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10.13'
addToPath: true
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
name: setupPython

- script: |
whoami
uname -a
@@ -145,3 +155,21 @@ jobs:
workingDirectory: $(REPO_DIR)/modules/java_api
displayName: 'Java tests'
condition: eq(variables['CMAKE_OSX_ARCHITECTURES'], 'x86_64')
- script: |
python3 -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r $(REPO_DIR)/modules/custom_operations/tests/requirements.txt
python -m pip install $(OPENVINO_REPO_DIR)/tools/mo/
python -m pip install $(INSTALL_DIR)/tools/openvino-*.whl
python -m pip install $(REPO_DIR)/modules/custom_operations/user_ie_extensions/tokenizer/python/.[transformers]
workingDirectory: $(WORK_DIR)
displayName: 'Create virtual env'
- script: |
source $(WORK_DIR)/venv/bin/activate
python -m pytest --tb=no tokenizers_test.py
workingDirectory: $(REPO_DIR)/modules/custom_operations/user_ie_extensions/tokenizer/python/tests/
displayName: 'Tokenizers extension regression test'
condition: False
55 changes: 36 additions & 19 deletions .ci/azure/windows.yml
@@ -47,21 +47,20 @@ jobs:
MODELS_PATH: $(REPO_DIR)\..\testdata
WORK_DIR: $(Pipeline.Workspace)\_w
BUILD_DIR: D:\build
BIN_DIR: $(OPENVINO_REPO_DIR)\bin\intel64\$(BUILD_TYPE)
BUILD_DIR_CONTRIB: D:\build_contrib
MSVS_VARS_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat
MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
INSTALL_DIR: $(WORK_DIR)\install_pkg
SETUPVARS: $(INSTALL_DIR)\setupvars.bat
CUSTOM_OP_LIB: $(BIN_DIR)\user_ov_extensions.dll
CUSTOM_OP_LIB: $(BUILD_DIR_CONTRIB)\user_ie_extensions\user_ov_extensions.dll
GRADLE_VER: 7.1.1
PYTHON_EXE: C:\hostedtoolcache\windows\Python\3.8.2\x64\python.exe

steps:
- script: |
powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
where python3
python3 --version
where python
python --version
where $(PYTHON_EXE)
$(PYTHON_EXE) --version
where java
java -version
wmic computersystem get TotalPhysicalMemory
@@ -74,6 +73,7 @@
- script: |
rd /Q /S $(WORK_DIR) & mkdir $(WORK_DIR)
rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
rd /Q /S $(BUILD_DIR_CONTRIB) & mkdir $(BUILD_DIR_CONTRIB)
displayName: 'Make dir'
- checkout: self
@@ -99,20 +99,20 @@
powershell -command "Expand-Archive -Force ninja-win.zip"
powershell -command "Invoke-WebRequest https://services.gradle.org/distributions/gradle-$(GRADLE_VER)-bin.zip -OutFile gradle-$(GRADLE_VER)-bin.zip"
powershell -command "Expand-Archive -Force gradle-$(GRADLE_VER)-bin.zip"
python -m pip install --upgrade pip
python -m pip install -r $(OPENVINO_REPO_DIR)\src\bindings\python\src\compatibility\openvino\requirements-dev.txt
python -m pip install -r $(OPENVINO_REPO_DIR)\src\bindings\python\requirements.txt
python -m pip install -r $(REPO_DIR)\modules\custom_operations\tests\requirements.txt
python -m pip install $(OPENVINO_REPO_DIR)\tools\mo
$(PYTHON_EXE) -m pip install --upgrade pip
$(PYTHON_EXE) -m pip install -r $(OPENVINO_REPO_DIR)\src\bindings\python\src\compatibility\openvino\requirements-dev.txt
$(PYTHON_EXE) -m pip install -r $(OPENVINO_REPO_DIR)\src\bindings\python\requirements.txt
$(PYTHON_EXE) -m pip install -r $(REPO_DIR)\modules\custom_operations\tests\requirements.txt
$(PYTHON_EXE) -m pip install $(OPENVINO_REPO_DIR)\tools\mo
powershell -command "Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
choco install opencv -y
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
set OpenCV_DIR=C:\tools\opencv\build
call "$(MSVS_VARS_PATH)" && cmake -GNinja ^
call "$(MSVS_VARS_PATH)"
cmake -GNinja ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DBUILD_nvidia_plugin=OFF ^
-DENABLE_OV_TF_FRONTEND=OFF ^
@@ -123,32 +123,49 @@
-DENABLE_INTEL_GPU=OFF ^
-DENABLE_INTEL_GNA=OFF ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_SAMPLES=OFF ^
-DENABLE_OV_ONNX_FRONTEND=ON ^
-DOPENVINO_EXTRA_MODULES=$(REPO_DIR)/modules ^
-DENABLE_PYTHON=ON ^
-DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
-DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
$(OPENVINO_REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake OpenVINO Contrib'
displayName: 'CMake OpenVINO'
- script: dir $(OPENVINO_REPO_DIR)\temp\ /s
displayName: 'List temp SDKs'

- script: call "$(MSVS_VARS_PATH)" && $(WORK_DIR)\ninja-win\ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build OpenVINO Contrib'
displayName: 'Build OpenVINO'

- script: dir $(OPENVINO_REPO_DIR)\bin\ /s
displayName: 'List bin files'

- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install OpenVINO Contrib'
displayName: 'Install OpenVINO'

- script: dir $(INSTALL_DIR) /s
displayName: 'List install files'

- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
set OpenCV_DIR=C:\tools\opencv\build
call "$(SETUPVARS)"
call "$(MSVS_VARS_PATH)"
cmake -GNinja ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
-DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
$(REPO_DIR)\modules\custom_operations
workingDirectory: $(BUILD_DIR_CONTRIB)
displayName: 'CMake OpenVINO Contrib'
- script: call "$(MSVS_VARS_PATH)" && $(WORK_DIR)\ninja-win\ninja
workingDirectory: $(BUILD_DIR_CONTRIB)
displayName: 'Build OpenVINO Contrib'

- script: |
call $(SETUPVARS)
set PATH=$(WORK_DIR)\gradle-$(GRADLE_VER)-bin\gradle-$(GRADLE_VER)\bin;%PATH%
@@ -159,7 +176,7 @@ jobs:
- script: |
call C:\tools\opencv\build\setup_vars_opencv4.cmd
call $(SETUPVARS)
python -m pytest -k "not sparse_conv" tests\run_tests.py
call $(SETUPVARS) -pyver 3.8 && ^
$(PYTHON_EXE) -m pytest -k "not sparse_conv" tests\run_tests.py
workingDirectory: $(REPO_DIR)\modules\custom_operations
displayName: 'Custom user operation tests'
25 changes: 17 additions & 8 deletions modules/custom_operations/CMakeLists.txt
@@ -1,8 +1,17 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

cmake_minimum_required(VERSION 3.13)
project(openvino_extensions)

add_subdirectory(user_ie_extensions)
# Copyright (C) 2022-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

cmake_minimum_required(VERSION 3.13)

project(openvino_extensions)

include(cmake/platforms.cmake)

# Find OpenVINODeveloperPackage first to compile with SDL flags
find_package(OpenVINODeveloperPackage QUIET)
if(NOT OpenVINODeveloperPackage_FOUND)
find_package(OpenVINO REQUIRED COMPONENTS Runtime)
endif()

add_subdirectory(user_ie_extensions)
6 changes: 3 additions & 3 deletions modules/custom_operations/README.md
@@ -36,12 +36,12 @@ The C++ code implementing the custom operation is in the `user_ie_extensions` di
```bash
cd openvino_contrib/modules/custom_operations
mkdir build && cd build
cmake ../user_ie_extensions -DCMAKE_BUILD_TYPE=Release && cmake --build . --parallel 4
cmake ../ -DCMAKE_BUILD_TYPE=Release && cmake --build . --parallel 4
```

If you need to build only some operations specify them with the `-DCUSTOM_OPERATIONS` option:
```bash
cmake ../user_ie_extensions -DCMAKE_BUILD_TYPE=Release -DCUSTOM_OPERATIONS="complex_mul;fft"
cmake ../ -DCMAKE_BUILD_TYPE=Release -DCUSTOM_OPERATIONS="complex_mul;fft"
```

- Please note that [OpenCV](https://opencv.org/) installation is required to build an extension for the [fft](examples/fft) operation. Other extensions can still be built without OpenCV.
@@ -67,5 +67,5 @@ compiled_model = core.compile_model(model, 'CPU')
You also can get OpenVINO IR model with Model Optimizer, just use extra `--extension` flag to specify a path to custom extensions:

```bash
mo --input_model model.onnx --extension /path/to/libuser_ov_extensions.so
ovc model.onnx --extension /path/to/libuser_ov_extensions.so
```
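
A minimal runtime-usage sketch to complement the commands above, assuming the default `libuser_ov_extensions.so` produced by the build and a local `model.onnx`; both paths are placeholders:

```python
from openvino import Core, convert_model

EXT_PATH = "/path/to/libuser_ov_extensions.so"  # placeholder: the library built above

core = Core()
core.add_extension(EXT_PATH)  # register the custom operations with the runtime

# Convert with the extension available, then compile and run as usual.
model = convert_model("model.onnx", extension=EXT_PATH)
compiled_model = core.compile_model(model, "CPU")
```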
89 changes: 89 additions & 0 deletions modules/custom_operations/cmake/platforms.cmake
@@ -0,0 +1,89 @@

# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

if(CMAKE_CL_64)
set(MSVC64 ON)
endif()

if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -dumpmachine
OUTPUT_VARIABLE OPENVINO_GCC_TARGET_MACHINE
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(OPENVINO_GCC_TARGET_MACHINE MATCHES "amd64|x86_64|AMD64")
set(MINGW64 ON)
endif()
endif()

if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(OV_HOST_ARCH X86_64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(OV_HOST_ARCH X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
set(OV_HOST_ARCH AARCH64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(OV_HOST_ARCH ARM)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(OV_HOST_ARCH RISCV64)
endif()

macro(_ov_user_ext_detect_arch_by_processor_type)
if(CMAKE_OSX_ARCHITECTURES AND APPLE)
if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
set(OV_ARCH AARCH64)
elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
set(OV_ARCH X86_64)
elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
set(OV_ARCH UNIVERSAL2)
else()
message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
endif()
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(OV_ARCH X86_64)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*|wasm")
set(OV_ARCH X86)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*|armv8)")
set(OV_ARCH AARCH64)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(OV_ARCH ARM)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
set(OV_ARCH RISCV64)
endif()
endmacro()

macro(_ov_user_ext_process_msvc_generator_platform)
# if cmake -A <ARM|ARM64|x64|Win32> is passed
if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
set(OV_ARCH AARCH64)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM")
set(OV_ARCH ARM)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "x64")
set(OV_ARCH X86_64)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
set(OV_ARCH X86)
else()
_ov_user_ext_detect_arch_by_processor_type()
endif()
endmacro()

if(MSVC64 OR MINGW64)
_ov_user_ext_process_msvc_generator_platform()
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
_ov_user_ext_process_msvc_generator_platform()
else()
_ov_user_ext_detect_arch_by_processor_type()
endif()

set(HOST_${OV_HOST_ARCH} ON)
set(${OV_ARCH} ON)

unset(OV_ARCH)

if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
set(EMSCRIPTEN ON)
endif()

if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN OR CYGWIN))
set(LINUX ON)
endif()
16 changes: 8 additions & 8 deletions modules/custom_operations/tests/run_tests.py
@@ -1,15 +1,15 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from openvino.runtime import Core
from openvino.tools.mo import convert_model
from openvino import Core
from openvino import convert_model

import pytest
import numpy as np
import os


def run_test(ref_inputs, ref_res, test_onnx=False, threshold=1e-5):
def run_test(ref_inputs, ref_res, test_onnx=False, threshold=1e-5):
inputs = {}
shapes = {}
for i in range(len(ref_inputs)):
@@ -22,12 +22,12 @@ def run_test(ref_inputs, ref_res, test_onnx=False, threshold=1e-5):
core = Core()
core.add_extension(ext_path)

net = core.read_model('model.onnx') if test_onnx else convert_model('model.onnx', extensions=ext_path)
net = core.read_model('model.onnx') if test_onnx else convert_model('model.onnx', extension=ext_path)

net.reshape(shapes)
exec_net = core.compile_model(net, 'CPU')
compiled_model = core.compile_model(net, 'CPU')

out = exec_net.infer_new_request(inputs)
out = compiled_model(inputs)
out = next(iter(out.values()))

assert ref_res.shape == out.shape
@@ -70,7 +70,7 @@ def test_sparse_conv(in_channels, filters, kernel_size, out_pos):
from examples.sparse_conv.export_model import export

inp, ref = export(num_inp_points=1000, num_out_points=out_pos, max_grid_extent=4, in_channels=in_channels,
filters=filters, kernel_size=kernel_size, transpose=False)
filters=filters, kernel_size=kernel_size, transpose=False)
run_test(inp, ref, test_onnx=True, threshold=1e-4)


@@ -82,7 +82,7 @@ def test_sparse_conv_transpose(in_channels, filters, kernel_size, out_pos):
from examples.sparse_conv.export_model import export

inp, ref = export(num_inp_points=1000, num_out_points=out_pos, max_grid_extent=4, in_channels=in_channels,
filters=filters, kernel_size=kernel_size, transpose=True)
filters=filters, kernel_size=kernel_size, transpose=True)
run_test(inp, ref, test_onnx=True, threshold=1e-4)

