Nauta lets you test your trained model using TensorFlow Serving or OpenVINO Model Server (OVMS). OVMS is an OpenVINO serving component that hosts the OpenVINO inference runtime.
For guidance on using Inference Testing to evaluate an experiment, refer to the topics below.
- For the `nctl predict` command, its subcommands, and parameter information, refer to predict Commands.
For how-to instructions for TensorFlow Serving:

- Refer to Batch Inference Example for running batch inference.
- Refer to Stream Inference Example for running streaming inference.
To run prediction on OpenVINO Model Server, refer to Inference Example on OpenVINO Model Server.
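As a quick orientation before consulting the topics above, the typical `nctl predict` workflow can be sketched as below. The instance name, model path, and flag spellings here are illustrative assumptions, not verified syntax; consult predict Commands or `nctl predict --help` for the authoritative parameter list.

```shell
# Sketch of a prediction workflow with the nctl CLI (names and flags are
# assumptions for illustration; verify against the predict Commands topic).

# Launch a prediction instance that serves a previously trained model.
nctl predict launch -n my-prediction -m /mnt/output/my-experiment/my-model

# List prediction instances to confirm the new one is running.
nctl predict list

# Tear the instance down when testing is finished.
nctl predict cancel my-prediction
```

Batch and stream inference follow the same pattern with the `batch` and `stream` subcommands, as described in the how-to topics referenced above.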