Out of the box, `mlserver` supports the deployment and serving of `catboost` models. By default, it will assume that these models have been serialised using the `save_model()` method.

In this example, we will cover how we can train and serialise a simple model, and then serve it using `mlserver`.
To test the CatBoost Server, first we need to generate a simple CatBoost model using Python.
```python
import numpy as np
from catboost import CatBoostClassifier

# Generate a random training set: 100 samples with 10 integer features each
train_data = np.random.randint(0, 100, size=(100, 10))
train_labels = np.random.randint(0, 2, size=(100))

# Train a small classifier and serialise it to disk
model = CatBoostClassifier(
    iterations=2,
    depth=2,
    learning_rate=1,
    loss_function='Logloss',
    verbose=True,
)
model.fit(train_data, train_labels)
model.save_model('model.cbm')
```
Our model will be persisted as a file named `model.cbm`.
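Although this is not needed for serving, a quick sanity check on the serialised artefact (a sketch reusing the `train_data` array from above) is to load it back and run a local prediction:

```python
from catboost import CatBoostClassifier

# Load the serialised model back from disk and predict on a single sample
loaded = CatBoostClassifier()
loaded.load_model('model.cbm')
print(loaded.predict(train_data[0:1]))
```

If the model loads and predicts without errors, the `model.cbm` file is ready to serve.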
Now that we have trained and saved our model, the next step will be to serve it using `mlserver`. For that, we will need to create 2 configuration files:

- `settings.json`: holds the configuration of our server (e.g. ports, log level, etc.).
- `model-settings.json`: holds the configuration of our model (e.g. input type, runtime to use, etc.).
```python
%%writefile settings.json
{
    "debug": true
}
```
```python
%%writefile model-settings.json
{
    "name": "catboost",
    "implementation": "mlserver_catboost.CatboostModel",
    "parameters": {
        "uri": "./model.cbm",
        "version": "v0.1.0"
    }
}
```
Now that we have our config in place, we can start the server by running `mlserver start .`. This needs to either be run from the same directory where our config files are, or to point to the folder where they are.

```bash
mlserver start .
```

Since this command will start the server and block the terminal, waiting for requests, it will need to be run in the background or on a separate terminal.
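For instance, if we are following along from a notebook, one way to run it in the background (a sketch, assuming the `mlserver` CLI is on our `PATH` and the server listens on the default HTTP port `8080`) is to launch it with `subprocess` and poll the V2 readiness endpoint:

```python
import subprocess
import time

import requests

# Start MLServer as a background process, from the directory holding our config files
server = subprocess.Popen(["mlserver", "start", "."])

# Poll the V2 readiness endpoint until the server is up (or we give up)
for _ in range(30):
    try:
        if requests.get("http://localhost:8080/v2/health/ready").status_code == 200:
            break
    except requests.ConnectionError:
        pass
    time.sleep(1)
```

When we are done, the background process can be stopped with `server.terminate()`.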
We now have our model being served by `mlserver`. To make sure that everything is working as expected, let's send a request from our test set.

For that, we can use the Python types that `mlserver` provides out of the box, or we can build our request manually.
```python
import requests
import numpy as np

# Generate a single test sample with the same shape as the training data
test_data = np.random.randint(0, 100, size=(1, 10))
x_0 = test_data[0:1]

# Build a V2 inference protocol request by hand
inference_request = {
    "inputs": [
        {
            "name": "predict-prob",
            "shape": x_0.shape,
            "datatype": "FP32",
            "data": x_0.tolist()
        }
    ]
}

endpoint = "http://localhost:8080/v2/models/catboost/versions/v0.1.0/infer"
response = requests.post(endpoint, json=inference_request)

print(response.json())
```
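Alternatively, instead of building the request dictionary by hand, here is a sketch using `mlserver`'s own request types and codecs, reusing `x_0` and `endpoint` from the previous cell (this assumes a recent `mlserver` release where `NumpyCodec.encode_input` is available):

```python
import requests
from mlserver.types import InferenceRequest
from mlserver.codecs import NumpyCodec

# Encode the same numpy payload into a typed V2 inference request
typed_request = InferenceRequest(
    inputs=[NumpyCodec.encode_input("predict-prob", x_0)]
)

response = requests.post(endpoint, json=typed_request.dict())
print(response.json())
```

The codec takes care of filling in the `shape`, `datatype` and `data` fields for us, which avoids the hand-rolled dictionary drifting out of sync with the payload.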