Rename model version to model #32

Merged · 3 commits · Jan 19, 2024
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -56,5 +56,5 @@ jobs:
    with:
      stack-name: ${{ matrix.stack-name }}
      python-version: ${{ matrix.python-version }}
-     ref-zenml: ${{ inputs.ref-zenml || 'feature/OSSK-358-add-zenml-model-versions-list-to-cli' }}
+     ref-zenml: ${{ inputs.ref-zenml || 'develop' }}
      ref-template: ${{ inputs.ref-template || github.ref }}
16 changes: 8 additions & 8 deletions README.md
@@ -94,7 +94,7 @@
The training pipeline is designed to create a new Model Control Plane version and promote it to the inference stage once it passes quality assurance at the end of the pipeline. This ensures that we always run inference on a quality-assured Model Control Plane version, with seamless access to that version's artifacts in later inference runs.
This is achieved by providing this configuration in `train_config.yaml`, used to configure our pipeline:
```yaml
-model_version:
+model:
  name: your_product_name
```
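The create-then-promote flow described above can be pictured in plain Python (no ZenML; the version numbers, stage name, and QA gate below are illustrative stand-ins):

```python
# Toy registry: each training run creates a new version; only runs that
# pass quality assurance are promoted to the inference stage.
versions = []

def train_run(qa_passed: bool) -> dict:
    version = {"number": len(versions) + 1, "stage": None}
    versions.append(version)
    if qa_passed:
        version["stage"] = "production"  # promote to the inference stage
    return version

train_run(qa_passed=True)   # version 1, promoted
train_run(qa_passed=False)  # version 2, failed QA, never promoted

# inference resolves the latest version that passed QA
promoted = [v for v in versions if v["stage"] == "production"]
print(promoted[-1]["number"])  # -> 1
```

The point of the gate is visible in the last line: even though version 2 is newer, inference keeps using version 1 until a newer version passes QA.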

@@ -143,13 +143,13 @@
After the steps are executed, we need to collect the results (one best model per each search step):
```python
from zenml import get_step_context

-model_version = get_step_context().model_version
+model = get_step_context().model

best_model = None
best_metric = -1
# consume artifacts attached to current model version in Model Control Plane
for step_name in step_names:
-    hp_output = model_version.get_data_artifact(
+    hp_output = model.get_data_artifact(
        step_name=step_name, name="hp_result"
    )
    model: ClassifierMixin = hp_output.load()
```
@@ -239,7 +239,7 @@
By doing so we ensure that the best-performing version will be used for inference.
The deployment pipeline is designed to run within the inference Model Control Plane version context. This ensures that we always run inference on a quality-assured Model Control Plane version, with seamless access to the artifacts created during that version's training.
This is achieved by providing this configuration in `deployer_config.yaml`, used to configure our pipeline:
```yaml
-model_version:
+model:
  name: your_product_name
  version: production
```
@@ -259,7 +259,7 @@
NOTE: In this template a prediction service is only created for local orchestrators.
The batch inference pipeline is designed to run within the inference Model Control Plane version context. This ensures that we always run inference on a quality-assured Model Control Plane version, with seamless access to the artifacts created during that version's training.
This is achieved by providing this configuration in `inference_config.yaml`, used to configure our pipeline:
```yaml
-model_version:
+model:
  name: your_product_name
  version: production
```
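Note that `version: production` names a stage, not a number. Stage-based resolution can be pictured like this (plain-Python sketch; the version list is invented):

```python
# `version: production` selects whichever numbered model version currently
# holds the "production" stage, not the latest version.
versions = [
    {"number": 1, "stage": None},
    {"number": 2, "stage": "production"},
    {"number": 3, "stage": None},  # newest, but not yet promoted
]

def resolve(versions: list, stage: str) -> dict:
    # return the version currently promoted to the requested stage
    return next(v for v in versions if v["stage"] == stage)

print(resolve(versions, "production")["number"])  # -> 2
```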
@@ -324,18 +324,18 @@
NOTE: On non-local orchestrators a `model` artifact will be loaded into memory to run predictions.
```python
def inference_predict(
    dataset_inf: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
-    model_version = get_step_context().model_version
+    model = get_step_context().model

    # get predictor
-    predictor_service: Optional[MLFlowDeploymentService] = model_version.get_endpoint_artifact(
+    predictor_service: Optional[MLFlowDeploymentService] = model.get_endpoint_artifact(
        "mlflow_deployment"
    ).load()
    if predictor_service is not None:
        # run prediction from service
        predictions = predictor_service.predict(request=dataset_inf)
    else:
        # run prediction from memory
-        predictor = model_version.get_model_artifact("model").load()
+        predictor = model.get_model_artifact("model").load()
        predictions = predictor.predict(dataset_inf)

    predictions = pd.Series(predictions, name="predicted")
```
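Stripped of the MLflow and ZenML machinery, the service-or-memory fallback above is just this pattern (FakeService and FakeModel are stand-ins invented for illustration):

```python
# If a deployment service exists, predict through it; otherwise fall back
# to running the model in memory.
class FakeService:
    def predict(self, request):
        return [round(x) for x in request]

class FakeModel:
    def predict(self, data):
        return [round(x) for x in data]

def run_inference(dataset, service=None, model=None):
    if service is not None:
        # run prediction from the deployed service
        return service.predict(request=dataset)
    # run prediction from memory
    return model.predict(dataset)

print(run_inference([0.2, 0.8], service=FakeService()))  # -> [0, 1]
print(run_inference([0.2, 0.8], model=FakeModel()))      # -> [0, 1]
```

Either path returns the same predictions; the service path is preferred when available because it avoids loading the model into the step's memory.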
2 changes: 1 addition & 1 deletion template/configs/deployer_config.yaml
@@ -21,7 +21,7 @@ steps:
  notify_on_success: False

# configuration of the Model Control Plane
-model_version:
+model:
  name: {{ product_name }}
  version: {{ target_environment }}
2 changes: 1 addition & 1 deletion template/configs/inference_config.yaml
@@ -21,7 +21,7 @@ steps:
  notify_on_success: False

# configuration of the Model Control Plane
-model_version:
+model:
  name: {{ product_name }}
  version: {{ target_environment }}
2 changes: 1 addition & 1 deletion template/configs/train_config.yaml
@@ -31,7 +31,7 @@ steps:
  notify_on_success: False

# configuration of the Model Control Plane
-model_version:
+model:
  name: {{ product_name }}
  license: {{ open_source_license }}
  description: {{ product_name }} E2E Batch Use Case
6 changes: 3 additions & 3 deletions template/steps/deployment/deployment_deploy.py
@@ -42,12 +42,12 @@ def deployment_deploy() -> (
    """
    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
    if Client().active_stack.orchestrator.flavor == "local":
-        model_version = get_step_context().model_version
+        model = get_step_context().model

        # deploy predictor service
        deployment_service = mlflow_model_registry_deployer_step.entrypoint(
-            registry_model_name=model_version.name,
-            registry_model_version=model_version.run_metadata["model_registry_version"].value,
+            registry_model_name=model.name,
+            registry_model_version=model.run_metadata["model_registry_version"].value,
            replace_existing=True,
        )
    else:
6 changes: 3 additions & 3 deletions template/steps/inference/inference_predict.py
@@ -35,12 +35,12 @@ def inference_predict(
        The predictions as pandas series
    """
    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
-    model_version = get_step_context().model_version
+    model = get_step_context().model

    # get predictor
    predictor_service: Optional[
        MLFlowDeploymentService
-    ] = model_version.load_artifact("mlflow_deployment")
+    ] = model.load_artifact("mlflow_deployment")
    if predictor_service is not None:
        # run prediction from service
        predictions = predictor_service.predict(request=dataset_inf)
@@ -50,7 +50,7 @@
            "as the orchestrator is not local."
        )
        # run prediction from memory
-        predictor = model_version.load_artifact("model")
+        predictor = model.load_artifact("model")
        predictions = predictor.predict(dataset_inf)

    predictions = pd.Series(predictions, name="predicted")
@@ -6,7 +6,7 @@ from typing_extensions import Annotated
import pandas as pd
from sklearn.metrics import accuracy_score
from zenml import step, get_step_context
-from zenml.model.model_version import ModelVersion
+from zenml import Model
from zenml.logger import get_logger

logger = get_logger(__name__)

@@ -42,8 +42,8 @@ def compute_performance_metrics_on_current_data(
    logger.info("Evaluating model metrics...")

    # Get model version numbers from Model Control Plane
-    latest_version = get_step_context().model_version
-    current_version = ModelVersion(name=latest_version.name, version=target_env)
+    latest_version = get_step_context().model
+    current_version = Model(name=latest_version.name, version=target_env)

    latest_version_number = latest_version.number
    current_version_number = current_version.number
@@ -1,7 +1,7 @@
# {% include 'template/license_header' %}

from zenml import get_step_context, step
-from zenml.model.model_version import ModelVersion
+from zenml import Model
from zenml.logger import get_logger

from utils import promote_in_model_registry

@@ -44,8 +44,8 @@ def promote_with_metric_compare(
    should_promote = True

    # Get model version numbers from Model Control Plane
-    latest_version = get_step_context().model_version
-    current_version = ModelVersion(name=latest_version.name, version=target_env)
+    latest_version = get_step_context().model
+    current_version = Model(name=latest_version.name, version=target_env)

    current_version_number = current_version.number

@@ -69,8 +69,8 @@

    if should_promote:
        # Promote in Model Control Plane
-        model_version = get_step_context().model_version
-        model_version.set_stage(stage=target_env, force=True)
+        model = get_step_context().model
+        model.set_stage(stage=target_env, force=True)
        logger.info(f"Current model version was promoted to '{target_env}'.")

        # Promote in Model Registry
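The decision in `promote_with_metric_compare` boils down to a rule like the following (plain-Python sketch; the metric values are invented, and whether a tie promotes is a policy choice, assumed here to promote):

```python
# Promote the latest version only if its metric beats (or matches) the
# metric of the version currently serving the target environment.
def should_promote(latest_metric: float, current_metric: float) -> bool:
    return latest_metric >= current_metric

print(should_promote(0.93, 0.90))  # -> True: latest wins, promote it
print(should_promote(0.88, 0.90))  # -> False: keep the current version
```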
@@ -1,7 +1,7 @@
# {% include 'template/license_header' %}

from zenml import get_step_context, step
-from zenml.model.model_version import ModelVersion
+from zenml import Model
from zenml.logger import get_logger

from utils import promote_in_model_registry

@@ -22,13 +22,13 @@ def promote_latest_version(

    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
    # Get model version numbers from Model Control Plane
-    latest_version = get_step_context().model_version
-    current_version = ModelVersion(name=latest_version.name, version=target_env)
+    latest_version = get_step_context().model
+    current_version = Model(name=latest_version.name, version=target_env)
    logger.info(f"Promoting latest model version `{latest_version}`")

    # Promote in Model Control Plane
-    model_version = get_step_context().model_version
-    model_version.set_stage(stage=target_env, force=True)
+    model = get_step_context().model
+    model.set_stage(stage=target_env, force=True)
    logger.info(f"Current model version was promoted to '{target_env}'.")

    # Promote in Model Registry
4 changes: 2 additions & 2 deletions template/steps/training/model_trainer.py
@@ -84,8 +84,8 @@ def model_trainer(
    if model_registry:
        versions = model_registry.list_model_versions(name=name)
        if versions:
-            model_version = get_step_context().model_version
-            model_version.log_metadata({"model_registry_version": versions[-1].version})
+            model_ = get_step_context().model
+            model_.log_metadata({"model_registry_version": versions[-1].version})
    ### YOUR CODE ENDS HERE ###

    return model
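The metadata bookkeeping in this step can be mimicked without ZenML; `run_metadata` below is a plain dict standing in for the model version's metadata store, and the registry history is invented:

```python
# After registering the trained model, remember which registry version it
# received so later steps (e.g. deployment) can look it up.
run_metadata = {}

def log_metadata(metadata: dict) -> None:
    run_metadata.update(metadata)

registry_versions = ["1", "2", "3"]  # invented registry history
if registry_versions:
    log_metadata({"model_registry_version": registry_versions[-1]})

print(run_metadata["model_registry_version"])  # -> 3
```

This is why `deployment_deploy` above can read `model.run_metadata["model_registry_version"]`: the trainer wrote it there first.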
@@ -25,17 +25,17 @@ def hp_tuning_select_best_model(
        The best possible model class and its parameters.
    """
    ### ADD YOUR OWN CODE HERE - THIS IS JUST AN EXAMPLE ###
-    model_version = get_step_context().model_version
+    model = get_step_context().model

    best_model = None
    best_metric = -1
    # consume artifacts attached to current model version in Model Control Plane
    for step_name in step_names:
-        hp_output = model_version.get_data_artifact("hp_result")
-        model: ClassifierMixin = hp_output.load()
+        hp_output = model.get_data_artifact("hp_result")
+        model_: ClassifierMixin = hp_output.load()
        # fetch metadata we attached earlier
        metric = float(hp_output.run_metadata["metric"].value)
        if best_model is None or best_metric < metric:
-            best_model = model
+            best_model = model_
    ### YOUR CODE ENDS HERE ###
    return best_model
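The selection loop in this step is a plain arg-max over attached metrics. As a standalone sketch, with dicts in place of ZenML artifacts (step names, model labels, and metric values invented):

```python
# One candidate model per hyperparameter-tuning step; keep the one with
# the highest attached metric.
hp_results = {
    "hp_tuning_rf": {"model": "rf-model", "metric": 0.91},
    "hp_tuning_lr": {"model": "lr-model", "metric": 0.87},
}

best_model, best_metric = None, -1.0
for step_name, hp_output in hp_results.items():
    # fetch the metric metadata we attached earlier
    metric = float(hp_output["metric"])
    if best_model is None or best_metric < metric:
        best_model, best_metric = hp_output["model"], metric

print(best_model)  # -> rf-model
```

Note also why the PR renames the loaded artifact to `model_`: after the rename, `model` already refers to the Model Control Plane object, so reusing the name for the loaded classifier would shadow it.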