VertexAI Model-Registry & Model-Deployer #3161

base: develop

Conversation
- Update model source URI retrieval in VertexAIModelRegistry.
- Enhance BaseModelDeployer to check and start inactive services.
- Set default replica counts to 1 and sync to False in VertexBaseConfig.
- Rename and update documentation for deployment service creation in VertexModelDeployer.
Left initial thoughts. Also should we add these to the 1-stack deployment of the GCP full stack endpoint?
Resolved review threads:

- src/zenml/integrations/gcp/flavors/vertex_model_deployer_flavor.py
- src/zenml/integrations/gcp/model_registries/vertex_model_registry.py
```python
self.setup_aiplatform()
try:
    model_version = aiplatform.ModelVersion(
        model_name=f"{name}@{version}"
```
Above, for the display name, we use `name_version`. Are you sure this is the correct name to delete? Where do we specify it?
I have to revive this thread too: you use `name@version` here, but this is different from the convention you use when you create the model version, which is `name_version`.

Besides this, I strongly suggest that you stop using this flat naming scheme in Vertex AI and actually start using the Vertex AI model versions, properly mapping them to ZenML model versions instead.
After reading the Vertex AI docs, it was apparent to me that they actually support this type of syntax, where `X@Y` really means version `Y` of model `X`. However, this only works with model IDs, not with the model display name.
```diff
@@ -62,14 +63,24 @@ class ModelRegistryModelMetadata(BaseModel):
     model and its development process.
     """

     zenml_version: Optional[str] = None
     _managed_by: str = "zenml"
```
Neither this field nor its associated property is actually used anywhere. Is there something missing?
It will be returned as part of the key-value metadata when the `ModelRegistryModelMetadata` object is serialized.
Then why not call it `managed_by`?
```shell
# Register the model deployer
zenml model-deployer register vertex_deployer \
    --flavor=vertex \
    --location=us-central1

# Connect the model deployer to the service connector
zenml model-deployer connect vertex_deployer --connector vertex_deployer_connector
```
Suggested change:

```shell
# Register the model deployer and connect it to the service connector
zenml model-deployer register vertex_deployer \
    --flavor=vertex \
    --location=us-central1 \
    --connector vertex_deployer_connector
```
```markdown
* Integrating model serving with other GCP services

{% hint style="warning" %}
The Vertex AI Model Deployer requires a Vertex AI Model Registry to be present in your stack. Make sure you have configured both components properly.
```
You should link the model registry page here too, for easier access.
```python
version: Optional[str] = None
serving_container_image_uri: Optional[str] = None
artifact_uri: Optional[str] = None
model_id: Optional[str] = None
```
You may have seen this warning when you import this class:

```
/home/stefan/aspyre/src/zenml/.venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field "model_id" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
```

I think you should consider doing what the warning says, to keep the logs clear of unwanted warnings. You can read more about it here: https://docs.pydantic.dev/latest/api/config/#pydantic.config.ConfigDict.protected_namespaces
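A minimal sketch of the fix the warning suggests (the class name here is just for illustration):

```python
from typing import Optional

from pydantic import BaseModel, ConfigDict


class VertexBaseConfig(BaseModel):
    # Opt out of Pydantic's protected "model_" namespace so that fields
    # like `model_id` no longer trigger a UserWarning on import.
    model_config = ConfigDict(protected_namespaces=())

    model_id: Optional[str] = None
```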
```markdown
1. **Stack Requirements**:
   - Requires a Vertex AI Model Registry in the stack
   - All stack components must be non-local
```
I might be mistaken, but this also works with a local orchestrator.
```diff
@@ -50,3 +46,63 @@ class SklearnMaterializer(CloudpickleMaterializer):
         TransformerMixin,
     )
     ASSOCIATED_ARTIFACT_TYPE: ClassVar[ArtifactType] = ArtifactType.MODEL

     def load(self, data_type: Type[Any]) -> Any:
         """Reads a sklearn model from pickle file with backward compatibility.
```
Aside from the comment above, I have another generic observation about this type of change.

The problem here is that the model serialization is not in the format expected by the model deployer (a different filename is expected). This sort of problem will happen more often than you think, so pushing the solution all the way down to the materializer will not scale well, because different deployers may expect different formats, file names, or file structures.

This is a model format conversion problem and it should generally be solved by other means:

- make the materializer concept aware of "format" - i.e. the same artifact type / data type can support different formats, implemented by different materializers or even by the same materializer - and use materializers to convert the model from one format to another in the model registry or even in the model deployer
- implement N-to-M model conversion either as a feature included in each model registry / model deployer, as a built-in ZenML feature, as an integration feature, or all of the above at the same time
- something we haven't thought of yet
```python
) -> List[RegisteredModel]:
    """List models in the Vertex AI model registry."""
    self.setup_aiplatform()
    filter_expr = 'labels.managed_by="zenml"'
```
This says `managed_by`, but the attribute in the `ModelRegistryModelMetadata` class is called `_managed_by`.

Generally, I think you should get rid of the `_managed_by` attribute in `ModelRegistryModelMetadata`, because it is an internal implementation detail that should not be exposed to the user, where it can be messed with. Instead, just add this `managed_by` internal label (and possibly other internal labels, like `stage` and `version`) on top of the ones provided by the user in this implementation, as sketched below.
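A minimal sketch of that idea; the helper name and exact label set are assumptions, not taken from this PR:

```python
from typing import Dict, Optional


def build_vertex_labels(
    user_labels: Optional[Dict[str, str]],
    stage: Optional[str] = None,
    version: Optional[str] = None,
) -> Dict[str, str]:
    """Merge internal ZenML labels on top of user-provided ones."""
    labels = dict(user_labels or {})
    labels["managed_by"] = "zenml"  # internal label, never user-settable
    if stage:
        labels["stage"] = stage
    if version:
        labels["version"] = version
    return labels
```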
```python
UUID_SLICE_LENGTH: int = 8


def sanitize_vertex_label(value: str) -> str:
```
Perfect, you already have a label sanitizer. You should use this in the model registry implementation too.
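For example, a hypothetical call site in the registry, reusing the sanitizer from the diff above:

```python
def sanitize_labels(labels: dict) -> dict:
    # Apply the existing sanitizer to both keys and values before
    # attaching labels to a Vertex AI resource.
    return {
        sanitize_vertex_label(k): sanitize_vertex_label(v)
        for k, v in labels.items()
    }
```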
```python
    return value[:63]


class VertexDeploymentConfig(VertexBaseConfig, ServiceConfig):
```
You have some conflicting information in this class:

- `model_id` comes from `VertexBaseConfig`
- `model_name` and `model_version` come from `ServiceConfig`

You should get rid of `model_id` and just use `model_name` and `model_version`, as do all other model deployers.
```python
# Then get the model
model = aiplatform.Model(
    model_name=self.config.model_id,
    location=self.config.location,
)
```
Getting the model here should be done by fetching the Vertex AI model version that corresponds to the model name and version supplied by the user in the service configuration, e.g.:

```python
model_deployer._get_internal_model_version(
    model_name=self.config.model_name,
    version=self.config.model_version,
)
```
I tried this every which way, and the only successful result I got was when I used the Vertex model ID in it, not the model name:

```python
aiplatform.Model('2943163929137774592', version="1", location="europe-west3")
```

Using the model name doesn't work (returns "google.api_core.exceptions.NotFound: 404 The Model does not exist"):

```python
aiplatform.Model('e2e_use_case_21', version="1", location="europe-west3")
```

However, you don't want the user to have to know or even be aware of the actual Vertex AI model ID to be able to deploy a model; you want them to keep using the model name and version. Meaning that you need to make the conversion yourself and fetch the model by name+version instead, as sketched below.
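A minimal sketch of that lookup; the helper name is assumed for illustration:

```python
from google.cloud import aiplatform


def get_model_by_name_and_version(
    display_name: str, version: str, location: str
) -> aiplatform.Model:
    """Resolve a display name to a model ID, then fetch the given version."""
    matches = aiplatform.Model.list(
        filter=f'display_name="{display_name}"', location=location
    )
    if not matches:
        raise RuntimeError(f"No Vertex AI model named {display_name!r}")
    # `name` holds the numeric resource ID, which the @version syntax accepts.
    return aiplatform.Model(f"{matches[0].name}@{version}", location=location)
```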
```python
from google.api_core import exceptions
from google.cloud import aiplatform
from google.cloud import logging as vertex_logging
```
This module comes from a package that is not currently part of the GCP integration:

```
│ ❱ 22 from google.cloud import logging as vertex_logging │
│   23 from pydantic import BaseModel, Field              │
│   24                                                    │
│   25 from zenml.client import Client                    │
╰─────────────────────────────────────────────────────────╯
ImportError: cannot import name 'logging' from 'google.cloud' (unknown location)
```

You need to add `google-cloud-logging` to the integration's list of package requirements.
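A minimal sketch of where that would go; the class layout and the existing requirement shown are assumptions, not taken from this PR:

```python
# src/zenml/integrations/gcp/__init__.py (hypothetical excerpt)
class GcpIntegration(Integration):
    """Definition of the GCP integration for ZenML."""

    NAME = GCP
    REQUIREMENTS = [
        "google-cloud-aiplatform",  # existing requirement (pin assumed)
        "google-cloud-logging",     # newly needed for `vertex_logging`
    ]
```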
```python
self.logging_client = vertex_logging.Client(
    project=project_id, credentials=credentials
)
```
This doesn't work for me, given that this is a Pydantic model and you can't simply set new attributes that aren't declared in the class definition:

```
ValidationError: 1 validation error for VertexDeploymentService
logging_client
  Object has no attribute 'logging_client' [type=no_such_attribute, input_value=<google.cloud.logging_v2....bject at 0x79c591f5dbd0>, input_type=Client]
    For further information visit https://errors.pydantic.dev/2.8/v/no_such_attribute
```

You have to make this a private attribute like `_logging_client` instead.
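A minimal sketch of the private-attribute approach; the class shape is simplified for illustration:

```python
from typing import Optional

from google.cloud import logging as vertex_logging
from pydantic import BaseModel, PrivateAttr


class VertexDeploymentService(BaseModel):
    # Private attributes are excluded from validation, so they can be
    # assigned at runtime without tripping Pydantic.
    _logging_client: Optional[vertex_logging.Client] = PrivateAttr(default=None)

    def init_logging(self, project_id: str, credentials) -> None:
        self._logging_client = vertex_logging.Client(
            project=project_id, credentials=credentials
        )
```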
```python
aiplatform.init(
    project=project_id,
    location=self.config.location,
    credentials=credentials,
)
```
Just a note that calling this will authenticate the entire Vertex AI library globally. If some other code calls the same function (like instantiating a Vertex orchestrator or Vertex model deployer), it will overwrite it, creating authentication errors that are very difficult to pinpoint and debug.

The right way to do this would be to create an individual client or session with its own credentials and attach it to this object, but clients are far more difficult to work with:

```python
credentials = ...  # Load your custom credentials
project_id = "your-project-id"
location = "us-central1"

# Initialize the ModelServiceClient
client = aiplatform_v1.ModelServiceClient(credentials=credentials)

# Define the parent resource (project and location)
parent = f"projects/{project_id}/locations/{location}"

# List models with optional filtering
filter_expr = "display_name:my-model-name"  # Optional filter expression
models = client.list_models(parent=parent, filter=filter_expr)
```

Instead, I recommend you just check the other Vertex AI stack components and make sure that they always re-initialize the global Vertex AI session before they call any of its APIs, as sketched below.
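A minimal sketch of that pattern; `_get_authentication()` and the class name are assumed helpers, not taken from this PR:

```python
from google.cloud import aiplatform


class VertexModelRegistryBase:
    """Illustrative base: re-init the global session before every API call."""

    def setup_aiplatform(self) -> None:
        # Assumed helper returning (credentials, project_id) for this
        # component's service connector.
        credentials, project_id = self._get_authentication()
        aiplatform.init(
            project=project_id,
            location=self.config.location,
            credentials=credentials,
        )

    def list_models(self):
        self.setup_aiplatform()  # always re-init right before Vertex calls
        return aiplatform.Model.list(filter='labels.managed_by="zenml"')
```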
Update: you might be able to pass the credentials straight to the `Model` object, at least according to the docs:

```python
Model(
    model_name: str,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    version: typing.Optional[str] = None,
)
```
```python
if self.config.existing_endpoint:
    # Use the existing endpoint
    endpoint = aiplatform.Endpoint(
        endpoint_name=self.config.existing_endpoint,
        location=self.config.location,
    )
    logger.info(
        f"Using existing Vertex AI inference endpoint: {endpoint.resource_name}"
    )
else:
    # Create the endpoint
    endpoint_name = self._generate_endpoint_name()
    endpoint = aiplatform.Endpoint.create(
        display_name=endpoint_name,
        location=self.config.location,
        encryption_spec_key_name=self.config.encryption_spec_key_name,
        labels=self.config.get_vertex_deployment_labels(),
    )
    logger.info(
        f"Vertex AI inference endpoint created: {endpoint.resource_name}"
```
It might make sense to not create the endpoint until you verify that the model exists.
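A minimal sketch of that reordering; the model-lookup helper is an assumption:

```python
# Resolve the model first; only touch the endpoint once that succeeds.
model = self._get_model_version(  # assumed helper that raises if missing
    name=self.config.model_name,
    version=self.config.model_version,
)

if self.config.existing_endpoint:
    endpoint = aiplatform.Endpoint(
        endpoint_name=self.config.existing_endpoint,
        location=self.config.location,
    )
else:
    endpoint = aiplatform.Endpoint.create(
        display_name=self._generate_endpoint_name(),
        location=self.config.location,
    )
```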
```python
try:
    version_info = aiplatform.Model.upload(
        artifact_uri=model_source_uri,
        display_name=f"{name}_{version}",
```
I remember one other important thing related to multi-tenancy: imagine that multiple ZenML tenants/servers want to register models in the same region. If they want to use the same model name, there might be a name clash between them (I don't know if you can have two Vertex AI models with the same display name). To avoid this:

- you could use the ZenML server UUID as a label to differentiate, and filter results only by that label
- if the display name also has to be unique, you could prepend the server UUID to the display name, or at least part of it

See the sketch after this list.
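A minimal sketch of both ideas together; the server-ID accessor and label names are assumptions, not taken from this PR:

```python
from zenml.client import Client

# Assumed way to obtain the ZenML server UUID; the exact accessor may differ.
server_id = str(Client().zen_store.get_store_info().id)

# Differentiate resources by server via a label...
labels = {
    "managed_by": "zenml",
    "zenml_server": sanitize_vertex_label(server_id),
}

# ...and, if display names must also be unique, prefix part of the UUID.
display_name = f"{server_id[:8]}_{name}_{version}"
```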
```python
)
else:
    # Create the endpoint
    endpoint_name = self._generate_endpoint_name()
```
Same thing here: multiple ZenML tenants or servers might want to use the same model name, and therefore might clash on the same endpoint name. You could avoid this by using the server UUID to differentiate/filter endpoint resources and also as part of the endpoint name.
Describe changes

TODO

Pre-requisites

Please ensure you have done the following:

- My branch is based on `develop` and the open PR is targeting `develop`. If your branch wasn't based on develop, read the Contribution guide on rebasing branch to develop.

Types of changes