VertexAI Model-Registry & Model-Deployer #3161

Open
wants to merge 33 commits into develop

Conversation

safoinme
Contributor

@safoinme safoinme commented Oct 31, 2024

Describe changes

TODO

  • Polish docs.
  • Upload the project example to the projects repo

Pre-requisites

Please ensure you have done the following:

  • I have read the CONTRIBUTING.md document.
  • If my change requires a change to docs, I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • I have based my new branch on develop and the open PR is targeting develop. If your branch wasn't based on develop, read the Contribution guide on rebasing your branch to develop.
  • If my changes require changes to the dashboard, these changes are communicated/requested.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Other (add details above)

Contributor

coderabbitai bot commented Oct 31, 2024

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@github-actions github-actions bot added internal To filter out internal PRs and issues enhancement New feature or request labels Oct 31, 2024
@safoinme safoinme requested a review from strickvl October 31, 2024 12:50
@htahir1 htahir1 requested review from bcdurak and schustmi October 31, 2024 13:00
Contributor

@htahir1 htahir1 left a comment

Left initial thoughts. Also should we add these to the 1-stack deployment of the GCP full stack endpoint?

src/zenml/integrations/gcp/__init__.py (outdated, resolved)
src/zenml/integrations/gcp/__init__.py (outdated, resolved)
src/zenml/integrations/gcp/services/vertex_deployment.py (outdated, resolved)
self.setup_aiplatform()
try:
    model_version = aiplatform.ModelVersion(
        model_name=f"{name}@{version}"
Contributor

Above, for the display name, we use name_version. Are you sure this is the correct name to delete? Where do we specify it?

Contributor

I have to revive this thread too: you use name@version here, but this is different from the convention you use when you create the model version, which is name_version.

Besides this, I strongly suggest that you stop using this flat naming scheme in VertexAI and actually start using the Vertex AI model versions and properly mapping them to ZenML model versions instead.

Contributor

After reading the VertexAI docs, it was apparent to me that they actually support this type of syntax, where X@Y really means version Y of model X. However, this only works with model IDs, not with the model display name.
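A minimal sketch of the suggested approach, assuming name, version, model_source_uri, image_uri, and location are available from the surrounding registry code:

from google.cloud import aiplatform

# Sketch: create a new Vertex AI model *version* under an existing parent model
# instead of encoding the version into the display name.
existing = aiplatform.Model.list(
    filter=f'display_name="{name}"', location=location
)
registered = aiplatform.Model.upload(
    display_name=name,
    artifact_uri=model_source_uri,
    serving_container_image_uri=image_uri,
    parent_model=existing[0].resource_name if existing else None,
    version_aliases=[f"v{version}"],  # illustrative mapping to the ZenML version
    location=location,
)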

@safoinme safoinme requested review from stefannica and removed request for strickvl and bcdurak November 4, 2024 08:08
Contributor

github-actions bot commented Nov 7, 2024

NLP template updates in examples/e2e_nlp have been pushed.

Contributor

github-actions bot commented Nov 7, 2024

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

@safoinme safoinme requested review from schustmi and htahir1 November 7, 2024 15:07
Contributor

github-actions bot commented Nov 7, 2024

Classification template updates in examples/mlops_starter have been pushed.

Contributor

github-actions bot commented Nov 7, 2024

E2E template updates in examples/e2e have been pushed.

Contributor

github-actions bot commented Nov 7, 2024

NLP template updates in examples/e2e_nlp have been pushed.


gitguardian bot commented Nov 8, 2024

️✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we highly recommend that you revoke them.
Once a secret has been leaked into a git repository, you should consider it compromised, even if it was deleted immediately.
Find here more information about risks.



@@ -62,14 +63,24 @@ class ModelRegistryModelMetadata(BaseModel):
    model and its development process.
    """

    zenml_version: Optional[str] = None
    _managed_by: str = "zenml"
Contributor

Neither this field nor its associated property is actually used anywhere. Is there something missing?

Contributor Author

It will return this as a set of key-value pairs when the ModelRegistryModelMetadata object is called.

Contributor

Then why not call it managed_by?

Contributor

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

@safoinme safoinme requested a review from stefannica November 12, 2024 15:40
Contributor

@hyperlint-ai hyperlint-ai bot left a comment

The style guide flagged several spelling errors that seemed like false positives. We skipped posting inline suggestions for the following words:

  • Deployer
  • MLflow

Comment on lines +47 to +53
# Register the model deployer
zenml model-deployer register vertex_deployer \
--flavor=vertex \
--location=us-central1

# Connect the model deployer to the service connector
zenml model-deployer connect vertex_deployer --connector vertex_deployer_connector
Contributor

Suggested change:

# Register the model deployer and connect it to the service connector
zenml model-deployer register vertex_deployer \
    --flavor=vertex \
    --location=us-central1 \
    --connector vertex_deployer_connector

* Integrating model serving with other GCP services

{% hint style="warning" %}
The Vertex AI Model Deployer requires a Vertex AI Model Registry to be present in your stack. Make sure you have configured both components properly.
Contributor

You should link the model registry page here too, for easier access.

version: Optional[str] = None
serving_container_image_uri: Optional[str] = None
artifact_uri: Optional[str] = None
model_id: Optional[str] = None
Contributor

You may have seen this warning when you import this class:

/home/stefan/aspyre/src/zenml/.venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field "model_id" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.

I think you should consider doing what the warning says, to keep the logs clear of unwanted warnings. You can read more about it here: https://docs.pydantic.dev/latest/api/config/#pydantic.config.ConfigDict.protected_namespaces
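A minimal sketch of that fix, assuming the fields live on a Pydantic v2 model (the class name and field set are illustrative):

from typing import Optional

from pydantic import BaseModel, ConfigDict

class VertexBaseConfig(BaseModel):
    # Clear the protected "model_" namespace so fields like `model_id`
    # no longer trigger the UserWarning at import time.
    model_config = ConfigDict(protected_namespaces=())

    model_id: Optional[str] = None
    serving_container_image_uri: Optional[str] = None
    artifact_uri: Optional[str] = None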


1. **Stack Requirements**:
- Requires a Vertex AI Model Registry in the stack
- All stack components must be non-local
Contributor

I might be mistaken, but this also works with a local orchestrator.

@@ -50,3 +46,63 @@ class SklearnMaterializer(CloudpickleMaterializer):
        TransformerMixin,
    )
    ASSOCIATED_ARTIFACT_TYPE: ClassVar[ArtifactType] = ArtifactType.MODEL

    def load(self, data_type: Type[Any]) -> Any:
        """Reads a sklearn model from pickle file with backward compatibility.
Contributor

Aside from the comment above, I have another generic observation about this type of change.
The problem here is that the model is not serialized in the format expected by the model deployer (a different filename is expected). This sort of problem will happen more often than you think, so pushing the solution all the way into the materializer will not scale well, because you might even have different deployers expecting different formats, file names, or file structures.

This is a model format conversion problem and it should generally be solved using other means:

  • make the materializer concept aware of "format" - i.e. the same artifact type / data type can support different formats implemented by different materializers, or even by the same materializer - and use materializers to convert the model from one format to another in the model registry or even in the model deployer (a hypothetical sketch follows this list)
  • implement N-to-M model conversion either as a feature included in each model registry/ model deployer, or as a built-in ZenML feature, or as an integration feature, or all of them at the same time
  • something we haven't thought of yet
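A purely hypothetical sketch of the first option, layering a format-aware save on top of the existing sklearn materializer; neither model_format nor DEFAULT_FILENAMES exist in ZenML today, and the "vertex" filename is only illustrative:

import os
from typing import Any

import cloudpickle

from zenml.io import fileio
from zenml.integrations.sklearn.materializers.sklearn_materializer import (
    SklearnMaterializer,
)

class FormatAwareSklearnMaterializer(SklearnMaterializer):
    # Hypothetical mapping from a target "format" to the filename its consumer expects.
    DEFAULT_FILENAMES = {
        "zenml": "artifact.pkl",
        "vertex": "model.pkl",
    }

    def save(self, data: Any, model_format: str = "zenml") -> None:
        filename = self.DEFAULT_FILENAMES[model_format]
        with fileio.open(os.path.join(self.uri, filename), "wb") as f:
            cloudpickle.dump(data, f)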

) -> List[RegisteredModel]:
    """List models in the Vertex AI model registry."""
    self.setup_aiplatform()
    filter_expr = 'labels.managed_by="zenml"'
Contributor

This says managed_by, but the attribute in the ModelRegistryModelMetadata class is called _managed_by.

Generally, I think you should get rid of the _managed_by attribute in ModelRegistryModelMetadata, because it is an internal implementation detail that should not be exposed to the user, where it can be tampered with. Instead, just add this managed_by internal label (and possibly other internal labels, like stage and version) on top of the ones provided by the user in this implementation.
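A minimal sketch of that, assuming the registry builds a labels dict for Model.upload() from user metadata (variable names are illustrative):

import zenml

# Start from user-provided metadata, then overlay ZenML-internal labels so they
# cannot be overridden; sanitize everything against Vertex AI's label rules.
labels = {
    sanitize_vertex_label(k): sanitize_vertex_label(str(v))
    for k, v in (metadata.model_dump(exclude_none=True) if metadata else {}).items()
}
labels.update(
    {
        "managed_by": "zenml",
        "zenml_version": sanitize_vertex_label(zenml.__version__),
    }
)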

UUID_SLICE_LENGTH: int = 8


def sanitize_vertex_label(value: str) -> str:
Contributor

Perfect, you already have a label sanitizer. You should use this in the model registry implementation too.

    return value[:63]


class VertexDeploymentConfig(VertexBaseConfig, ServiceConfig):
Contributor

You have some conflicting information in this class:

model_id - comes from VertexBaseConfig
model_name and model_version - come from ServiceConfig

You should get rid of model_id and just use model_name and model_version, as do all other model deployers.

Comment on lines +204 to +208
# Then get the model
model = aiplatform.Model(
    model_name=self.config.model_id,
    location=self.config.location,
)
Contributor

Getting the model here should be done by fetching the VertexAI model version that corresponds to the model name and version supplied by the user in the service configuration, e.g.:

model_deployer._get_internal_model_version(
                model_name=self.config.model_name,
                version=self.config.model_version,
)

Contributor

I tried this every which way, and the only successful result I got was if I used the Vertex model ID in it, not the model name:

aiplatform.Model('2943163929137774592', version="1", location="europe-west3")

Using the model name doesn't work (returns "google.api_core.exceptions.NotFound: 404 The Model does not exist"):

aiplatform.Model('e2e_use_case_21', version="1", location="europe-west3")

However, you don't want the user to have to know or even be aware of the actual Vertex AI model ID to be able to deploy a model; you want them to keep using the model name and version. That means you need to make the conversion yourself and fetch the model by name + version instead.
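A minimal sketch of that conversion, assuming the deployer config carries the user-facing model_name and model_version:

from google.cloud import aiplatform

# Resolve the Vertex AI model ID from the display name, then fetch the version by ID.
candidates = aiplatform.Model.list(
    filter=f'display_name="{self.config.model_name}"',
    location=self.config.location,
)
if not candidates:
    raise RuntimeError(
        f"No Vertex AI model found with display name {self.config.model_name}"
    )
model = aiplatform.Model(
    model_name=candidates[0].resource_name,  # resource name / ID, not display name
    version=self.config.model_version,
    location=self.config.location,
)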


from google.api_core import exceptions
from google.cloud import aiplatform
from google.cloud import logging as vertex_logging
Contributor

This module comes from a package that is not currently part of the GCP integration:

│ ❱  22 from google.cloud import logging as vertex_logging                                         │
│    23 from pydantic import BaseModel, Field                                                      │
│    24                                                                                            │
│    25 from zenml.client import Client                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'logging' from 'google.cloud' (unknown location)

You need to add google-cloud-logging to the integration's list of package requirements.
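For illustration, a hedged sketch of where that requirement would go in the integration definition (the package list and version pin shown are not the actual ones):

from zenml.integrations.integration import Integration

class GcpIntegration(Integration):
    NAME = "gcp"
    REQUIREMENTS = [
        "google-cloud-aiplatform>=1.34.0",  # illustrative pin
        "google-cloud-logging",  # provides `from google.cloud import logging`
    ]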

Comment on lines +139 to +141
self.logging_client = vertex_logging.Client(
    project=project_id, credentials=credentials
)
Contributor

@stefannica stefannica Dec 5, 2024

This doesn't work for me, given that this is a Pydantic model and you can't simply set new attributes that aren't declared in the class definition:

ValidationError: 1 validation error for VertexDeploymentService
logging_client
  Object has no attribute 'logging_client' [type=no_such_attribute, input_value=<google.cloud.logging_v2....bject at 0x79c591f5dbd0>, input_type=Client]
    For further information visit https://errors.pydantic.dev/2.8/v/no_such_attribute

You have to make this a private attribute like _logging_client instead.
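A minimal sketch of the private-attribute approach, assuming the service is a Pydantic v2 model (the method name is illustrative):

from typing import Any, Optional

from google.cloud import logging as vertex_logging
from pydantic import BaseModel, PrivateAttr

class VertexDeploymentService(BaseModel):
    # Private attributes bypass field validation, so they can hold arbitrary
    # runtime objects such as a google-cloud-logging client.
    _logging_client: Optional[vertex_logging.Client] = PrivateAttr(default=None)

    def setup_logging(self, project_id: str, credentials: Any) -> None:
        self._logging_client = vertex_logging.Client(
            project=project_id, credentials=credentials
        )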

Comment on lines +54 to +58
aiplatform.init(
    project=project_id,
    location=self.config.location,
    credentials=credentials,
)
Contributor

Just a note that calling this will authenticate the entire Vertex AI library globally. If there's some other code that calls the same function (like instantiating a Vertex orchestrator or Vertex model deployer), it will overwrite it, creating authentication errors that are very difficult to pinpoint and debug.

The right way to do this would be to create an individual client or session with its own credentials and attach it to this object, but clients are far more difficult to work with:

credentials = ...  # Load your custom credentials
project_id = "your-project-id"
location = "us-central1"

# Initialize the ModelServiceClient
client = aiplatform_v1.ModelServiceClient(credentials=credentials)

# Define the parent resource (project and location)
parent = f"projects/{project_id}/locations/{location}"

# List models with optional filtering
filter_expr = "display_name:my-model-name"  # Optional filter expression
models = client.list_models(parent=parent, filter=filter_expr)

Instead, I recommend you just check the other VertexAI stack components and make sure that they always re-initialize the global VertexAI session before they call any of its APIs.

Contributor

Update: you might be able to pass the credentials straight to the Model object, at least according to the docs:

Model(
    model_name: str,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    version: typing.Optional[str] = None,
)
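Based on that signature, a small sketch of scoping credentials to the individual call instead of the global session (variable names are assumptions):

# Avoids relying on a prior global aiplatform.init() for authentication.
model = aiplatform.Model(
    model_name=model_resource_name,  # full resource name or model ID
    version=model_version,
    location=location,
    credentials=credentials,
)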

Comment on lines +182 to +201
if self.config.existing_endpoint:
    # Use the existing endpoint
    endpoint = aiplatform.Endpoint(
        endpoint_name=self.config.existing_endpoint,
        location=self.config.location,
    )
    logger.info(
        f"Using existing Vertex AI inference endpoint: {endpoint.resource_name}"
    )
else:
    # Create the endpoint
    endpoint_name = self._generate_endpoint_name()
    endpoint = aiplatform.Endpoint.create(
        display_name=endpoint_name,
        location=self.config.location,
        encryption_spec_key_name=self.config.encryption_spec_key_name,
        labels=self.config.get_vertex_deployment_labels(),
    )
    logger.info(
        f"Vertex AI inference endpoint created: {endpoint.resource_name}
Contributor

It might make sense to not create the endpoint until you verify that the model exists.
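A minimal sketch of that ordering, assuming a helper that resolves the configured model and raises if it is missing (the helper name is hypothetical):

# Resolve the model first so a missing model doesn't leave an orphaned endpoint behind.
model = self._get_model_or_raise()  # hypothetical helper
if self.config.existing_endpoint:
    endpoint = aiplatform.Endpoint(
        endpoint_name=self.config.existing_endpoint,
        location=self.config.location,
    )
else:
    endpoint = aiplatform.Endpoint.create(
        display_name=self._generate_endpoint_name(),
        location=self.config.location,
    )
endpoint.deploy(model=model)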

try:
    version_info = aiplatform.Model.upload(
        artifact_uri=model_source_uri,
        display_name=f"{name}_{version}",
Contributor

I remember one other important thing related to multi-tenancy: imagine that multiple zenml tenants/servers might want to register models in the same region. If they want to use the same model name, there might be a name clash between them (I don't know if you can have two Vertex AI models with the same display name). To avoid this:

  • you could use the ZenML server UUID as a label to differentiate, and filter results only by that label (see the sketch after this list)
  • if the display name also has to be unique, you could prepend the server UUID to the display name, or at least part of it
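A hedged sketch of the label option; whether the server ID is available as Client().zen_store.get_store_info().id is an assumption to verify:

from zenml.client import Client

# Tag every model this server registers, then filter on the same label.
server_id = str(Client().zen_store.get_store_info().id)
labels["zenml_server_id"] = sanitize_vertex_label(server_id)

filter_expr = (
    'labels.managed_by="zenml" '
    f'AND labels.zenml_server_id="{sanitize_vertex_label(server_id)}"'
)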

    )
else:
    # Create the endpoint
    endpoint_name = self._generate_endpoint_name()
Contributor

Same thing here: multiple ZenML tenants or servers might want to use the same model name, and therefore might clash on the same endpoint name. You could avoid this by using the server UUID to differentiate/filter endpoint resources and also as part of the endpoint name.
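And a corresponding sketch for endpoint names, reusing the UUID_SLICE_LENGTH constant already defined in this module (the server-ID lookup is the same assumption as above):

def _generate_endpoint_name(self) -> str:
    # Append a slice of the server UUID so endpoints from different ZenML
    # servers sharing a project/region don't collide on the display name.
    server_id = str(Client().zen_store.get_store_info().id)
    return sanitize_vertex_label(
        f"{self.config.model_name}-{server_id[:UUID_SLICE_LENGTH]}"
    )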

Labels
enhancement New feature or request internal To filter out internal PRs and issues

5 participants