Fixing links after the docs update (#76)
* fixing links

* fixing even more links due to previous changes
bcdurak authored Aug 16, 2023
1 parent 1f3048e commit 6372929
Showing 11 changed files with 31 additions and 36 deletions.
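The bulk of this commit is a mechanical URL migration (e.g. `docs.zenml.io/user-guide/component-guide/…` → `docs.zenml.io/stacks-and-components/component-guide/…`, and the old `platform-guide` deployment page → `user-guide/starter-guide/switch-to-production`). A sweep like this can be scripted; the helper below is a hypothetical sketch of that prefix-mapping approach, not part of the commit — the mappings are taken from the diff, everything else is an assumption:

```python
# Old-prefix -> new-prefix pairs observed in this commit's diff.
# A real migration would order entries longest-prefix-first to avoid
# a short prefix clobbering a longer, more specific one.
LINK_MAP = {
    "https://docs.zenml.io/user-guide/component-guide/":
        "https://docs.zenml.io/stacks-and-components/component-guide/",
    "https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml":
        "https://docs.zenml.io/user-guide/starter-guide/switch-to-production",
    "https://docs.zenml.io/user-guide/advanced-guide/schedule-pipeline-runs":
        "https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs",
}

def rewrite_links(markdown: str) -> str:
    """Replace every outdated docs URL prefix with its new location."""
    for old, new in LINK_MAP.items():
        markdown = markdown.replace(old, new)
    return markdown
```

Running such a helper over each README and reviewing the resulting diff is one way a link-fix commit like this one could be produced.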
2 changes: 0 additions & 2 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -35,8 +35,6 @@ general guidelines that cover both:
at [[email protected]](mailto:[email protected]), monitored by
our security team.
- Search for existing Issues and PRs before creating your own.
-- Search on the [public Crowd Dev](https://open.crowd.dev/zenml) page to see
-  if the issue has been addressed before.
- We work hard to make sure issues are handled on time, but it could take a
while to investigate the root cause depending on the impact.

8 changes: 4 additions & 4 deletions customer-churn/README.md
@@ -42,7 +42,7 @@ We showcase two solutions to this problem:

## Deploy pipelines to production using Kubeflow Pipelines

-We will be using ZenML's [Kubeflow](https://github.com/zenml-io/zenml/tree/main/examples/kubeflow_pipelines_orchestration) integration to deploy pipelines to production using Kubeflow Pipelines on the cloud.
+We will be using ZenML's [Kubeflow](https://docs.zenml.io/stacks-and-components/component-guide/orchestrators/kubeflow) integration to deploy pipelines to production using Kubeflow Pipelines on the cloud.

Our training pipeline `run_kubeflow_pipeline.py` will be built using the following steps:

@@ -93,7 +93,7 @@ a backend

### Setup Infrastructure with ZenML Stack recipes:

-With [ZenML Stack Recipes](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-and-set-up-a-cloud-stack/deploy-a-stack-using-stack-recipes), you can now provision all the infrastructure you need to run your ZenML pipelines with just a few simple commands.
+With [ZenML Stack Recipes](https://github.com/zenml-io/mlops-stacks), you can now provision all the infrastructure you need to run your ZenML pipelines with just a few simple commands.

The flow to get started for this example can be the following:

@@ -154,11 +154,11 @@ Seldon Core. The following diagram shows the flow of the whole pipeline:
## Continuous model deployment with Seldon Core
-While building the real-world workflow for predicting whether a customer will churn or not, you might not want to train the model once and deploy it to production. Instead, you might want to train the model and deploy it to production when something gets triggered. This is where one of our recent integrations is valuable: [Seldon Core](https://github.com/zenml-io/zenml/tree/main/examples/seldon_deployment).
+While building the real-world workflow for predicting whether a customer will churn or not, you might not want to train the model once and deploy it to production. Instead, you might want to train the model and deploy it to production when something gets triggered. This is where one of our recent integrations is valuable: [Seldon Core](https://docs.zenml.io/stacks-and-components/component-guide/model-deployers/seldon).
[Seldon Core](https://github.com/SeldonIO/seldon-core) is a production-grade open-source model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices, including monitoring and logging, model explainers, outlier detectors, and various continuous deployment strategies such as A/B testing and canary deployments, and more.
-In this project, we build a continuous deployment pipeline that trains a model and then serves it with Seldon Core as the industry-ready model deployment tool of choice. If you are interested in learning more about Seldon Core, you can check out the [ZenML example](https://github.com/zenml-io/zenml/tree/main/examples/seldon_deployment). The following diagram shows the flow of the whole pipeline:
+In this project, we build a continuous deployment pipeline that trains a model and then serves it with Seldon Core as the industry-ready model deployment tool of choice. If you are interested in learning more about Seldon Core, you can check out our [docs](https://docs.zenml.io/stacks-and-components/component-guide/model-deployers/seldon). The following diagram shows the flow of the whole pipeline:
![seldondeployment](_assets/seldoncondeploy.gif)
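The trigger-based flow described above hinges on a gating step that decides whether a freshly trained model is good enough to (re)deploy. A minimal, framework-agnostic sketch of such a step — the threshold and function names are hypothetical, not taken from this project:

```python
def deployment_trigger(accuracy: float, min_accuracy: float = 0.8) -> bool:
    """Return True only if the new model clears the accuracy bar for deployment."""
    return accuracy >= min_accuracy

def maybe_deploy(accuracy: float, deploy_fn, min_accuracy: float = 0.8) -> bool:
    """Run deploy_fn (e.g. a Seldon Core deployment call) only when the trigger fires."""
    if deployment_trigger(accuracy, min_accuracy):
        deploy_fn()
        return True
    return False
```

In a real continuous deployment pipeline this gate would sit between the evaluation step and the model deployer, so a poorly performing model never replaces the serving one.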
4 changes: 2 additions & 2 deletions customer-satisfaction/README.md
@@ -56,7 +56,7 @@ Instead, we are building an end-to-end pipeline for continuously predicting and

This pipeline can be deployed to the cloud, scale up according to our needs, and ensure that we track the parameters and data that flow through every pipeline that runs. It includes raw data input, features, results, the machine learning model and model parameters, and prediction outputs. ZenML helps us to build such a pipeline in a simple, yet powerful, way.

-In this Project, we give special consideration to the [MLflow integration](https://github.com/zenml-io/zenml/tree/main/examples) of ZenML. In particular, we utilize MLflow tracking to track our metrics and parameters, and MLflow deployment to deploy our model. We also use [Streamlit](https://streamlit.io/) to showcase how this model will be used in a real-world setting.
+In this Project, we give special consideration to the [MLflow integration](https://docs.zenml.io/stacks-and-components/component-guide/model-deployers/mlflow) of ZenML. In particular, we utilize MLflow tracking to track our metrics and parameters, and MLflow deployment to deploy our model. We also use [Streamlit](https://streamlit.io/) to showcase how this model will be used in a real-world setting.

### Training Pipeline

@@ -92,7 +92,7 @@ service = load_last_service_from_step(
service.predict(...) # Predict on incoming data from the application
```

-While this ZenML Project trains and deploys a model locally, other ZenML integrations such as the [Seldon](https://github.com/zenml-io/zenml/tree/main/examples/seldon_deployment) deployer can also be used in a similar manner to deploy the model in a more production setting (such as on a Kubernetes cluster). We use MLflow here for the convenience of its local deployment.
+While this ZenML Project trains and deploys a model locally, other ZenML integrations such as the [Seldon](https://docs.zenml.io/stacks-and-components/component-guide/model-deployers/seldon) deployer can also be used in a similar manner to deploy the model in a more production setting (such as on a Kubernetes cluster). We use MLflow here for the convenience of its local deployment.

![training_and_deployment_pipeline](_assets/training_and_deployment_pipeline_updated.png)

6 changes: 3 additions & 3 deletions langchain-llamaindex-slackbot/README.md
@@ -94,13 +94,13 @@ example](https://github.com/zenml-io/zenml/tree/develop/examples/generative_chat

It is much more ideal to run a pipeline such as the
`zenml_docs_index_generation` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/starter-guide/switch-to-production)
and set up a stack that supports
[our scheduling
-feature](https://docs.zenml.io/user-guide/advanced-guide/schedule-pipeline-runs). If you
+feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs). If you
wish to deploy the slack bot on GCP Cloud Run as described above, you'll also
need to be using [a Google Cloud Storage Artifact
-Store](https://docs.zenml.io/user-guide/component-guide/artifact-stores/gcp). Note that
+Store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores/gcp). Note that
certain code artifacts like the `Dockerfile` for this project will also need to
be adapted for your own particular needs and requirements. Please check [our docs](https://docs.zenml.io/user-guide/starter-guide/follow-best-practices)
for more information.
2 changes: 1 addition & 1 deletion langchain-qa-hub/README.md
@@ -67,4 +67,4 @@ account can submit plugins. To find out how, check out the
[ZenML Hub Plugin Template](https://github.com/zenml-io/zenml-hub-plugin-template).
If you would like to learn more about the ZenML Hub in general, check out the
-[ZenML Hub Documentation](https://docs.zenml.io/user-guide/advanced-guide/leverage-community-contributed-plugins) or the [ZenML Hub Launch Blog Post](https://blog.zenml.io/zenml-hub-launch).
+[ZenML Hub Documentation](https://docs.zenml.io/user-guide/advanced-guide/environment-management/use-the-hub) or the [ZenML Hub Launch Blog Post](https://blog.zenml.io/zenml-hub-launch).
2 changes: 1 addition & 1 deletion langchain-qa-hub/langchain-qa-hub.ipynb
@@ -220,7 +220,7 @@
"[ZenML Hub Plugin Template](https://github.com/zenml-io/zenml-hub-plugin-template).\n",
"\n",
"If you would like to learn more about the ZenML Hub in general, check out the\n",
-"[ZenML Hub Documentation](https://docs.zenml.io/user-guide/advanced-guide/leverage-community-contributed-plugins) or the [ZenML Hub Launch Blog Post](https://blog.zenml.io/zenml-hub-launch)."
+"[ZenML Hub Documentation](https://docs.zenml.io/user-guide/advanced-guide/environment-management/use-the-hub) or the [ZenML Hub Launch Blog Post](https://blog.zenml.io/zenml-hub-launch)."
]
}
],
4 changes: 2 additions & 2 deletions orbit-user-analysis/README.md
@@ -65,9 +65,9 @@ python run.py

It is much more ideal to run a pipeline such as the
`community_analysis_pipeline` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/starter-guide/switch-to-production)
and set up a stack that supports
-[our scheduling feature](https://docs.zenml.io/user-guide/advanced-guide/schedule-pipeline-runs).
+[our scheduling feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs).
Please check [our docs](https://docs.zenml.io/getting-started/introduction)
for more information.

11 changes: 5 additions & 6 deletions sign-language-detection-yolov5/README.md
@@ -19,7 +19,7 @@ In order to build a model that can detect and recognize the American Sign Langua

1. Download the dataset from [Roboflow](https://public.roboflow.com/object-detection/american-sign-language-alphabet)
2. Augment the training and validation sets using [Albumentations](https://albumentations.ai/)
-3. Train the model using a pretrained model from [Yolov5](https://github.com/ultralytics/yolov5) while tracking the hyperparameters and metrics using [MLflow](https://docs.zenml.io/user-guide/component-guide/experiment-trackers/mlflow) within a GPU environment by leveraging [Google's Vertex AI Step Operator](https://docs.zenml.io/user-guide/component-guide/orchestrators/vertex) stack component.
+3. Train the model using a pretrained model from [Yolov5](https://github.com/ultralytics/yolov5) while tracking the hyperparameters and metrics using [MLflow](https://docs.zenml.io/stacks-and-components/component-guide/experiment-trackers/mlflow) within a GPU environment by leveraging [Google's Vertex AI Step Operator](https://docs.zenml.io/stacks-and-components/component-guide/orchestrators/vertex) stack component.
4. Load the model in a different pipeline that deploys the model using [BentoML](https://www.bentoml.com/) and the provided ZenML integration.
5. Create an inference pipeline that will use the deployed model to detect and recognize the American Sign Language alphabet in test images from the first pipeline.

@@ -32,18 +32,18 @@ installed on your local machine:
* [Docker](https://www.docker.com/)
* [GCloud CLI](https://cloud.google.com/sdk/docs/install) (authenticated)
* [MLFlow Tracking Server](https://mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers) (deployed remotely)
-* [Remote ZenML Server](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml): a Remote Deployment of the ZenML HTTP server and database
+* [Remote ZenML Server](https://docs.zenml.io/user-guide/starter-guide/switch-to-production): a Remote Deployment of the ZenML HTTP server and database

### :rocket: Remote ZenML Server

For advanced use cases, e.g. when we have a remote orchestrator or step operators such as Vertex AI,
or when we want to share stacks and pipeline information with a team, we need a separate, non-local
ZenML server that is accessible both from your machine and from all stack components that may need access to it.
-[Read more information about the use case here](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml)
+[Read more information about the use case here](https://docs.zenml.io/user-guide/starter-guide/switch-to-production)

In order to achieve this, there are two ways to get access to a remote ZenML server:

-1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml), or
+1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/starter-guide/switch-to-production), or
2. Sign up for [ZenML Enterprise](https://zenml.io/pricing) and get access to a hosted
version of the ZenML Server with no setup required.

@@ -270,8 +270,7 @@ The Inference pipeline is made up of the following steps:

# 📜 References

-- Documentation on [Step Operators](https://docs.zenml.io/user-guide/component-guide/step-operators)
-- Example of [Step Operators](https://github.com/zenml-io/zenml/tree/main/examples/step_operator_remote_training)
+- Documentation on [Step Operators](https://docs.zenml.io/stacks-and-components/component-guide/step-operators)
- More on [Step Operators](https://blog.zenml.io/step-operators-training/)
- Documentation on how to create a GCP [service account](https://cloud.google.com/docs/authentication/getting-started#create-service-account-gcloud)
- ZenML CLI [documentation](https://apidocs.zenml.io/latest/cli/)
8 changes: 4 additions & 4 deletions supabase-openai-summary/README.md
@@ -29,8 +29,8 @@ pip install -r src/requirements.txt
## Connect to Your Deployed ZenML

In order to run a ZenML pipeline remotely (e.g. on the cloud), we first need to
-[deploy ZenML](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml). One of the
-easiest ways to do this is to [deploy ZenML with HuggingFace spaces](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml/deploy-using-huggingface-spaces).
+[deploy ZenML](https://docs.zenml.io/user-guide/starter-guide/switch-to-production). One of the
+easiest ways to do this is to [deploy ZenML with HuggingFace spaces](https://docs.zenml.io/deploying-zenml/zenml-self-hosted/deploy-using-huggingface-spaces).

Afterward, establish a connection with your deployed ZenML instance:

@@ -67,7 +67,7 @@ You can also modify the preset prompts and system inputs in the [`generate_summa

## Run the Pipeline on a Remote Stack with Alerter

-To run the pipeline on a remote stack with [an artifact store](https://docs.zenml.io/user-guide/component-guide/artifact-stores) and [a Slack alerter](https://docs.zenml.io/user-guide/component-guide/alerters/slack), follow these steps:
+To run the pipeline on a remote stack with [an artifact store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores) and [a Slack alerter](https://docs.zenml.io/stacks-and-components/component-guide/alerters/slack), follow these steps:

1. Install the GCP and Slack integrations for ZenML:

@@ -97,7 +97,7 @@ Once the stack is registered and set active, the pipeline will run on the remote

## Running in production: Choose your MLOps stack

-ZenML simplifies scaling this pipeline by allowing seamless deployment on production-ready orchestrators like [Airflow](https://docs.zenml.io/user-guide/component-guide/orchestrators/airflow) or [Kubeflow](https://docs.zenml.io/user-guide/component-guide/orchestrators/kubeflow). With [native versioning on cloud storage](https://docs.zenml.io/user-guide/starter-guide/cache-previous-executions) and experiment tracking through ZenML's integration with [MLflow](https://docs.zenml.io/user-guide/component-guide/experiment-trackers/mlflow), you can start locally and effortlessly transition to robust and efficient MLOps pipelines in production, unlocking valuable insights from your enterprise data.
+ZenML simplifies scaling this pipeline by allowing seamless deployment on production-ready orchestrators like [Airflow](https://docs.zenml.io/stacks-and-components/component-guide/orchestrators/airflow) or [Kubeflow](https://docs.zenml.io/stacks-and-components/component-guide/orchestrators/kubeflow). With [native versioning on cloud storage](https://docs.zenml.io/user-guide/starter-guide/cache-previous-executions) and experiment tracking through ZenML's integration with [MLflow](https://docs.zenml.io/stacks-and-components/component-guide/experiment-trackers/mlflow), you can start locally and effortlessly transition to robust and efficient MLOps pipelines in production, unlocking valuable insights from your enterprise data.
## Example: Automate Pipeline Execution with GitHub Actions
6 changes: 2 additions & 4 deletions time-series-forecast/README.md
@@ -195,7 +195,7 @@ docker push gcr.io/<PROJECT-ID>/busybox
Note that you may need to run `gcloud auth configure-docker` in order to
authenticate your local `docker` cli with your GCP container registry and in
order for the `docker push...` command to work. [See our
-documentation](https://docs.zenml.io/user-guide/component-guide/container-registries/gcp)
+documentation](https://docs.zenml.io/stacks-and-components/component-guide/container-registries/gcp)
for more information on making this work.

### 6. [Enable](https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com?q=search&referrer=search&project=cloudguru-test-project) `Vertex AI API`
@@ -276,9 +276,7 @@ python main.py

# 📜 References

-Documentation on [Step Operators](https://docs.zenml.io/user-guide/component-guide/step-operators)
-
-Example of [Step Operators](https://github.com/zenml-io/zenml/tree/main/examples/step_operator_remote_training)
+Documentation on [Step Operators](https://docs.zenml.io/stacks-and-components/component-guide/step-operators)

More on [Step Operators](https://blog.zenml.io/step-operators-training/)

14 changes: 7 additions & 7 deletions zen-news-summarization/README.md
@@ -127,16 +127,16 @@ and use the `VertexOrchestrator` to schedule the pipeline.

Before you start building the stack, you need to deploy ZenML on GCP. For more
information on how you can do that, please check
-[the corresponding docs page](https://docs.zenml.io/platform-guide/set-up-your-mlops-platform/deploy-zenml).
+[the corresponding docs page](https://docs.zenml.io/user-guide/starter-guide/switch-to-production).

## ZenNews Stack

Once ZenML is deployed, we can start to build up our stack. Our stack will
consist of the following components:

-- [GCP Container Registry](https://docs.zenml.io/user-guide/component-guide/container-registries/gcp)
-- [GCS Artifact Store](https://docs.zenml.io/user-guide/component-guide/artifact-stores/gcp)
-- [Vertex Orchestrator](https://docs.zenml.io/user-guide/component-guide/orchestrators/vertex)
+- [GCP Container Registry](https://docs.zenml.io/stacks-and-components/component-guide/container-registries/gcp)
+- [GCS Artifact Store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores/gcp)
+- [Vertex Orchestrator](https://docs.zenml.io/stacks-and-components/component-guide/orchestrators/vertex)
- [Discord Alerter (part of the `zennews` package)](src/zennews/alerter/discord_alerter.py)

Let's start by installing the `gcp` integration:
@@ -148,7 +148,7 @@ zenml integration install gcp
### Container Registry
The first component is a
-[GCP container registry](https://docs.zenml.io/user-guide/component-guide/container-registries/gcp).
+[GCP container registry](https://docs.zenml.io/stacks-and-components/component-guide/container-registries/gcp).
Similar to the previous component, you just need to provide a name and the
URI to your container registry on GCP.
@@ -161,7 +161,7 @@ zenml container-registry register <CONTAINER_REGISTRY_NAME> \
### Artifact Store
The next component on the list is a
-[GCS artifact store](https://docs.zenml.io/user-guide/component-guide/artifact-stores/gcp).
+[GCS artifact store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores/gcp).
In order to register it, all you have to do is to provide the path to your GCS
bucket:
@@ -174,7 +174,7 @@ zenml artifact-store register <ARTIFACT_STORE_NAME> \
### Orchestrator
Following the artifact store, we will register a
-[Vertex AI orchestrator.](https://docs.zenml.io/user-guide/component-guide/orchestrators/vertex)
+[Vertex AI orchestrator.](https://docs.zenml.io/stacks-and-components/component-guide/orchestrators/vertex)
```bash
zenml orchestrator register <ORCHESTRATOR_NAME> \
