Commit

only 208 broken links left

strickvl committed Sep 25, 2024
1 parent 36fd2a0 commit fd8daa3
Showing 131 changed files with 305 additions and 305 deletions.
10 changes: 5 additions & 5 deletions docs/mintlify/getting-started/core-concepts.mdx
@@ -6,13 +6,13 @@ icon: wand-magic-sparkles

**ZenML** is an extensible, open-source MLOps framework for creating portable, production-ready **MLOps pipelines**. It's built for data scientists, ML Engineers, and MLOps Developers to collaborate as they develop to production. In order to achieve this goal, ZenML introduces various concepts for different aspects of an ML workflow and we can categorize these concepts under three different threads:
<CardGroup cols={3}>
<Card title="1. Development" icon="code" href="/versions/0.66.0/getting-started/core-concepts#1-development">
<Card title="1. Development" icon="code" href="/getting-started/core-concepts#1-development">
As a developer, how do I design my machine learning workflows?
</Card>
<Card title="2. Execution" icon="play" href="/versions/0.66.0/getting-started/core-concepts#2-execution">
<Card title="2. Execution" icon="play" href="/getting-started/core-concepts#2-execution">
While executing, how do my workflows utilize the large landscape of MLOps tooling/infrastructure?
</Card>
<Card title="3. Management" icon="list-check" href="/versions/0.66.0/getting-started/core-concepts#3-management">
<Card title="3. Management" icon="list-check" href="/getting-started/core-concepts#3-management">
How do I establish and maintain a production-grade and efficient solution?
</Card>
</CardGroup>
@@ -90,7 +90,7 @@ if __name__ == "__main__":
Artifacts represent the data that goes through your steps as inputs and outputs and they are automatically tracked and stored by ZenML in the artifact store. They are produced by and circulated among steps whenever your step returns an object or a value. This means the data is not passed between steps in memory. Rather, when the execution of a step is completed they are written to storage, and when a new step gets executed they are loaded from storage.
-The serialization and deserialization logic of artifacts is defined by [Materializers](/versions/0.66.0/how-to/handle-data-artifacts/handle-custom-data-types).
+The serialization and deserialization logic of artifacts is defined by [Materializers](/usage/resource-data-management/handle-data-artifacts/handle-custom-data-types).
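
As a minimal sketch of this flow (step and pipeline names are illustrative), the value returned by one step is saved as an artifact and then loaded again when the next step runs:

```python
from zenml import pipeline, step


@step
def produce_data() -> list:
    # The returned value is written to the artifact store when this step finishes.
    return [1, 2, 3]


@step
def consume_data(data: list) -> int:
    # The input is loaded back from the artifact store, not passed in memory.
    return sum(data)


@pipeline
def my_pipeline():
    consume_data(produce_data())


if __name__ == "__main__":
    my_pipeline()
```
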
#### Models
@@ -100,7 +100,7 @@ Models are used to represent the outputs of a training process along with all me
Materializers define how artifacts live in between steps. More precisely, they define how data of a particular type can be serialized/deserialized, so that the steps are able to load the input data and store the output data.
-All materializers use the base abstraction called the `BaseMaterializer` class. While ZenML comes built-in with various implementations of materializers for different datatypes, if you are using a library or a tool that doesn't work with our built-in options, you can write [your own custom materializer](/versions/0.66.0/how-to/handle-data-artifacts/handle-custom-data-types) to ensure that your data can be passed from step to step.
+All materializers use the base abstraction called the `BaseMaterializer` class. While ZenML comes built-in with various implementations of materializers for different datatypes, if you are using a library or a tool that doesn't work with our built-in options, you can write [your own custom materializer](/usage/resource-data-management/handle-data-artifacts/handle-custom-data-types) to ensure that your data can be passed from step to step.
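
As a rough sketch of what such a custom materializer can look like (the `MyObject` type, the file name, and the exact hook signatures are illustrative and may vary between ZenML versions):

```python
import json
import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyObject:
    def __init__(self, value: int) -> None:
        self.value = value


class MyObjectMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObject,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObject]) -> MyObject:
        # Deserialize the payload that was previously written to the artifact store.
        with fileio.open(os.path.join(self.uri, "data.json"), "r") as f:
            return MyObject(value=json.load(f)["value"])

    def save(self, data: MyObject) -> None:
        # Serialize the object into this artifact's location in the artifact store.
        with fileio.open(os.path.join(self.uri, "data.json"), "w") as f:
            json.dump({"value": data.value}, f)
```
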
#### Parameters & Settings
@@ -6,7 +6,7 @@ icon: image

In most cases, deploying ZenML with the default `zenmlhub/zenml-server` Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image:

-* You have implemented a custom artifact store for which you want to enable [artifact visualizations](https://github.com/zenml-io/zenml/blob/release/0.66.0/docs/book/versions/0.66.0/how-to/handle-data-artifacts/visualize-artifacts.md) or [step logs](/versions/0.66.0/how-to/setting-up-a-project-repository/best-practices#logging) in your dashboard.
+* You have implemented a custom artifact store for which you want to enable [artifact visualizations](https://github.com/zenml-io/zenml/blob/release/0.66.0/docs/book/usage/resource-data-management/handle-data-artifacts/visualize-artifacts.md) or [step logs](/usage/project-setup/setting-up-a-project-repository/best-practices#logging) in your dashboard.
* You have forked the ZenML repository and want to deploy a ZenML server based on your own fork because you made changes to the server / database logic.

<Note>
@@ -13,7 +13,7 @@ Before we begin, it will help to understand the [architecture](/versions/0.66.0/
If you don't have an existing Kubernetes cluster, you have the following two options to set it up:

* Creating it manually using the documentation for your cloud provider. For convenience, here are links for [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html), [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli), and [GCP](https://cloud.google.com/kubernetes-engine/docs/versions/0.66.0/how-to/creating-a-zonal-cluster#before%5Fyou%5Fbegin).
-* Using a [stack recipe](/versions/0.66.0/how-to/stack-deployment/deploy-a-stack-using-mlstacks) that sets up a cluster along with other tools that you might need in your cloud stack like artifact stores and secret managers. Take a look at all [available stack recipes](https://github.com/zenml-io/mlstacks) to see if there's something that works for you.
+* Using a [stack recipe](/stack-components/stack-deployment/deploy-a-stack-using-mlstacks) that sets up a cluster along with other tools that you might need in your cloud stack like artifact stores and secret managers. Take a look at all [available stack recipes](https://github.com/zenml-io/mlstacks) to see if there's something that works for you.

<Note>
Once you have created your cluster, make sure that you configure your [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) client to talk to it.
@@ -38,7 +38,7 @@ This scenario is meant for customers who want to quickly get started with ZenML
<img src="/_assets/sys-a-2.avif"/>
</Frame>

-This scenario is a version of Scenario 1\. modified to store all sensitive information on the customer side. In this case, the customer connects their own secret store directly to the ZenML server that is managed by us. All ZenML secrets used by running pipelines to access infrastructure services and resources are stored in the customer secret store. This allows users to use [service connectors](/usage/resource-data-management/auth-management/service-connectors-guide) and the [secrets API](/versions/0.66.0/how-to/interact-with-secrets) to authenticate ZenML pipelines and the ZenML Pro to 3rd party services and infrastructure while ensuring that credentials are always stored on the customer side.
+This scenario is a version of Scenario 1\. modified to store all sensitive information on the customer side. In this case, the customer connects their own secret store directly to the ZenML server that is managed by us. All ZenML secrets used by running pipelines to access infrastructure services and resources are stored in the customer secret store. This allows users to use [service connectors](/usage/resource-data-management/auth-management/service-connectors-guide) and the [secrets API](/usage/project-setup/use-secrets/interact-with-secrets) to authenticate ZenML pipelines and the ZenML Pro to 3rd party services and infrastructure while ensuring that credentials are always stored on the customer side.

Even though they are stored on the customer side, access to ZenML secrets is fully managed by ZenML Pro. The individually deployed ZenML Servers can also be allowed to use some of those credentials to connect directly to customer infrastructure services to implement control plane features such as artifact visualization or triggering pipelines. This implies that the secret values are allowed to leave the customer environment to allow their access to be managed centrally by the ZenML Pro and to enforce access control policies, but the ZenML users and pipelines never have direct access to the secret store.

2 changes: 1 addition & 1 deletion docs/mintlify/getting-started/faq.mdx
@@ -33,7 +33,7 @@ This is a known issue with how forking works on Macs running on Apple Silicon an

</Accordion>
<Accordion title="How can I make ZenML work with my custom tool? How can I extend or build on ZenML?">
-This depends on the tool and its respective MLOps category. We have a full guide on this over [here](/versions/0.66.0/how-to/stack-deployment/implement-a-custom-stack-component)!
+This depends on the tool and its respective MLOps category. We have a full guide on this over [here](/stack-components/stack-deployment/implement-a-custom-stack-component)!
</Accordion>
<Accordion title="How can I contribute?">
We develop ZenML together with our community! To get involved, the best way to get started is to select any issue from the [good-first-issue label](https://github.com/zenml-io/zenml/labels/good%20first%20issue). If you would like to contribute, please review our [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for all relevant details.
6 changes: 3 additions & 3 deletions docs/mintlify/getting-started/introduction.mdx
@@ -98,7 +98,7 @@ ZenML integrates seamlessly with many popular open-source tools, so you can also
Ready to develop production-ready code with ZenML? Here is a collection of pages you can take a look at next:
<CardGroup cols={3}>

<Card title="Core Concepts" icon="trowel-bricks" href="/versions/0.66.0/getting-started/core-concepts">
<Card title="Core Concepts" icon="trowel-bricks" href="/getting-started/core-concepts">
Understand the core concepts behind ZenML.
</Card>

@@ -115,7 +115,7 @@ Build your first ZenML pipeline and deploy it in the cloud.
<Tab title="For ML Engineers">
ZenML empowers ML engineers to take ownership of the entire ML lifecycle end-to-end. Adopting ZenML means fewer handover points and more visibility on what is happening in your organization.

-* **ML Lifecycle Management:** ZenML's abstractions enable you to manage sophisticated ML setups with ease. After you define your ML workflows as [Pipelines](/versions/0.66.0/getting-started/core-concepts#1-development) and your development, staging, and production infrastructures as [Stacks](/versions/0.66.0/getting-started/core-concepts#2-execution), you can move entire ML workflows to different environments in seconds.
+* **ML Lifecycle Management:** ZenML's abstractions enable you to manage sophisticated ML setups with ease. After you define your ML workflows as [Pipelines](/getting-started/core-concepts#1-development) and your development, staging, and production infrastructures as [Stacks](/getting-started/core-concepts#2-execution), you can move entire ML workflows to different environments in seconds.
```Bash
zenml stack set staging
python run.py # test your workflows on staging infrastructure
```
@@ -150,7 +150,7 @@ Ready to manage your ML lifecycles end-to-end with ZenML? Here is a collection o
Get started with ZenML and learn how to build your first pipeline and stack.
</Card>

<Card title="How To" icon="earlybirds" href="/versions/0.66.0/how-to/build-pipelines">
<Card title="How To" icon="earlybirds" href="/usage/pipeline/build-pipelines">
Discover advanced ZenML features like config management and containerization.
</Card>

@@ -197,7 +197,7 @@ def my_pipeline():



-Check out [this page](/versions/0.66.0/how-to/build-pipelines/use-pipeline-step-parameters) for more information on how to parameterize your steps.
+Check out [this page](/usage/pipelines/build-pipelines/use-pipeline-step-parameters) for more information on how to parameterize your steps.
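
As a small sketch of parameterized steps (the step, pipeline, and parameter names are illustrative), parameters are ordinary typed function arguments that can be set or overridden when the pipeline is invoked:

```python
from zenml import pipeline, step


@step
def train_model(learning_rate: float, epochs: int) -> float:
    # Train something with the given hyperparameters and return a metric.
    return 0.95


@pipeline
def training_pipeline(learning_rate: float = 1e-3):
    train_model(learning_rate=learning_rate, epochs=10)


if __name__ == "__main__":
    # Override the default pipeline parameter at invocation time.
    training_pipeline(learning_rate=3e-4)
```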

## Calling a step outside of a pipeline
<Tabs>
@@ -456,7 +456,7 @@ my_pipeline()



-Check out [this page](/versions/0.66.0/how-to/build-pipelines/schedule-a-pipeline) for more information on how to schedule your pipelines.
+Check out [this page](/usage/pipelines/build-pipelines/schedule-a-pipeline) for more information on how to schedule your pipelines.
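
For illustration, a cron-based schedule can be attached with `with_options` (a sketch that assumes an orchestrator with schedule support; the pipeline name and cron expression are illustrative):

```python
from zenml import pipeline, step
from zenml.config.schedule import Schedule


@step
def say_hello() -> str:
    return "hello"


@pipeline
def my_pipeline():
    say_hello()


if __name__ == "__main__":
    # Run every day at 09:00; whether and how this is honored depends on the orchestrator.
    scheduled = my_pipeline.with_options(schedule=Schedule(cron_expression="0 9 * * *"))
    scheduled()
```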

## Fetching pipelines after execution

@@ -504,7 +504,7 @@ loaded_model = model.load()
</Tabs>


-Check out [this page](/versions/0.66.0/how-to/track-metrics-metadata/fetch-metadata-within-steps) for more information on how to programmatically fetch information about previous pipeline runs.
+Check out [this page](/usage/resource-data-management/track-metrics-metadata/fetch-metadata-within-steps) for more information on how to programmatically fetch information about previous pipeline runs.
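
A short sketch of fetching a previous run through the `Client` (the pipeline and step names are illustrative, and attribute names can vary slightly across versions):

```python
from zenml.client import Client

# Look up a pipeline by name and grab its most recent run.
pipeline_model = Client().get_pipeline("training_pipeline")
last_run = pipeline_model.last_run

# Inspect one of its steps and load the step's output artifact.
step_run = last_run.steps["train_model"]
trained_model = step_run.output.load()
```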

## Controlling the step execution order

@@ -550,7 +550,7 @@ def my_pipeline():
</Tabs>


-Check out [this page](/versions/0.66.0/how-to/build-pipelines/control-execution-order-of-steps) for more information on how to control the step execution order.
+Check out [this page](/usage/pipelines/build-pipelines/control-execution-order-of-steps) for more information on how to control the step execution order.
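
As a brief sketch, a purely ordering-based dependency (no data exchanged) can be expressed with the `after` argument when calling a step inside a pipeline (step names are illustrative):

```python
from zenml import pipeline, step


@step
def load_data() -> int:
    return 42


@step
def send_notification() -> None:
    print("data loaded")


@pipeline
def my_pipeline():
    load_data()
    # No data flows between the two steps, but this forces send_notification
    # to run only after load_data has finished.
    send_notification(after="load_data")
```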

## Defining steps with multiple outputs
<Tabs>
@@ -607,7 +607,7 @@ def my_step() -> Tuple[
</Tabs>


-Check out [this page](/versions/0.66.0/how-to/build-pipelines/step-output-typing-and-annotation) for more information on how to annotate your step outputs.
+Check out [this page](/usage/pipelines/build-pipelines/step-output-typing-and-annotation) for more information on how to annotate your step outputs.
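
For example, multiple named outputs can be declared with a `Tuple` return type and `Annotated` output names (a sketch; the output names are illustrative):

```python
from typing import Tuple

from typing_extensions import Annotated
from zenml import step


@step
def split_dataset() -> Tuple[
    Annotated[int, "train_size"],
    Annotated[int, "test_size"],
]:
    # Each tuple element is stored as a separate, named output artifact.
    return 80, 20
```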

## Accessing run information inside steps

@@ -656,5 +656,5 @@ def my_step() -> Any: # New: StepContext is no longer an argument of the step
</Tabs>


-Check out [this page](/versions/0.66.0/how-to/track-metrics-metadata/fetch-metadata-within-steps) for more information on how to fetch run information inside your steps using `get_step_context()`.
+Check out [this page](/usage/resource-data-management/track-metrics-metadata/fetch-metadata-within-steps) for more information on how to fetch run information inside your steps using `get_step_context()`.
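
A minimal sketch of reading run information with `get_step_context()` inside a step (the attributes shown are common ones; exact fields can vary by version):

```python
from typing import Any

from zenml import get_step_context, step


@step
def my_step() -> Any:
    context = get_step_context()
    # Information about the current pipeline, run, and step invocation.
    print(context.pipeline.name)
    print(context.pipeline_run.name)
    print(context.step_run.name)
    return context.pipeline_run.name
```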

@@ -24,7 +24,7 @@ High-level overview of the changes:

## ZenML takes over the Metadata Store role

-ZenML can now run [as a server](https://github.com/zenml-io/zenml/blob/release/0.66.0/docs/book/versions/0.66.0/user-guide/versions/0.66.0/getting-started/core-concepts.md#zenml-server-and-dashboard) that can be accessed via a REST API and also comes with a visual user interface (called the ZenML Dashboard). This server can be deployed in arbitrary environments (local, on-prem, via Docker, on AWS, GCP, Azure etc.) and supports user management, workspace scoping, and more.
+ZenML can now run [as a server](https://github.com/zenml-io/zenml/blob/release/0.66.0/docs/book/versions/0.66.0/user-guide/getting-started/core-concepts.md#zenml-server-and-dashboard) that can be accessed via a REST API and also comes with a visual user interface (called the ZenML Dashboard). This server can be deployed in arbitrary environments (local, on-prem, via Docker, on AWS, GCP, Azure etc.) and supports user management, workspace scoping, and more.

The release introduces a series of commands to facilitate managing the lifecycle of the ZenML server and to access the pipeline and pipeline run information:

@@ -398,7 +398,7 @@ The `zenml profile migrate` CLI command also provides command line flags for cas

Stack components can now be registered without having the required integrations installed. As part of this change, we split all existing stack component definitions into three classes: an implementation class that defines the logic of the stack component, a config class that defines the attributes and performs input validations, and a flavor class that links implementation and config classes together. See [**component flavor models #895**](https://github.com/zenml-io/zenml/pull/895) for more details.

-If you are only using stack component flavors that are shipped with the zenml Python distribution, this change has no impact on the configuration of your existing stacks. However, if you are currently using custom stack component implementations, you will need to update them to the new format. See the [documentation on writing custom stack component flavors](/versions/0.66.0/how-to/stack-deployment/implement-a-custom-stack-component) for updated information on how to do this.
+If you are only using stack component flavors that are shipped with the zenml Python distribution, this change has no impact on the configuration of your existing stacks. However, if you are currently using custom stack component implementations, you will need to update them to the new format. See the [documentation on writing custom stack component flavors](/stack-components/stack-deployment/implement-a-custom-stack-component) for updated information on how to do this.
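
Roughly, the three-class split looks like the sketch below, using an orchestrator as an example (class names are illustrative, and the exact base classes depend on the component type and ZenML version):

```python
from typing import Type

from zenml.orchestrators import (
    BaseOrchestrator,
    BaseOrchestratorConfig,
    BaseOrchestratorFlavor,
)


class MyOrchestratorConfig(BaseOrchestratorConfig):
    """Config class: declares attributes and performs input validation."""

    some_endpoint: str


class MyOrchestrator(BaseOrchestrator):
    """Implementation class: contains the actual orchestration logic."""

    ...


class MyOrchestratorFlavor(BaseOrchestratorFlavor):
    """Flavor class: links the config and implementation classes together."""

    @property
    def name(self) -> str:
        return "my_orchestrator"

    @property
    def config_class(self) -> Type[MyOrchestratorConfig]:
        return MyOrchestratorConfig

    @property
    def implementation_class(self) -> Type[MyOrchestrator]:
        return MyOrchestrator
```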

## Shared ZenML Stacks and Stack Components

@@ -455,7 +455,7 @@ With ZenML 0.20.0, we introduce the `BaseSettings` class, a broad class that ser

Pipelines and steps now allow all configurations on their decorators as well as the `.configure(...)` method. This includes configurations for stack components that are not infrastructure-related (which was previously done using the `@enable_xxx` decorators). The same configurations can also be defined in a YAML file.

-Read more about this paradigm in the [new docs section about settings](/versions/0.66.0/how-to/use-configuration-files/what-can-be-configured).
+Read more about this paradigm in the [new docs section about settings](/usage/project-setup/use-configuration-files/what-can-be-configured).
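
For instance, the same settings object can be attached on the decorator or via `.configure(...)` (a sketch; `DockerSettings` is just one example of a settings class and the requirement shown is illustrative):

```python
from zenml import step
from zenml.config import DockerSettings

docker_settings = DockerSettings(requirements=["scikit-learn"])


@step(settings={"docker": docker_settings})
def train_model() -> None:
    ...


# The same configuration can also be applied after the definition:
train_model.configure(settings={"docker": docker_settings})
```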

Here is a list of changes that are the most obvious in consequence of the above code. Please note that this list is not exhaustive, and if we have missed something let us know via [Slack](https://zenml.io/slack).

6 changes: 3 additions & 3 deletions docs/mintlify/mint.json
@@ -423,7 +423,7 @@
"group": "🧩 Stacks",
"pages": [
"stack-components/component-guide",
"stack-components/deploying-stacks",
"stack-components/stack-deployment/deploying-stacks",
{
"group": " Orchestrators",
"icon": "battery-full",
@@ -602,8 +602,8 @@
"icon": "hammer",
"iconType": "solid",
"pages": [
"stack-components/implement-a-custom-stack-component",
"stack-components/implement-a-custom-integration"
"stack-components/stack-deployment/implement-a-custom-stack-component",
"stack-components/stack-deployment/implement-a-custom-integration"
]
},
{