Add structure for remaining guides content #26423

Merged · 91 commits · Jan 2, 2025

Commits
33e20c9
reorganize agents section
neverett Dec 5, 2024
6349714
Merge branch 'master' into nikki/docs/add-skeleton
neverett Dec 5, 2024
aea0801
another pass at reorganizing hybrid deployments section
neverett Dec 5, 2024
a6327ec
add structure for SCIM section
neverett Dec 5, 2024
f8c91e9
clean up alerts section
neverett Dec 5, 2024
013f2aa
more cleanup of alerts docs
neverett Dec 10, 2024
53bc785
reorganize hybrid agents docs
neverett Dec 10, 2024
3a14297
clean up branch deployments docs
neverett Dec 10, 2024
88feff1
fix link
neverett Dec 10, 2024
23cf134
Merge branch 'master' into nikki/docs/add-skeleton
neverett Dec 10, 2024
0393bf4
add stub page for dagster_cloud.yaml reference
neverett Dec 10, 2024
7327fbb
finish adding pages for amazon ECS agent
neverett Dec 10, 2024
3ebe1f5
clean up branch deployments
neverett Dec 10, 2024
cf3f7f9
add dagster-cloud cli reference page
neverett Dec 10, 2024
f21802c
turn dagster-cloud CLI into section
neverett Dec 10, 2024
69fb74f
clean up env variables section
neverett Dec 10, 2024
55fbe05
clean up rest of deployment section, move settings pages to dedicated…
neverett Dec 10, 2024
b2f1bf5
fix sidebar position
neverett Dec 10, 2024
f35e7ce
clean up tokens docs and multi-tenancy doc
neverett Dec 10, 2024
58e2326
update lots of deployment docs
neverett Dec 11, 2024
f732821
finalize alerts section
neverett Dec 11, 2024
4b116fa
put branch deployments in ci/cd section
neverett Dec 11, 2024
062348a
rename catalog views page
neverett Dec 11, 2024
3671a65
unlist and add notes to branch deployments pages
neverett Dec 11, 2024
2793680
unlist ci/cd file reference page
neverett Dec 11, 2024
9c06a38
move deployment types docs to new section and clean up metadata and a…
neverett Dec 11, 2024
44e86ca
move serverless to hybrid doc into migration section, update metadata
neverett Dec 11, 2024
4dc7139
multi tenant deployment doc moved
neverett Dec 11, 2024
da06af2
create new deployment management section, update page metadata, add n…
neverett Dec 11, 2024
ca4ca6c
move code locations section to features, update metadata, add links
neverett Dec 11, 2024
bbd5a5e
this has been moved
neverett Dec 11, 2024
02cdf74
update metadata, add TODO notes for copying old content
neverett Dec 11, 2024
787e6cc
update metadata, fix links
neverett Dec 11, 2024
57fe517
update metadata
neverett Dec 11, 2024
afd73c3
update metadata
neverett Dec 11, 2024
8b57ea6
update metadata, add TODO links for moving old content
neverett Dec 11, 2024
fa53b96
update metadata
neverett Dec 11, 2024
5c85322
update metadata
neverett Dec 11, 2024
40623b7
Merge branch 'master' into nikki/docs/add-skeleton
neverett Dec 11, 2024
1c1f6f5
mdx extension doesn't seem necessary here
neverett Dec 11, 2024
d598c13
fix links
neverett Dec 11, 2024
b130780
fix links
neverett Dec 11, 2024
60e6aca
more links
neverett Dec 11, 2024
dd8d1ff
more links
neverett Dec 11, 2024
60db1da
one last link
neverett Dec 11, 2024
96792cb
Merge branch 'master' into nikki/docs/add-skeleton
neverett Dec 11, 2024
624e450
more metadata cleanup
neverett Dec 11, 2024
405c52c
appease graphite
neverett Dec 11, 2024
9f22aa8
restructure build category and clean up metadata
neverett Dec 12, 2024
51d9d3a
clean up automate category
neverett Dec 12, 2024
3d37dc6
add operate category
neverett Dec 12, 2024
e2a2158
restructure logging category and clean up metadata
neverett Dec 12, 2024
89fc5ab
clean up deploy category
neverett Dec 12, 2024
f03db0d
clean up getting started docs metadata
neverett Dec 12, 2024
bf43ac3
clean up test category metadata
neverett Dec 12, 2024
57ff1a1
remove unnecessary guides index page
neverett Dec 12, 2024
1b4a76a
add operate category to sidebar config
neverett Dec 12, 2024
f17305e
fix merge conflict
neverett Dec 18, 2024
7012669
fix links
neverett Dec 18, 2024
9cdea68
add outline for airflow to dagster migration guide
neverett Dec 19, 2024
8a8a266
add index page for operate section
neverett Dec 19, 2024
855302f
update section index pages, fix dagster-yaml front matter
neverett Dec 19, 2024
6234cbe
add pipes section
neverett Dec 19, 2024
f52b9fa
rename reference doc
neverett Dec 19, 2024
7e8983d
reorganize pipes section
neverett Dec 19, 2024
435b0b3
more pipes
neverett Dec 19, 2024
965c79d
move assets docs to assets section
neverett Dec 19, 2024
84f7211
reorganize build section
neverett Dec 19, 2024
03b919f
move some docs to operate section
neverett Dec 19, 2024
0896fab
fix links
neverett Dec 19, 2024
59419ac
more links
neverett Dec 19, 2024
b731556
more links
neverett Dec 19, 2024
a619f62
add partitions and backfills section
neverett Dec 19, 2024
cbf09c4
more link fixes
neverett Dec 20, 2024
56bb93d
fix OSS deployment link
neverett Dec 20, 2024
5d909f4
ugh more links
neverett Dec 20, 2024
ec5c6ab
one last link fix
neverett Dec 20, 2024
81e054a
add outline for automate section
neverett Dec 20, 2024
d45721c
Merge branch 'master' into nikki/docs/guides-structure
neverett Dec 20, 2024
f65a6cb
Merge branch 'master' into nikki/docs/guides-structure
neverett Dec 20, 2024
7775cab
Merge branch 'master' into nikki/docs/guides-structure
neverett Dec 20, 2024
c733176
move about assets structure into index page
neverett Dec 20, 2024
fef7885
Merge branch 'master' into nikki/docs/guides-structure
neverett Dec 20, 2024
54c17c9
Merge branch 'master' into nikki/docs/guides-structure
neverett Dec 31, 2024
8de988d
Merge branch 'master' into nikki/docs/guides-structure
neverett Dec 31, 2024
e743c88
mark empty articles unlisted
neverett Dec 31, 2024
57b2c1b
fix article name
neverett Dec 31, 2024
93aaf0d
fix some vale errors
neverett Dec 31, 2024
e033a19
Merge branch 'master' into nikki/docs/guides-structure
neverett Jan 2, 2025
a102a48
merge master and fix conflict
neverett Jan 2, 2025
2ed9118
fix conflict
neverett Jan 2, 2025
@@ -22,11 +22,11 @@ The default I/O manager cannot be used if you are a Serverless user who:
- Are otherwise working with data subject to GDPR or other such regulations
:::

In Serverless, code that uses the default [I/O manager](/guides/build/configure/io-managers) is automatically adjusted to save data in Dagster+ managed storage. This automatic change is useful because the Serverless filesystem is ephemeral, which means the default I/O manager wouldn't work as expected.
In Serverless, code that uses the default [I/O manager](/guides/operate/io-managers) is automatically adjusted to save data in Dagster+ managed storage. This automatic change is useful because the Serverless filesystem is ephemeral, which means the default I/O manager wouldn't work as expected.

However, this automatic change also means potentially sensitive data could be **stored** and not just processed or orchestrated by Dagster+.

To prevent this, you can use [another I/O manager](/guides/build/configure/io-managers#built-in) that stores data in your infrastructure or [adapt your code to avoid using an I/O manager](/guides/build/configure/io-managers#before-you-begin).
To prevent this, you can use [another I/O manager](/guides/operate/io-managers#built-in) that stores data in your infrastructure or [adapt your code to avoid using an I/O manager](/guides/operate/io-managers#before-you-begin).
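
For example, here is a minimal sketch (the bucket name is illustrative, and it assumes `dagster-aws` is installed) of swapping in the S3 pickle I/O manager so outputs land in your own bucket:

```python
from dagster import Definitions, asset
from dagster_aws.s3 import S3PickleIOManager, S3Resource


@asset
def my_asset() -> list[int]:
    return [1, 2, 3]


# Outputs are pickled to your own S3 bucket instead of Dagster+ managed storage.
defs = Definitions(
    assets=[my_asset],
    resources={
        "io_manager": S3PickleIOManager(
            s3_resource=S3Resource(),
            s3_bucket="my-company-bucket",  # illustrative bucket name
        )
    },
)
```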

:::note
You must have [boto3](https://pypi.org/project/boto3/) or `dagster-cloud[serverless]` installed as a project dependency; otherwise, Dagster+ managed storage can fail and silently fall back to using the default I/O manager.
@@ -132,4 +132,4 @@ compute_logs:
ServerSideEncryption: "AES256"
show_url_only: true
region: "us-west-1"
```
```
@@ -115,7 +115,7 @@ TODO: add picture previously at "/images/dagster-cloud/user-token-management/cod
| Start and stop [schedules](/guides/automate/schedules) | ❌ | ❌ | ✅ | ✅ | ✅ |
| Start and stop [sensors](/guides/automate/sensors) | ❌ | ❌ | ✅ | ✅ | ✅ |
| Wipe assets | ❌ | ❌ | ✅ | ✅ | ✅ |
| Launch and cancel [schedules](/guides/build/backfill) | ❌ | ✅ | ✅ | ✅ | ✅ |
| Launch and cancel [schedules](/guides/automate/schedules) | ❌ | ✅ | ✅ | ✅ | ✅ |
| Add dynamic partitions | ❌ | ❌ | ✅ | ✅ | ✅ |

### Deployments
@@ -18,7 +18,7 @@ In this guide, we'll walk you through configuring [Okta SCIM provisioning](https
With Dagster+'s Okta SCIM provisioning feature, you can:

- **Create users**. Users that are assigned to the Dagster+ application in the IdP will be automatically added to your Dagster+ organization.
- **Update user attributes.** Updating a users name or email address in the IdP will automatically sync the change to your user list in Dagster+.
- **Update user attributes.** Updating a user's name or email address in the IdP will automatically sync the change to your user list in Dagster+.
- **Remove users.** Deactivating or unassigning a user from the Dagster+ application in the IdP will remove them from the Dagster+ organization.
{/* - **Push user groups.** Groups and their members in the IdP can be pushed to Dagster+ as [Teams](/dagster-plus/account/managing-users/managing-teams). */}
- **Push user groups.** Groups and their members in the IdP can be pushed to Dagster+ as
@@ -17,7 +17,7 @@ In this guide, you'll learn how to create, access, and share catalog views with
<summary>Prerequisites</summary>

- **Organization Admin**, **Admin**, or **Editor** permissions on Dagster+
- Familiarity with [Assets](/guides/build/assets-concepts/index.mdx and [Asset metadata](/guides/build/create-a-pipeline/metadata)
- Familiarity with [Assets](/guides/build/create-asset-pipelines/assets-concepts/index.mdx) and [Asset metadata](/guides/build/create-asset-pipelines/metadata)

</details>

@@ -8,7 +8,7 @@ unlisted: true
This guide is applicable to Dagster+.
:::

Branch Deployments Change Tracking makes it eaiser for you and your team to identify how changes in a pull request will impact data assets. By the end of this guide, you'll understand how Change Tracking works and what types of asset changes can be detected.
Branch Deployments Change Tracking makes it easier for you and your team to identify how changes in a pull request will impact data assets. By the end of this guide, you'll understand how Change Tracking works and what types of asset changes can be detected.

## How it works

@@ -8,14 +8,14 @@
This guide is applicable to Dagster+.
:::

This guide details a workflow to test Dagster code in your cloud environment without impacting your production data. To highlight this functionality, well leverage Dagster+ branch deployments and a Snowflake database to:
This guide details a workflow to test Dagster code in your cloud environment without impacting your production data. To highlight this functionality, we'll leverage Dagster+ branch deployments and a Snowflake database to:

- Execute code on a feature branch directly on Dagster+
- Read and write to a unique per-branch clone of our Snowflake data

With these tools, we can merge changes with confidence in the impact on our data platform and with the assurance that our code will execute as intended.

Here’s an overview of the main concepts well be using:
Here’s an overview of the main concepts we'll be using:

Check warning (GitHub Actions / runner / vale) on line 18 of docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md: [Dagster.chars-non-standard-quotes] Use standard single quotes or double quotes only. Do not use left or right quotes.

{/* - [Assets](/concepts/assets/software-defined-assets) - We'll define three assets that each persist a table to Snowflake. */}
- [Assets](/todo) - We'll define three assets that each persist a table to Snowflake.
@@ -35,7 +35,7 @@
## Prerequisites

:::note
This guide is an extension of the <a href="/guides/dagster/transitioning-data-pipelines-from-development-to-production"> Transitioning data pipelines from development to production </a> guide, illustrating a workflow for staging deployments. Well use the examples from this guide to build a workflow atop Dagster+’s branch deployment feature.
This guide is an extension of the <a href="/guides/dagster/transitioning-data-pipelines-from-development-to-production"> Transitioning data pipelines from development to production </a> guide, illustrating a workflow for staging deployments. We'll use the examples from this guide to build a workflow atop Dagster+’s branch deployment feature.

Check warning (GitHub Actions / deploy) on line 38 of docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md: Do not use an `<a>` element to navigate. Use the `<Link />` component from `@docusaurus/Link` instead. See: https://docusaurus.io/docs/docusaurus-core#link
:::

To complete the steps in this guide, you'll need:
@@ -52,7 +52,7 @@

## Overview

We have a `PRODUCTION` Snowflake database with a schema named `HACKER_NEWS`. In our production cloud environment, wed like to write tables to Snowflake containing subsets of Hacker News data. These tables will be:
We have a `PRODUCTION` Snowflake database with a schema named `HACKER_NEWS`. In our production cloud environment, we'd like to write tables to Snowflake containing subsets of Hacker News data. These tables will be:

Check failure (GitHub Actions / runner / vale) on line 55 of docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md: [Vale.Terms] Use 'we have' instead of 'We have'.

- `ITEMS` - A table containing the entire dataset
- `COMMENTS` - A table containing data about comments
@@ -128,14 +128,14 @@

## Step 2: Configure our assets for each environment

At runtime, wed like to determine which environment our code is running in: branch deployment, or production. This information dictates how our code should execute, specifically with which credentials and with which database.
At runtime, we'd like to determine which environment our code is running in: branch deployment, or production. This information dictates how our code should execute, specifically with which credentials and with which database.

To ensure we can't accidentally write to production from within our branch deployment, well use a different set of credentials from production and write to our database clone.
To ensure we can't accidentally write to production from within our branch deployment, we'll use a different set of credentials from production and write to our database clone.

{/* Dagster automatically sets certain [environment variables](/dagster-plus/managing-deployments/reserved-environment-variables) containing deployment metadata, allowing us to read these environment variables to discern between deployments. We can access the `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` environment variable to determine the currently executing environment. */}
Dagster automatically sets certain [environment variables](/todo) containing deployment metadata, allowing us to read these environment variables to discern between deployments. We can access the `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` environment variable to determine the currently executing environment.
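
For instance, a minimal sketch of reading that variable (the helper name is ours, not from the guide's included snippets):

```python
import os


# "1" indicates a branch deployment; anything else is treated as production.
def get_env() -> str:
    is_branch = os.getenv("DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT", "") == "1"
    return "BRANCH" if is_branch else "PROD"
```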

Because we want to configure our assets to write to Snowflake using a different set of credentials and database in each environment, well configure a separate I/O manager for each environment:
Because we want to configure our assets to write to Snowflake using a different set of credentials and database in each environment, we'll configure a separate I/O manager for each environment:

```python file=/guides/dagster/development_to_production/branch_deployments/repository_v1.py startafter=start_repository endbefore=end_repository
# definitions.py
@@ -232,7 +232,7 @@
drop_database_clone()
```

Weve defined `drop_database_clone` and `clone_production_database` to utilize the <PyObject object="SnowflakeResource" module="dagster_snowflake" />. The Snowflake resource will use the same configuration as the Snowflake I/O manager to generate a connection to Snowflake. However, while our I/O manager writes outputs to Snowflake, the Snowflake resource executes queries against Snowflake.
We've defined `drop_database_clone` and `clone_production_database` to utilize the <PyObject object="SnowflakeResource" module="dagster_snowflake" />. The Snowflake resource will use the same configuration as the Snowflake I/O manager to generate a connection to Snowflake. However, while our I/O manager writes outputs to Snowflake, the Snowflake resource executes queries against Snowflake.
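
As a rough sketch of that division of labor (the clone name is illustrative; the guide's actual ops live in its included snippets), an op might issue SQL through the resource like this:

```python
from dagster import op
from dagster_snowflake import SnowflakeResource


@op
def drop_database_clone(snowflake: SnowflakeResource):
    # The resource hands us a raw connection for arbitrary SQL, unlike the
    # I/O manager, which only persists asset outputs.
    with snowflake.get_connection() as conn:
        conn.cursor().execute("DROP DATABASE IF EXISTS PRODUCTION_CLONE")
```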

We now need to define resources that configure our jobs to the current environment. We can modify the resource mapping by environment as follows:

@@ -322,7 +322,7 @@

Alternatively, the logs for the branch deployment workflow can be found in the **Actions** tab on the GitHub pull request.

We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, well now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:
We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, we'll now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:

![Instance overview](/images/guides/development_to_production/branch_deployments/snowflake.png)

Expand Down Expand Up @@ -383,7 +383,7 @@

![Instance overview](/images/guides/development_to_production/branch_deployments/instance_overview.png)

We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, well now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:
We can also view our database in Snowflake to confirm that a clone exists for each branch deployment. When we materialize our assets within our branch deployment, we'll now be writing to our clone of `PRODUCTION`. Within Snowflake, we can run queries against this clone to confirm the validity of our data:

![Instance overview](/images/guides/development_to_production/branch_deployments/snowflake.png)

Expand Down Expand Up @@ -489,4 +489,4 @@

After merging our branch, viewing our Snowflake database will confirm that our branch deployment step has successfully deleted our database clone.

Weve now built an elegant workflow that enables future branch deployments to automatically have access to their own clones of our production database that are cleaned up upon merge!
We've now built an elegant workflow that enables future branch deployments to automatically have access to their own clones of our production database that are cleaned up upon merge!
@@ -21,7 +21,7 @@ You'll need one or more assets that emit the same metadata key at run time. Insi
are most valuable when you have multiple assets that emit the same kind of metadata, such as
the number of rows processed or the size of a file uploaded to object storage.

Follow [the metadata guide](/guides/build/create-a-pipeline/metadata#runtime-metadata) to add numeric metadata
Follow [the metadata guide](/guides/build/create-asset-pipelines/metadata#runtime-metadata) to add numeric metadata
to your asset materializations.
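
As a minimal sketch (the asset and metadata key are illustrative), numeric metadata attached to a materialization might look like:

```python
from dagster import MaterializeResult, asset


@asset
def my_table() -> MaterializeResult:
    rows_processed = 1234  # illustrative value computed by your pipeline
    # Numeric metadata recorded per materialization is what Insights aggregates.
    return MaterializeResult(metadata={"rows_processed": rows_processed})
```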

## Step 2: Enable viewing your metadata in Dagster+ Insights
2 changes: 1 addition & 1 deletion docs/docs-beta/docs/dagster-plus/index.md
@@ -7,7 +7,7 @@ Dagster+ is a managed orchestration platform built on top of Dagster's open sour

Dagster+ is built to be the most performant, reliable, and cost-effective way for data engineering teams to run Dagster in production. Dagster+ is also great for students, researchers, or individuals who want to explore Dagster with minimal overhead.

Dagster+ comes in two flavors: a fully [Serverless](/dagster-plus/deployment/deployment-types/serverless) offering and a [Hybrid](/dagster-plus/deployment/deployment-types/hybrid) offering. In both cases, Dagster+ does the hard work of managing your data orchestration control plane. Compared to a [Dagster open source deployment](/guides/), Dagster+ manages:
Dagster+ comes in two flavors: a fully [Serverless](/dagster-plus/deployment/deployment-types/serverless) offering and a [Hybrid](/dagster-plus/deployment/deployment-types/hybrid) offering. In both cases, Dagster+ does the hard work of managing your data orchestration control plane. Compared to a [Dagster open source deployment](guides/deploy/index.md), Dagster+ manages:

- Dagster's web UI at https://dagster.plus
- Metadata stores for data cataloging and cost insights
1 change: 0 additions & 1 deletion docs/docs-beta/docs/getting-started/glossary.md
@@ -1,7 +1,6 @@
---
title: Glossary
sidebar_position: 30
sidebar_label: Glossary
unlisted: true
---

4 changes: 1 addition & 3 deletions docs/docs-beta/docs/getting-started/installation.md
@@ -5,8 +5,6 @@ sidebar_position: 20
sidebar_label: Installation
---

# Installing Dagster

To follow the steps in this guide, you'll need:

- To install Python 3.9 or higher. **Python 3.12 is recommended**.
@@ -72,4 +70,4 @@ If you encounter any issues during the installation process:
## Next steps

- Get up and running with your first Dagster project in the [Quickstart](/getting-started/quickstart)
- Learn to [create data assets in Dagster](/guides/build/create-a-pipeline/data-assets)
- Learn to [create data assets in Dagster](/guides/build/create-asset-pipelines/data-assets)
6 changes: 2 additions & 4 deletions docs/docs-beta/docs/getting-started/quickstart.md
@@ -1,12 +1,10 @@
---
title: "Dagster quickstart"
title: Build your first Dagster project
description: Learn how to quickly get up and running with Dagster
sidebar_position: 30
sidebar_label: "Quickstart"
---

# Build your first Dagster project

Welcome to Dagster! In this guide, you'll use Dagster to create a basic pipeline that:

- Extracts data from a CSV file
@@ -154,4 +152,4 @@ id,name,age,city,age_group
Congratulations! You've just built and run your first pipeline with Dagster. Next, you can:

- Continue with the [ETL pipeline tutorial](/tutorial/tutorial-etl) to learn how to build a more complex ETL pipeline
- Learn how to [Think in assets](/guides/build/assets-concepts/index.md)
- Learn how to [Think in assets](/guides/build/create-asset-pipelines/assets-concepts/index.md)
4 changes: 2 additions & 2 deletions docs/docs-beta/docs/guides/automate/about-automation.md
@@ -3,6 +3,8 @@ title: About Automation
unlisted: true
---

{/* TODO combine with index page and delete this page */}

There are several ways to automate the execution of your data pipelines with Dagster.

The first system, and the most basic, is the [Schedule](/guides/automate/schedules), which responds to time.
@@ -24,8 +26,6 @@
Schedules were one of the first types of automation in Dagster, created before the introduction of Software-Defined Assets.
As such, you may find that many of the examples can seem foreign if you are used to only working within the asset framework.

For more on how assets and ops inter-relate, read about [Assets and Ops](/guides/build/assets-concepts#assets-and-ops)

The `dagster-daemon` process is responsible for submitting runs by checking each schedule at a regular interval to determine
if it's time to execute the underlying job.
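
A minimal sketch (job and asset names are illustrative) of a schedule the daemon would evaluate:

```python
from dagster import Definitions, ScheduleDefinition, asset, define_asset_job


@asset
def nightly_table() -> list[int]:
    return [1, 2, 3]


nightly_job = define_asset_job(name="nightly_job", selection="nightly_table")

# The dagster-daemon checks this cron spec and submits a run each midnight.
nightly_schedule = ScheduleDefinition(job=nightly_job, cron_schedule="0 0 * * *")

defs = Definitions(
    assets=[nightly_table],
    jobs=[nightly_job],
    schedules=[nightly_schedule],
)
```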

21 changes: 8 additions & 13 deletions docs/docs-beta/docs/guides/automate/asset-sensors.md
@@ -1,22 +1,17 @@
---
title: Triggering cross-job dependencies with Asset Sensors
sidebar_position: 300
sidebar_label: Cross-job dependencies
title: Trigger cross-job dependencies with asset sensors
sidebar_position: 40
---

Asset sensors in Dagster provide a powerful mechanism for monitoring asset materializations and triggering downstream computations or notifications based on those events.

This guide covers the most common use cases for asset sensors, such as defining cross-job and cross-code location dependencies.

<details>
<summary>Prerequisites</summary>
:::note

To follow this guide, you'll need:
This documentation assumes familiarity with [assets](/guides/build/create-asset-pipelines/assets-concepts/index.md) and [ops and jobs](/guides/build/ops-jobs).

- Familiarity with [Assets](/guides/build/assets-concepts/index.mdx
- Familiarity with [Ops and Jobs](/guides/build/ops-jobs)

</details>
:::

## Getting started

@@ -54,7 +49,7 @@ This is an example of an asset sensor that triggers a job when an asset is mater

<CodeExample filePath="guides/automation/simple-asset-sensor-example.py" language="python" />

## Customize evaluation logic
## Customizing the evaluation function of an asset sensor

You can customize the evaluation function of an asset sensor to include specific logic for deciding when to trigger a run. This allows for fine-grained control over the conditions under which downstream jobs are executed.

Expand Down Expand Up @@ -83,15 +78,15 @@ In the following example, the `@asset_sensor` decorator defines a custom evaluat

<CodeExample filePath="guides/automation/asset-sensor-custom-eval.py" language="python"/>

## Trigger a job with configuration
## Triggering a job with custom configuration

By providing a configuration to the `RunRequest` object, you can trigger a job with a specific configuration. This is useful when you want to trigger a job with custom parameters based on custom logic you define.

For example, you might use a sensor to trigger a job when an asset is materialized, but also pass metadata about that materialization to the job:

<CodeExample filePath="guides/automation/asset-sensor-with-config.py" language="python" />

## Monitor multiple assets
## Monitoring multiple assets

When building a pipeline, you may want to monitor multiple assets with a single sensor. This can be accomplished with a multi-asset sensor.
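
A hedged sketch (asset keys and job are illustrative; the section's example file is the authoritative version):

```python
from dagster import AssetKey, RunRequest, define_asset_job, multi_asset_sensor

combine_job = define_asset_job(name="combine_job", selection="combined_report")


@multi_asset_sensor(
    monitored_assets=[AssetKey("orders"), AssetKey("users")],
    job=combine_job,
)
def orders_and_users_sensor(context):
    # Fire only when every monitored asset has a new materialization, then
    # advance the cursor so the same events aren't consumed twice.
    records = context.latest_materialization_records_by_key()
    if all(records.values()):
        context.advance_all_cursors()
        return RunRequest()
```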


This file was deleted.

@@ -0,0 +1,5 @@
---
title: Arbitrary Python automation conditions
sidebar_position: 500
unlisted: true
---
@@ -0,0 +1,11 @@
---
title: Automation conditions operands and operators
sidebar_position: 600
unlisted: true
---

## Operands

## Operators

## Composite conditions