+
## Introduction
Many organizations already use [Airflow](https://airflow.apache.org/) to orchestrate their data workflows. dbt Cloud works great with Airflow, letting you execute your dbt code in dbt Cloud while keeping orchestration duties with Airflow. This ensures your project's metadata (important for tools like dbt Explorer) is available and up-to-date, while still enabling you to use Airflow for general tasks such as:
@@ -244,3 +246,5 @@ Yes, either through [Airflow's email/slack](https://www.astronomer.io/guides/err
### How should I plan my dbt Cloud + Airflow implementation?
Check out [this recording](https://www.youtube.com/watch?v=n7IIThR8hGk) of a dbt meetup for some tips.
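
As a concrete starting point for planning, here is a minimal, hedged sketch of an Airflow DAG that triggers a dbt Cloud job using the `apache-airflow-providers-dbt-cloud` package. The connection ID, job ID, and schedule below are placeholders, not values from this guide.

```python
# Hedged sketch: trigger a dbt Cloud job from Airflow with DbtCloudRunJobOperator.
# The connection "dbt_cloud" and job_id 12345 are placeholders to adapt to your setup.
from datetime import datetime

from airflow import DAG
from airflow.providers.dbt.cloud.operators.dbt import DbtCloudRunJobOperator

with DAG(
    dag_id="trigger_dbt_cloud_job",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    run_dbt_cloud_job = DbtCloudRunJobOperator(
        task_id="run_dbt_cloud_job",
        dbt_cloud_conn_id="dbt_cloud",   # Airflow connection holding your dbt Cloud API token
        job_id=12345,                    # placeholder dbt Cloud job ID
        wait_for_termination=True,       # block until the dbt Cloud run finishes
        check_interval=60,               # seconds between status polls
    )
```

Setting `wait_for_termination=True` keeps orchestration (and retries, alerting, and downstream tasks) in Airflow while the transformation itself runs and is logged in dbt Cloud.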
+
+
\ No newline at end of file
diff --git a/website/docs/guides/bigquery-qs.md b/website/docs/guides/bigquery-qs.md
index 8dbcf24c91a..5ecbf2a9f00 100644
--- a/website/docs/guides/bigquery-qs.md
+++ b/website/docs/guides/bigquery-qs.md
@@ -9,6 +9,8 @@ tags: ['BigQuery', 'dbt Cloud','Quickstart']
recently_updated: true
---
+
+
## Introduction
In this quickstart guide, you'll learn how to use dbt Cloud with BigQuery. It will show you how to:
@@ -293,6 +295,8 @@ Later, you can connect your business intelligence (BI) tools to these views and
This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies.
+
+
#### FAQs {#faq-2}
+
## Introduction
Creating packages is an **advanced use of dbt**. If you're new to the tool, we recommend that you first use the product for your own analytics before attempting to create a package for others.
@@ -169,3 +171,5 @@ The release notes should contain an overview of the changes introduced in the ne
## Add the package to hub.getdbt.com
Our package registry, [hub.getdbt.com](https://hub.getdbt.com/), gets updated by the [hubcap script](https://github.com/dbt-labs/hubcap). To add your package to hub.getdbt.com, create a PR on the [hubcap repository](https://github.com/dbt-labs/hubcap) to include it in the `hub.json` file.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/codespace-qs.md b/website/docs/guides/codespace-qs.md
index b28b0ddaacf..55cbad14a02 100644
--- a/website/docs/guides/codespace-qs.md
+++ b/website/docs/guides/codespace-qs.md
@@ -8,6 +8,8 @@ hide_table_of_contents: true
tags: ['dbt Core','Quickstart']
---
+
+
## Introduction
In this quickstart guide, you’ll learn how to create a codespace and be able to execute the `dbt build` command from it in _less than 5 minutes_.
@@ -72,3 +74,4 @@ If you'd like to work with a larger selection of Jaffle Shop data, you can gener
As you increase the number of years, it takes exponentially more time to generate the data because the Jaffle Shop stores grow in size and number. For a good balance of data size and time to build, dbt Labs suggests a maximum of 6 years.
+
\ No newline at end of file
diff --git a/website/docs/guides/core-to-cloud-1.md b/website/docs/guides/core-to-cloud-1.md
index 212a44b0adb..f2ee7b016e2 100644
--- a/website/docs/guides/core-to-cloud-1.md
+++ b/website/docs/guides/core-to-cloud-1.md
@@ -10,6 +10,9 @@ tags: ['Migration','dbt Core','dbt Cloud']
level: 'Intermediate'
recently_updated: true
---
+
+
+
## Introduction
Moving from dbt Core to dbt Cloud streamlines analytics engineering workflows by allowing teams to develop, test, deploy, and explore data products using a single, fully managed software service.
@@ -253,3 +256,5 @@ For next steps, we'll soon share other guides on how to manage your move and tip
- Work with the [dbt Labs’ Professional Services](https://www.getdbt.com/dbt-labs/services) team to support your data organization and migration.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/create-new-materializations.md b/website/docs/guides/create-new-materializations.md
index 797d610add5..b3e1c5f4c5f 100644
--- a/website/docs/guides/create-new-materializations.md
+++ b/website/docs/guides/create-new-materializations.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
@@ -169,9 +166,7 @@ In Bitbucket:
![View of the Bitbucket window for entering DBT_API_KEY](/img/guides/orchestration/custom-cicd-pipelines/dbt-api-key-bitbucket.png)
Here’s a video showing these steps:
-
-
diff --git a/website/docs/guides/databricks-qs.md b/website/docs/guides/databricks-qs.md
index 848ee19c59a..149a10b9a43 100644
--- a/website/docs/guides/databricks-qs.md
+++ b/website/docs/guides/databricks-qs.md
@@ -7,6 +7,9 @@ hide_table_of_contents: true
recently_updated: true
tags: ['dbt Cloud', 'Quickstart','Databricks']
---
+
+
+
## Introduction
In this quickstart guide, you'll learn how to use dbt Cloud with Databricks. It will show you how to:
@@ -371,6 +374,8 @@ Later, you can connect your business intelligence (BI) tools to these views and
This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies.
+
+
#### FAQs {#faq-2}
diff --git a/website/docs/guides/dbt-models-on-databricks.md b/website/docs/guides/dbt-models-on-databricks.md
index be1bb62049e..ddfab46e606 100644
--- a/website/docs/guides/dbt-models-on-databricks.md
+++ b/website/docs/guides/dbt-models-on-databricks.md
@@ -12,6 +12,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
Building on the [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project) guide, we'd like to discuss performance optimization. In this follow-up post, we outline simple strategies to optimize for cost, performance, and simplicity when you architect data pipelines. We’ve encapsulated these strategies in this acronym-framework:
@@ -180,3 +182,5 @@ With the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api), you can
This builds on the content in [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project).
We welcome you to try these strategies on our example open source TPC-H implementation and to provide us with thoughts/feedback as you start to incorporate these features into production. Looking forward to your feedback on [#db-databricks-and-spark](https://getdbt.slack.com/archives/CNGCW8HKL) Slack channel!
+
+
\ No newline at end of file
diff --git a/website/docs/guides/dbt-python-snowpark.md b/website/docs/guides/dbt-python-snowpark.md
index f32358ee2db..f8406dc98c5 100644
--- a/website/docs/guides/dbt-python-snowpark.md
+++ b/website/docs/guides/dbt-python-snowpark.md
@@ -11,6 +11,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
The focus of this workshop will be to demonstrate how we can use both *SQL and Python together* in the same workflow to run *both analytics and machine learning models* on dbt Cloud.
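
As a taste of the Python side of that workflow, here is a small, hedged sketch of a dbt Python model on Snowpark. The `model(dbt, session)` signature and the `dbt.ref()`/`dbt.config()` calls follow dbt's Python model contract; the model and column names are illustrative and not taken from the workshop.

```python
# Hedged sketch of a dbt Python model (hypothetical file such as models/order_flags.py).
# "stg_orders" and "AMOUNT" are placeholder names, not the workshop's actual models.
import snowflake.snowpark.functions as F


def model(dbt, session):
    dbt.config(materialized="table")   # build the result as a table in Snowflake
    orders = dbt.ref("stg_orders")     # upstream model returned as a Snowpark DataFrame
    return orders.with_column(         # add a simple derived column in Python
        "IS_LARGE_ORDER", F.col("AMOUNT") > 100
    )
```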
@@ -1923,3 +1925,5 @@ Now that we've completed testing and documenting our work, we're ready to deploy
Fantastic! You’ve finished the workshop! We hope you feel empowered in using both SQL and Python in your dbt Cloud workflows with Snowflake. Having a reliable pipeline to surface both analytics and machine learning is crucial to creating tangible business value from your data.
For more help and information, join our [dbt community Slack](https://www.getdbt.com/community/), which contains more than 50,000 data practitioners today. We have a dedicated Slack channel, #db-snowflake, for Snowflake-related content. Happy dbt'ing!
+
+
\ No newline at end of file
diff --git a/website/docs/guides/debug-errors.md b/website/docs/guides/debug-errors.md
index febfb6ac422..11f02f325a4 100644
--- a/website/docs/guides/debug-errors.md
+++ b/website/docs/guides/debug-errors.md
@@ -11,6 +11,8 @@ level: 'Beginner'
recently_updated: true
---
+
+
## General process of debugging
Learning how to debug is a skill, and one that will make you great at your role!
@@ -387,3 +389,5 @@ We’ve all been there. dbt uses the last-saved version of a file when you execu
_(More likely for dbt Core users)_
If you just opened a SQL file in the `target/` directory to help debug an issue, it's not uncommon to accidentally edit that file! To avoid this, try changing your code editor settings to grey out any files in the `target/` directory — the visual cue will help avoid the issue.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/debug-schema-names.md b/website/docs/guides/debug-schema-names.md
index 5107588619c..f73ba30fa5a 100644
--- a/website/docs/guides/debug-schema-names.md
+++ b/website/docs/guides/debug-schema-names.md
@@ -12,6 +12,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
If a model uses the [`schema` config](/reference/resource-properties/schema) but builds under an unexpected schema, here are some steps for debugging the issue. The full explanation of custom schemas can be found [here](/docs/build/custom-schemas).
@@ -100,3 +102,5 @@ Now that you understand how a model's schema is being generated, you can adjust
- You can also adjust your `target` details (for example, changing the name of a target)
If you change the logic in `generate_schema_name`, it's important that you consider whether two users will end up writing to the same schema when developing dbt models. This consideration is the reason why the default implementation of the macro concatenates your target schema and custom schema together — we promise we were trying to be helpful by implementing this behavior, but acknowledge that the resulting schema name is unintuitive.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/dremio-lakehouse.md b/website/docs/guides/dremio-lakehouse.md
index 378ec857f6a..8e55931a87c 100644
--- a/website/docs/guides/dremio-lakehouse.md
+++ b/website/docs/guides/dremio-lakehouse.md
@@ -12,6 +12,9 @@ tags: ['Dremio', 'dbt Core']
level: 'Intermediate'
recently_updated: true
---
+
+
+
## Introduction
This guide will demonstrate how to build a data lakehouse with dbt Core 1.5 or newer and Dremio Cloud. You can simplify and optimize your data infrastructure with dbt's robust transformation framework and Dremio’s open and easy data lakehouse. The integrated solution empowers companies to establish a strong data and analytics foundation, fostering self-service analytics and enhancing business insights while simplifying operations by eliminating the necessity to write complex Extract, Transform, and Load (ETL) pipelines.
@@ -194,3 +197,5 @@ GROUP BY vendor_id
This completes the integration setup and data is ready for business consumption.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
index 3e388949d59..b4cea114f1a 100644
--- a/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
+++ b/website/docs/guides/how-to-use-databricks-workflows-to-run-dbt-cloud-jobs.md
@@ -12,6 +12,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
Using Databricks workflows to call the dbt Cloud job API can be useful for several reasons:
@@ -204,3 +206,5 @@ You can set up workflows directly from the notebook OR by adding this notebook t
Multiple Workflow tasks can be set up using the same notebook by configuring the `job_id` parameter to point to different dbt Cloud jobs.
Using Databricks workflows to access the dbt Cloud job API can improve integration of your data pipeline processes and enable scheduling of more complex workflows.
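
The core of the notebook pattern described above is a single call to the dbt Cloud Administrative API's job-trigger endpoint. Here is a hedged, standalone sketch (not the guide's exact notebook code); the account ID, job ID, and token are placeholders, and in practice the token would come from a Databricks secret scope.

```python
# Hedged sketch: trigger a dbt Cloud job run via the Administrative API (v2).
import requests

ACCOUNT_ID = 1234            # placeholder dbt Cloud account ID
JOB_ID = 5678                # placeholder dbt Cloud job ID
API_TOKEN = "<service token>"  # placeholder; store as a Databricks secret in practice

response = requests.post(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {API_TOKEN}", "Content-Type": "application/json"},
    json={"cause": "Triggered by a Databricks workflow"},
)
response.raise_for_status()
run_id = response.json()["data"]["id"]  # use this ID to poll the run's status
print(f"Started dbt Cloud run {run_id}")
```

Parameterizing `JOB_ID` is what lets multiple workflow tasks reuse the same notebook against different dbt Cloud jobs.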
+
+
\ No newline at end of file
diff --git a/website/docs/guides/manual-install-qs.md b/website/docs/guides/manual-install-qs.md
index b433649299f..513a402e3ac 100644
--- a/website/docs/guides/manual-install-qs.md
+++ b/website/docs/guides/manual-install-qs.md
@@ -8,6 +8,9 @@ icon: 'fa-light fa-square-terminal'
tags: ['dbt Core','Quickstart']
hide_table_of_contents: true
---
+
+
+
## Introduction
When you use dbt Core to work with dbt, you will be editing files locally using a code editor, and running projects using a command line interface (CLI).
@@ -471,3 +474,5 @@ For more info on how to get started, refer to [create and schedule jobs](/docs/d
For more information about using dbt Core to schedule a job, refer to the [dbt airflow](/blog/dbt-airflow-spiritual-alignment) blog post.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/microsoft-fabric-qs.md b/website/docs/guides/microsoft-fabric-qs.md
index 9189e126e7c..38e7dc173b6 100644
--- a/website/docs/guides/microsoft-fabric-qs.md
+++ b/website/docs/guides/microsoft-fabric-qs.md
@@ -7,6 +7,9 @@ hide_table_of_contents: true
tags: ['dbt Cloud','Quickstart']
recently_updated: true
---
+
+
+
## Introduction
In this quickstart guide, you'll learn how to use dbt Cloud with [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric). It will show you how to:
@@ -313,6 +316,8 @@ Later, you can connect your business intelligence (BI) tools to these views and
This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies.
+
+
#### FAQs {#faq-2}
diff --git a/website/docs/guides/migrate-from-spark-to-databricks.md b/website/docs/guides/migrate-from-spark-to-databricks.md
index 99d66ac5876..18140a910b9 100644
--- a/website/docs/guides/migrate-from-spark-to-databricks.md
+++ b/website/docs/guides/migrate-from-spark-to-databricks.md
@@ -12,6 +12,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
You can migrate your projects from using the `dbt-spark` adapter to using the [dbt-databricks adapter](https://github.com/databricks/dbt-databricks). In collaboration with dbt Labs, Databricks built this adapter using dbt-spark as the foundation and added some critical improvements. With it, you get an easier setup — requiring only three inputs for authentication — and more features such as support for [Unity Catalog](https://www.databricks.com/product/unity-catalog).
@@ -128,3 +130,5 @@ your_profile_name:
```
+
+
\ No newline at end of file
diff --git a/website/docs/guides/migrate-from-stored-procedures.md b/website/docs/guides/migrate-from-stored-procedures.md
index 13dc3085548..36f012de207 100644
--- a/website/docs/guides/migrate-from-stored-procedures.md
+++ b/website/docs/guides/migrate-from-stored-procedures.md
@@ -13,6 +13,8 @@ level: 'Beginner'
recently_updated: true
---
+
+
## Introduction
One of the more common situations that new dbt adopters encounter is a historical codebase of transformations written as a hodgepodge of DDL and DML statements, or stored procedures. Going from DML statements to dbt models is often a challenging hump for new users to get over, because the process involves a significant paradigm shift from a procedural flow of building a dataset (e.g. a series of DDL and DML statements) to a declarative approach to defining a dataset (e.g. how dbt uses SELECT statements to express data models). This guide aims to provide tips, tricks, and common patterns for converting DML statements to dbt models.
@@ -375,3 +377,5 @@ There are a couple important concepts to understand here:
## Migrate stored procedures
The techniques shared above are useful ways to get started converting the individual DML statements that are often found in stored procedures. Using these types of patterns, legacy procedural code can be rapidly transitioned to dbt models that are much more readable, maintainable, and benefit from software engineering best practices like DRY principles. Additionally, once transformations are rewritten as dbt models, it becomes much easier to test the transformations to ensure that the data being used downstream is high-quality and trustworthy.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/productionize-your-dbt-databricks-project.md b/website/docs/guides/productionize-your-dbt-databricks-project.md
index 3584cffba77..109d64e8282 100644
--- a/website/docs/guides/productionize-your-dbt-databricks-project.md
+++ b/website/docs/guides/productionize-your-dbt-databricks-project.md
@@ -12,6 +12,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
Welcome to the third installment of our comprehensive series on optimizing and deploying your data pipelines using Databricks and dbt Cloud. In this guide, we'll dive into delivering these models to end users while incorporating best practices to ensure that your production data remains reliable and timely.
@@ -194,3 +196,5 @@ To get the most out of both tools, you can use the [persist docs config](/refere
- [Trigger a dbt Cloud Job in your automated workflow with Python](https://discourse.getdbt.com/t/triggering-a-dbt-cloud-job-in-your-automated-workflow-with-python/2573)
- [Databricks + dbt Cloud Quickstart Guide](/guides/databricks)
- Reach out to your Databricks account team to get access to preview features on Databricks.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/redshift-qs.md b/website/docs/guides/redshift-qs.md
index 0d18d3c5d84..7a017be5dfa 100644
--- a/website/docs/guides/redshift-qs.md
+++ b/website/docs/guides/redshift-qs.md
@@ -6,6 +6,9 @@ icon: 'redshift'
hide_table_of_contents: true
tags: ['Redshift', 'dbt Cloud','Quickstart']
---
+
+
+
## Introduction
In this quickstart guide, you'll learn how to use dbt Cloud with Redshift. It will show you how to:
@@ -384,6 +387,8 @@ Later, you can connect your business intelligence (BI) tools to these views and
This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies.
+
+
#### FAQs {#faq-2}
@@ -393,4 +398,3 @@ Later, you can connect your business intelligence (BI) tools to these views and
-
diff --git a/website/docs/guides/refactoring-legacy-sql.md b/website/docs/guides/refactoring-legacy-sql.md
index a339e523020..13896c3ace3 100644
--- a/website/docs/guides/refactoring-legacy-sql.md
+++ b/website/docs/guides/refactoring-legacy-sql.md
@@ -13,6 +13,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
You may have already learned how to build dbt models from scratch. But in reality, you probably already have some queries or stored procedures that power analyses and dashboards, and now you’re wondering how to port those into dbt.
@@ -257,3 +259,5 @@ Sure, we could write our own query manually to audit these models, but using the
Head to the free on-demand course, [Refactoring from Procedural SQL to dbt](https://courses.getdbt.com/courses/refactoring-sql-for-modularity) for a more in-depth refactoring example + a practice refactoring problem to test your skills.
Questions on this guide or the course? Drop a note in #learn-on-demand in [dbt Community Slack](https://getdbt.com/community).
+
+
\ No newline at end of file
diff --git a/website/docs/guides/serverless-datadog.md b/website/docs/guides/serverless-datadog.md
index 931ba9832ab..10444ccae9a 100644
--- a/website/docs/guides/serverless-datadog.md
+++ b/website/docs/guides/serverless-datadog.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
This guide will teach you how to build and host a basic Python app which will add dbt Cloud job events to Datadog. When a dbt Cloud job completes, the app will create a log entry for each node that was run, containing all the information about the node provided by the [Discovery API](/docs/dbt-cloud-apis/discovery-schema-job-models).
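
For orientation, here is a hedged sketch of the log-forwarding step only, using Datadog's HTTP intake endpoint for logs. The node list is a placeholder standing in for data returned by the Discovery API, and this is not the guide's actual app code.

```python
# Hedged sketch: forward one log entry per dbt node to Datadog's v2 logs intake endpoint.
import json
import os

import requests

DD_API_KEY = os.environ["DD_API_KEY"]  # placeholder: your Datadog API key
node_results = [                        # placeholder: node data pulled from the Discovery API
    {"uniqueId": "model.jaffle_shop.customers", "status": "success", "executionTime": 3.2},
]

logs = [
    {
        "ddsource": "dbt_cloud",
        "service": "dbt_cloud_webhook",
        "message": json.dumps(node),    # serialize the node details into the log message
    }
    for node in node_results
]

resp = requests.post(
    "https://http-intake.logs.datadoghq.com/api/v2/logs",
    headers={"DD-API-KEY": DD_API_KEY, "Content-Type": "application/json"},
    json=logs,
)
resp.raise_for_status()
```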
@@ -119,3 +121,5 @@ Set these secrets as follows, replacing `abc123` etc with actual values:
## Deploy your app
After you set your secrets, fly.io will redeploy your application. When it has completed successfully, go back to the dbt Cloud webhook settings and click **Test Endpoint**.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/serverless-pagerduty.md b/website/docs/guides/serverless-pagerduty.md
index 50cc1b2b36e..ffd25f8989c 100644
--- a/website/docs/guides/serverless-pagerduty.md
+++ b/website/docs/guides/serverless-pagerduty.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
This guide will teach you how to build and host a basic Python app which will monitor dbt Cloud jobs and create PagerDuty alarms based on failure. To do this, when a dbt Cloud job completes it will:
@@ -123,3 +125,5 @@ flyctl secrets set DBT_CLOUD_SERVICE_TOKEN=abc123 DBT_CLOUD_AUTH_TOKEN=def456 PD
## Deploy your app
After you set your secrets, fly.io will redeploy your application. When it has completed successfully, go back to the dbt Cloud webhook settings and click **Test Endpoint**.
+
+
diff --git a/website/docs/guides/set-up-ci.md b/website/docs/guides/set-up-ci.md
index 89d7c5a14fa..39f730f669d 100644
--- a/website/docs/guides/set-up-ci.md
+++ b/website/docs/guides/set-up-ci.md
@@ -11,6 +11,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
By validating your code _before_ it goes into production, you don't need to spend your afternoon fielding messages from people whose reports are suddenly broken.
@@ -353,3 +355,5 @@ Adding a regularly-scheduled job inside of the QA environment whose only command
When the Release Manager is ready to cut a new release, they will manually open a PR from `qa` into `main` from their git provider (e.g. GitHub, GitLab, Azure DevOps). dbt Cloud will detect the new PR, at which point the existing check in the CI environment will trigger and run. When using the [baseline configuration](/guides/set-up-ci), it's possible to kick off the PR creation from inside of the dbt Cloud IDE. Under this paradigm, that button will create PRs targeting your QA branch instead.
To test your new flow, create a new branch in the dbt Cloud IDE then add a new file or modify an existing one. Commit it, then create a new Pull Request (not a draft) against your `qa` branch. You'll see the integration tests begin to run. Once they complete, manually create a PR against `main`, and within a few seconds you’ll see the tests run again but this time incorporating all changes from all code that hasn't been merged to main yet.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/set-up-your-databricks-dbt-project.md b/website/docs/guides/set-up-your-databricks-dbt-project.md
index b2988f36589..c874fb486d1 100644
--- a/website/docs/guides/set-up-your-databricks-dbt-project.md
+++ b/website/docs/guides/set-up-your-databricks-dbt-project.md
@@ -12,6 +12,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
Databricks and dbt Labs are partnering to help data teams think like software engineering teams and ship trusted data, faster. The dbt-databricks adapter enables dbt users to leverage the latest Databricks features in their dbt project. Hundreds of customers are now using dbt and Databricks to build expressive and reliable data pipelines on the Lakehouse, generating data assets that enable analytics, ML, and AI use cases throughout the business.
@@ -114,3 +116,5 @@ Next, you’ll need somewhere to store and version control your code that allows
### Next steps
Now that your project is configured, you can start transforming your Databricks data with dbt. To help you scale efficiently, we recommend you follow our best practices, starting with the [Unity Catalog best practices](/best-practices/dbt-unity-catalog-best-practices), then you can [Optimize dbt models on Databricks](/guides/optimize-dbt-models-on-databricks).
+
+
\ No newline at end of file
diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md
index 8e99cffdefe..df0ada9d7e5 100644
--- a/website/docs/guides/sl-migration.md
+++ b/website/docs/guides/sl-migration.md
@@ -11,6 +11,8 @@ level: 'Intermediate'
recently_updated: true
---
+
+
## Introduction
The legacy Semantic Layer will be deprecated in H2 2023. Additionally, the `dbt_metrics` package will not be supported in dbt v1.6 and later. If you are using `dbt_metrics`, you'll need to upgrade your configurations before upgrading to v1.6. This guide is for people who have the legacy dbt Semantic Layer setup and would like to migrate to the new dbt Semantic Layer. The estimated migration time is two weeks.
@@ -138,3 +140,5 @@ If you created a new environment in [Step 3](#step-3-setup-the-semantic-layer-in
- [Why we're deprecating the dbt_metrics package](/blog/deprecating-dbt-metrics) blog post
- [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
- [dbt Semantic Layer on-demand courses](https://courses.getdbt.com/courses/semantic-layer)
+
+
\ No newline at end of file
diff --git a/website/docs/guides/sl-partner-integration-guide.md b/website/docs/guides/sl-partner-integration-guide.md
index f55ebb41435..cf19f914d30 100644
--- a/website/docs/guides/sl-partner-integration-guide.md
+++ b/website/docs/guides/sl-partner-integration-guide.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
To fit your tool within the world of the Semantic Layer, dbt Labs offers some best practice recommendations for how to expose metrics and allow users to interact with them seamlessly.
@@ -174,3 +176,6 @@ These are recommendations on how to evolve a Semantic Layer integration and not
- [Use the dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) to learn about the product.
- [Build your metrics](/docs/build/build-metrics-intro) for more info about MetricFlow and its components.
- [dbt Semantic Layer integrations page](https://www.getdbt.com/product/semantic-layer-integrations) for information about the available partner integrations.
+
+
+
\ No newline at end of file
diff --git a/website/docs/guides/snowflake-qs.md b/website/docs/guides/snowflake-qs.md
index 6d8a4494285..21801b72355 100644
--- a/website/docs/guides/snowflake-qs.md
+++ b/website/docs/guides/snowflake-qs.md
@@ -6,6 +6,9 @@ icon: 'snowflake'
tags: ['dbt Cloud','Quickstart','Snowflake']
hide_table_of_contents: true
---
+
+
+
## Introduction
In this quickstart guide, you'll learn how to use dbt Cloud with Snowflake. It will show you how to:
@@ -474,7 +477,9 @@ Sources make it possible to name and describe the data loaded into your warehous
models will still query from the same raw data source in Snowflake. By using `source`, you can
test and document your raw data and also understand the lineage of your sources.
+
+
diff --git a/website/docs/guides/starburst-galaxy-qs.md b/website/docs/guides/starburst-galaxy-qs.md
index 080b6d9f411..edf33ccd77c 100644
--- a/website/docs/guides/starburst-galaxy-qs.md
+++ b/website/docs/guides/starburst-galaxy-qs.md
@@ -6,6 +6,9 @@ icon: 'starburst'
hide_table_of_contents: true
tags: ['dbt Cloud','Quickstart']
---
+
+
+
## Introduction
In this quickstart guide, you'll learn how to use dbt Cloud with [Starburst Galaxy](https://www.starburst.io/platform/starburst-galaxy/). It will show you how to:
@@ -407,6 +410,8 @@ Later, you can connect your business intelligence (BI) tools to these views and
This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies.
+
+
#### FAQs {#faq-2}
diff --git a/website/docs/guides/using-jinja.md b/website/docs/guides/using-jinja.md
index 9f098bb637f..6622d4c4900 100644
--- a/website/docs/guides/using-jinja.md
+++ b/website/docs/guides/using-jinja.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
In this guide, we're going to take a common pattern used in SQL, and then use Jinja to improve our code.
@@ -345,3 +347,5 @@ group by 1
You can then remove the macros that we built in previous steps. Whenever you're trying to solve a problem that you think others may have solved previously, it's worth checking the [dbt-utils](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/) package to see if someone has shared their code!
+
+
\ No newline at end of file
diff --git a/website/docs/guides/zapier-ms-teams.md b/website/docs/guides/zapier-ms-teams.md
index 66596d590e0..d841ca3305a 100644
--- a/website/docs/guides/zapier-ms-teams.md
+++ b/website/docs/guides/zapier-ms-teams.md
@@ -10,6 +10,9 @@ tags: ['Webhooks']
level: 'Advanced'
recently_updated: true
---
+
+
+
## Introduction
This guide will show you how to set up an integration between dbt Cloud jobs and Microsoft Teams using [dbt Cloud Webhooks](/docs/deploy/webhooks) and Zapier, similar to the [native Slack integration](/docs/deploy/job-notifications#slack-notifications).
@@ -169,3 +172,5 @@ When you're happy with it, remember to ensure that your `run_id` and `account_id
- If you post to a chat instead of a team channel, you don't need to add the Zapier app to Microsoft Teams.
- If you post to a chat instead of a team channel, note that markdown is not supported and you will need to remove the markdown formatting.
- If you chose the **Catch Hook** trigger instead of **Catch Raw Hook**, you will need to pass each required property from the webhook as an input instead of running `json.loads()` against the raw body. You will also need to remove the validation code.
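
For context on that last point, here is a hedged sketch of what the raw-body handling and validation look like when you do use **Catch Raw Hook**: dbt Cloud signs each webhook by taking an HMAC-SHA256 digest of the raw request body with your webhook secret, and the app compares that digest to the authorization header before parsing the body with `json.loads()`. Function and variable names here are illustrative, not the guide's exact code step.

```python
# Hedged sketch of webhook verification for a dbt Cloud webhook delivery.
import hashlib
import hmac
import json


def parse_dbt_cloud_webhook(raw_body: str, auth_header: str, secret: str) -> dict:
    """Verify the webhook signature, then return the parsed payload."""
    digest = hmac.new(secret.encode("utf-8"), raw_body.encode("utf-8"), hashlib.sha256).hexdigest()
    if digest != auth_header:
        raise ValueError("Signature mismatch: this request did not come from dbt Cloud")
    return json.loads(raw_body)  # safe to parse once the signature checks out
```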
+
+
\ No newline at end of file
diff --git a/website/docs/guides/zapier-refresh-mode-report.md b/website/docs/guides/zapier-refresh-mode-report.md
index 5bab165b11d..a1446b3be7c 100644
--- a/website/docs/guides/zapier-refresh-mode-report.md
+++ b/website/docs/guides/zapier-refresh-mode-report.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
This guide will teach you how to refresh a Mode dashboard when a dbt Cloud job has completed successfully and there is fresh data available. The integration will:
@@ -131,3 +133,5 @@ return
## Test and deploy
You can iterate on the Code step by modifying the code and then running the test again. When you're happy with it, you can publish your Zap.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/zapier-refresh-tableau-workbook.md b/website/docs/guides/zapier-refresh-tableau-workbook.md
index f614b64eaa2..31a78324eb5 100644
--- a/website/docs/guides/zapier-refresh-tableau-workbook.md
+++ b/website/docs/guides/zapier-refresh-tableau-workbook.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
This guide will teach you how to refresh a Tableau workbook that leverages [extracts](https://help.tableau.com/current/pro/desktop/en-us/extracting_data.htm) when a dbt Cloud job has completed successfully and there is fresh data available. The integration will:
@@ -170,3 +172,5 @@ return {"message": "Workbook refresh has been queued"}
## Test and deploy
To make changes to your code, you can modify it and test it again. When you're happy with it, you can publish your Zap.
+
+
\ No newline at end of file
diff --git a/website/docs/guides/zapier-slack.md b/website/docs/guides/zapier-slack.md
index 61b96658f95..d4d3ab0823c 100644
--- a/website/docs/guides/zapier-slack.md
+++ b/website/docs/guides/zapier-slack.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---
+
+
## Introduction
This guide will show you how to set up an integration between dbt Cloud jobs and Slack using [dbt Cloud webhooks](/docs/deploy/webhooks) and Zapier. It builds on the [native Slack integration](/docs/deploy/job-notifications#slack-notifications) by attaching error message details of models and tests in a thread.
@@ -309,3 +311,5 @@ Set the **Message Text** to **5. Threaded Errors Post** from the Run Python step
### 8. Test and deploy
When you're done testing your Zap, publish it.
+
+