diff --git a/.github/ISSUE_TEMPLATE/a-improve-docs.yml b/.github/ISSUE_TEMPLATE/a-improve-docs.yml index 70b173e49a4..c9030bc227b 100644 --- a/.github/ISSUE_TEMPLATE/a-improve-docs.yml +++ b/.github/ISSUE_TEMPLATE/a-improve-docs.yml @@ -5,7 +5,7 @@ body: - type: markdown attributes: value: | - * You can ask questions or submit ideas for the dbt docs in [Discussions](https://github.com/dbt-labs/docs.getdbt.com/discussions) + * You can ask questions or submit ideas for the dbt docs in [Issues](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose) * Before you file an issue read the [Contributing guide](https://github.com/dbt-labs/docs.getdbt.com#contributing). * Check to make sure someone hasn't already opened a similar [issue](https://github.com/dbt-labs/docs.getdbt.com/issues). diff --git a/.github/ISSUE_TEMPLATE/improve-the-site.yml b/.github/ISSUE_TEMPLATE/improve-the-site.yml index dd585324f89..01ebdea711a 100644 --- a/.github/ISSUE_TEMPLATE/improve-the-site.yml +++ b/.github/ISSUE_TEMPLATE/improve-the-site.yml @@ -5,7 +5,7 @@ body: - type: markdown attributes: value: | - * You can ask questions or submit ideas for the dbt docs in [Discussions](https://github.com/dbt-labs/docs.getdbt.com/discussions) + * You can ask questions or submit ideas for the dbt docs in [Issues](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose) * Before you file an issue read the [Contributing guide](https://github.com/dbt-labs/docs.getdbt.com#contributing). * Check to make sure someone hasn't already opened a similar [issue](https://github.com/dbt-labs/docs.getdbt.com/issues). diff --git a/.github/workflows/labeler.yml b/.github/workflows/labeler.yml index 7e4bb5c268a..cc231cdcde3 100644 --- a/.github/workflows/labeler.yml +++ b/.github/workflows/labeler.yml @@ -5,8 +5,8 @@ name: "Pull Request Labeler" on: -- pull_request_target - + pull_request_target: + types: [opened] jobs: triage: permissions: diff --git a/README.md b/README.md index da82ab45fd6..c749fedf95a 100644 --- a/README.md +++ b/README.md @@ -17,7 +17,7 @@ Creating an inclusive and equitable environment for our documents is more import We welcome contributions from community members to this repo: - **Fixes**: When you notice an error, you can use the `Edit this page` button at the bottom of each page to suggest a change. - **New documentation**: If you contributed code in [dbt-core](https://github.com/dbt-labs/dbt-core), we encourage you to also write the docs here! Please reach out in the dbt community if you need help finding a place for these docs. -- **Major rewrites**: You can [file an issue](https://github.com/dbt-labs/docs.getdbt.com/issues/new?assignees=&labels=content%2Cimprovement&template=improve-docs.yml) or [start a discussion](https://github.com/dbt-labs/docs.getdbt.com/discussions) to propose ideas for a content area that requires attention. +- **Major rewrites**: You can [file an issue](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose) to propose ideas for a content area that requires attention. You can use components documented in the [docusaurus library](https://v2.docusaurus.io/docs/markdown-features/). 
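For clarity on the `labeler.yml` hunk above, here is a hedged sketch of the resulting trigger block (this only restates the change; `types: [opened]` limits the labeler to newly opened pull requests instead of every event the bare `pull_request_target` form matches):

```yaml
# .github/workflows/labeler.yml (excerpt)
# Run the labeler only when a pull request is first opened,
# rather than on every synchronize/edit event.
on:
  pull_request_target:
    types: [opened]
```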
diff --git a/website/dbt-versions.js b/website/dbt-versions.js index 3eff99e7f98..8689547fd67 100644 --- a/website/dbt-versions.js +++ b/website/dbt-versions.js @@ -27,6 +27,10 @@ exports.versions = [ ] exports.versionedPages = [ + { + "page": "reference/resource-configs/store_failures_as", + "firstVersion": "1.7", + }, { "page": "docs/build/build-metrics-intro", "firstVersion": "1.6", diff --git a/website/docs/community/resources/oss-expectations.md b/website/docs/community/resources/oss-expectations.md index 649a9dea94f..9c916de1240 100644 --- a/website/docs/community/resources/oss-expectations.md +++ b/website/docs/community/resources/oss-expectations.md @@ -4,7 +4,7 @@ title: "Expectations for OSS contributors" Whether it's a dbt package, a plugin, `dbt-core`, or this very documentation site, contributing to the open source code that supports the dbt ecosystem is a great way to level yourself up as a developer, and to give back to the community. The goal of this page is to help you understand what to expect when contributing to dbt open source software (OSS). While we can only speak for our own experience as open source maintainers, many of these guidelines apply when contributing to other open source projects, too. -Have you seen things in other OSS projects that you quite like, and think we could learn from? [Open a discussion on the Developer Hub](https://github.com/dbt-labs/docs.getdbt.com/discussions/new), or start a conversation in the dbt Community Slack (for example: `#community-strategy`, `#dbt-core-development`, `#package-ecosystem`, `#adapter-ecosystem`). We always appreciate hearing from you! +Have you seen things in other OSS projects that you quite like, and think we could learn from? [Open a discussion on the dbt Community Forum](https://discourse.getdbt.com), or start a conversation in the dbt Community Slack (for example: `#community-strategy`, `#dbt-core-development`, `#package-ecosystem`, `#adapter-ecosystem`). We always appreciate hearing from you! ## Principles @@ -51,7 +51,7 @@ An issue could be a bug you’ve identified while using the product or reading t ### Best practices for issues -- Issues are **not** for support / troubleshooting / debugging help. Please [open a discussion on the Developer Hub](https://github.com/dbt-labs/docs.getdbt.com/discussions/new), so other future users can find and read proposed solutions. If you need help formulating your question, you can post in the `#advice-dbt-help` channel in the [dbt Community Slack](https://www.getdbt.com/community/). +- Issues are **not** for support / troubleshooting / debugging help. Please [open a discussion on the dbt Community Forum](https://discourse.getdbt.com), so other future users can find and read proposed solutions. If you need help formulating your question, you can post in the `#advice-dbt-help` channel in the [dbt Community Slack](https://www.getdbt.com/community/). - Always search existing issues first, to see if someone else had the same idea / found the same bug you did. - Many repositories offer templates for creating issues, such as when reporting a bug or requesting a new feature. If available, please select the relevant template and fill it out to the best of your ability. This will help other people understand your issue and respond. 
diff --git a/website/docs/docs/build/cumulative-metrics.md b/website/docs/docs/build/cumulative-metrics.md index 3104fd7578a..708045c1f3e 100644
--- a/website/docs/docs/build/cumulative-metrics.md
+++ b/website/docs/docs/build/cumulative-metrics.md
@@ -36,6 +36,13 @@ metrics:
 ```

+## Limitations
+Cumulative metrics are currently under active development and have the following limitations:
+
+1. You can only use the [`metric_time` dimension](/docs/build/dimensions#time) to query cumulative metrics. If you don't use `metric_time` in the query, the cumulative metric will return incorrect results because it won't perform the time spine join. This means you cannot reference time dimensions other than `metric_time` in the query.
+2. If you use `metric_time` in your query filter but don't include "start_time" and "end_time," cumulative metrics will left-censor the input data. For example, if you query a cumulative metric with a 7-day window with the filter `{{ TimeDimension('metric_time') }} BETWEEN '2023-08-15' AND '2023-08-30'`, the values for `2023-08-15` to `2023-08-20` will be missing or incomplete. This is because we apply the `metric_time` filter to the aggregation input. To avoid this, you must use `start_time` and `end_time` in the query filter.
+
+
 ## Cumulative metrics example

diff --git a/website/docs/docs/build/dimensions.md b/website/docs/docs/build/dimensions.md index 49ae9045021..c82a9f8ced4 100644
--- a/website/docs/docs/build/dimensions.md
+++ b/website/docs/docs/build/dimensions.md
@@ -102,7 +102,7 @@ dimensions:
 To use BigQuery as your data platform, time dimensions columns need to be in the datetime data type. If they are stored in another type, you can cast them to datetime using the `expr` property. Time dimensions are used to group metrics by different levels of time, such as day, week, month, quarter, and year. MetricFlow supports these granularities, which can be specified using the `time_granularity` parameter.
 :::

-Time has additional parameters specified under the `type_params` section. When you query one or more metrics in MetricFlow using the CLI, the default time dimension for a single metric is the primary time dimension, which you can refer to as `metric_time` or use the dimensions' name.
+Time has additional parameters specified under the `type_params` section. When you query one or more metrics in MetricFlow using the CLI, the default time dimension for a single metric is the aggregation time dimension, which you can refer to as `metric_time` or use the dimension's name.

 You can use multiple time groups in separate metrics. For example, the `users_created` metric uses `created_at`, and the `users_deleted` metric uses `deleted_at`:

diff --git a/website/docs/docs/build/groups.md b/website/docs/docs/build/groups.md index 7ac5337ba0d..d4fda045277 100644
--- a/website/docs/docs/build/groups.md
+++ b/website/docs/docs/build/groups.md
@@ -19,7 +19,7 @@ This functionality is new in v1.5.

 ## About groups

-A group is a collection of nodes within a dbt DAG. Groups are named, and every group has an `owner`. They enable intentional collaboration within and across teams by restricting [access to private](/reference/resource-properties/access) models.
+A group is a collection of nodes within a dbt DAG. Groups are named, and every group has an `owner`. They enable intentional collaboration within and across teams by restricting [access to private](/reference/resource-configs/access) models.
Group members may include models, tests, seeds, snapshots, analyses, and metrics. (Not included: sources and exposures.) Each node may belong to only one group. @@ -94,7 +94,7 @@ select ... ### Referencing a model in a group -By default, all models within a group have the `protected` [access modifier](/reference/resource-properties/access). This means they can be referenced by downstream resources in _any_ group in the same project, using the [`ref`](/reference/dbt-jinja-functions/ref) function. If a grouped model's `access` property is set to `private`, only resources within its group can reference it. +By default, all models within a group have the `protected` [access modifier](/reference/resource-configs/access). This means they can be referenced by downstream resources in _any_ group in the same project, using the [`ref`](/reference/dbt-jinja-functions/ref) function. If a grouped model's `access` property is set to `private`, only resources within its group can reference it. diff --git a/website/docs/docs/build/materializations.md b/website/docs/docs/build/materializations.md index 463651ccc77..ae75b575d5f 100644 --- a/website/docs/docs/build/materializations.md +++ b/website/docs/docs/build/materializations.md @@ -140,8 +140,7 @@ required with incremental materializations less configuration options available, see your database platform's docs for more details * Materialized views may not be supported by every database platform * **Advice:** - * Consider materialized views for use cases where incremental models are sufficient, -but you would like the data platform to manage the incremental logic and refresh. + * Consider materialized views for use cases where incremental models are sufficient, but you would like the data platform to manage the incremental logic and refresh. ## Python materializations diff --git a/website/docs/docs/build/metricflow-time-spine.md b/website/docs/docs/build/metricflow-time-spine.md index 254fa3cc5f0..997d85e38a8 100644 --- a/website/docs/docs/build/metricflow-time-spine.md +++ b/website/docs/docs/build/metricflow-time-spine.md @@ -12,6 +12,8 @@ To create this table, you need to create a model in your dbt project called `met + + ```sql {{ config( @@ -38,8 +40,44 @@ final as ( select * from final ``` + + + + + +```sql +{{ + config( + materialized = 'table', + ) +}} + +with days as ( + + {{ + dbt.date_spine( + 'day', + "to_date('01/01/2000','mm/dd/yyyy')", + "to_date('01/01/2027','mm/dd/yyyy')" + ) + }} + +), + +final as ( + select cast(date_day as date) as date_day + from days +) + +select * from final +``` + + + + + ```sql -- filename: metricflow_time_spine.sql -- BigQuery supports DATE() instead of TO_DATE(). Use this model if you're using BigQuery @@ -61,4 +99,33 @@ final as ( select * from final ``` + + + + + +```sql +-- filename: metricflow_time_spine.sql +-- BigQuery supports DATE() instead of TO_DATE(). Use this model if you're using BigQuery +{{config(materialized='table')}} +with days as ( + {{dbt.date_spine( + 'day', + "DATE(2000,01,01)", + "DATE(2030,01,01)" + ) + }} +), + +final as ( + select cast(date_day as date) as date_day + from days +) + +select * +from final +``` + + + You only need to include the `date_day` column in the table. MetricFlow can handle broader levels of detail, but it doesn't currently support finer grains. 
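If you want a quick sanity check of the time spine on its own, here is a minimal sketch (assuming the model file is named `metricflow_time_spine.sql` as above, and dbt v1.5+ for `dbt show`):

```bash
# Build only the time spine model
dbt run --select metricflow_time_spine

# Preview a few rows to confirm the date_day column materialized as expected
dbt show --select metricflow_time_spine --limit 5
```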
diff --git a/website/docs/docs/build/metrics-overview.md b/website/docs/docs/build/metrics-overview.md index e6d875386ee..8761901dc59 100644
--- a/website/docs/docs/build/metrics-overview.md
+++ b/website/docs/docs/build/metrics-overview.md
@@ -16,7 +16,7 @@ The keys for metrics definitions are:
 | `description` | Provide the description for your metric. | Optional |
 | `type` | Define the type of metric, which can be `simple`, `ratio`, `cumulative`, or `derived`. | Required |
 | `type_params` | Additional parameters used to configure metrics. `type_params` are different for each metric type. | Required |
-| `configs` | Provide the specific configurations for your metric. | Optional |
+| `config` | Provide the specific configurations for your metric. | Optional |
 | `label` | The display name for your metric. This value will be shown in downstream tools. | Required |
 | `filter` | You can optionally add a filter string to any metric type, applying filters to dimensions, entities, or time dimensions during metric computation. Consider it as your WHERE clause. | Optional |
 | `meta` | Additional metadata you want to add to your metric. | Optional |
@@ -31,10 +31,10 @@ metrics:
   type: the type of the metric ## Required
   type_params: ## Required
     - specific properties for the metric type
-  configs: here for `enabled` ## Optional
+  config: here for `enabled` ## Optional
   label: The display name for your metric. This value will be shown in downstream tools. ## Required
   filter: | ## Optional
-    {{ Dimension('entity__name') }} > 0 and {{ Dimension(' entity__another name') }} is not
+    {{ Dimension('entity__name') }} > 0 and {{ Dimension('entity__another_name') }} is not
     null
```

diff --git a/website/docs/docs/build/semantic-models.md b/website/docs/docs/build/semantic-models.md index bb56bd212e6..3341f49609a 100644
--- a/website/docs/docs/build/semantic-models.md
+++ b/website/docs/docs/build/semantic-models.md
@@ -8,17 +8,20 @@ sidebar_label: Semantic models
 tags: [Metrics, Semantic Layer]
 ---

-Semantic models serve as the foundation for defining data in MetricFlow, which powers the dbt Semantic Layer. You can think of semantic models as nodes in your semantic graph, connected via entities as edges. MetricFlow takes semantic models defined in YAML configuration files as inputs and creates a semantic graph that can be used to query metrics.
+Semantic models are the foundation for data definition in MetricFlow, which powers the dbt Semantic Layer:

-Each semantic model corresponds to a dbt model in your DAG. Therefore you will have one YAML config for each semantic model in your dbt project. You can create multiple semantic models out of a single dbt model, as long as you give each semantic model a unique name.
-
-You can configure semantic models in your dbt project directory in a `YAML` file. Depending on your project structure, you can nest semantic models under a `metrics:` folder or organize them under project sources.

+- Think of semantic models as nodes connected by entities in a semantic graph.
+- MetricFlow uses YAML configuration files to create this graph for querying metrics.
+- Each semantic model corresponds to a dbt model in your DAG, requiring a unique YAML configuration for each semantic model.
+- You can create multiple semantic models from a single dbt model, as long as you give each semantic model a unique name.
+- Configure semantic models in a YAML file within your dbt project directory.
+- Organize them under a `metrics:` folder or within project sources as needed.
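To make these bullets concrete, here is a minimal hedged sketch of a semantic model YAML (all names are hypothetical; each component is explained in the table that follows):

```yaml
semantic_models:
  - name: orders                    # unique name; avoid double underscores
    description: "Order fact table at one row per order"
    model: ref('fct_orders')        # the dbt model this semantic model maps to
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id              # primary entity for joins
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_total           # aggregation that metrics can build on
        agg: sum
```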
Semantic models have 6 components and this page explains the definitions with some examples:

 | Component | Description | Type |
 | --------- | ----------- | ---- |
-| [Name](#name) | Unique name for the semantic model | Required |
+| [Name](#name) | Choose a unique name for the semantic model. Avoid using double underscores (__) in the name as they're not supported. | Required |
 | [Description](#description) | Includes important details in the description | Optional |
 | [Model](#model) | Specifies the dbt model for the semantic model using the `ref` function | Required |
 | [Defaults](#defaults) | The defaults for the model, currently only `agg_time_dimension` is supported. | Required |
@@ -26,6 +29,7 @@ Semantic models have 6 components and this page explains the definitions with so
 | [Primary Entity](#primary-entity) | If a primary entity exists, this component is Optional. If the semantic model has no primary entity, then this property is required. | Optional |
 | [Dimensions](#dimensions) | Different ways to group or slice data for a metric, they can be `time` or `categorical` | Required |
 | [Measures](#measures) | Aggregations applied to columns in your data model. They can be the final metric or used as building blocks for more complex metrics | Optional |
+| Label | The display name for your semantic model `node`, `dimension`, `entity`, and/or `measures` | Optional |

 ## Semantic models components

@@ -105,9 +109,32 @@ semantic_models:
         type: categorical
 ```

+
+Semantic models support configs in either the schema file or at the project level.
+
+Semantic model config in `models/semantic.yml`:
+```yml
+semantic_models:
+  - name: orders
+    config:
+      enabled: true | false
+      group: some_group
+```
+
+Semantic model config in `dbt_project.yml`:
+```yml
+semantic_models:
+  my_project_name:
+    +enabled: true | false
+    +group: some_group
+```
+
+
 ### Name

-Define the name of the semantic model. You must define a unique name for the semantic model. The semantic graph will use this name to identify the model, and you can update it at any time.
+Define the name of the semantic model. You must define a unique name for the semantic model. The semantic graph will use this name to identify the model, and you can update it at any time. Avoid using double underscores (__) in the name as they're not supported.

 ### Description

@@ -205,8 +232,7 @@ For semantic models with a measure, you must have a [primary time group](/docs/b
 | `agg` | dbt supports the following aggregations: `sum`, `max`, `min`, `count_distinct`, and `sum_boolean`. | Required |
 | `expr` | You can either reference an existing column in the table or use a SQL expression to create or derive a new one. | Optional |
 | `non_additive_dimension` | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional |
-| `create_metric`* | You can create a metric directly from a measure with create_metric: True and specify its display name with create_metric_display_name. | Optional |
-_*Coming soon_
+| `create_metric` | You can create a metric directly from a measure with `create_metric: True` and specify its display name with `create_metric_display_name`. The default is `false`. 
| Optional |

 import SetUpPages from '/snippets/_metrics-dependencies.md';

diff --git a/website/docs/docs/build/tests.md b/website/docs/docs/build/tests.md index fa78d0df905..75ee5992a76 100644
--- a/website/docs/docs/build/tests.md
+++ b/website/docs/docs/build/tests.md
@@ -241,7 +241,7 @@ where {{ column_name }} is null

 ## Storing test failures

-Normally, a test query will calculate failures as part of its execution. If you set the optional `--store-failures` flag or [`store_failures` config](/reference/resource-configs/store_failures), dbt will first save the results of a test query to a table in the database, and then query that table to calculate the number of failures.
+Normally, a test query will calculate failures as part of its execution. If you set the optional `--store-failures` flag or one of the [`store_failures`](/reference/resource-configs/store_failures) or [`store_failures_as`](/reference/resource-configs/store_failures_as) configs, dbt will first save the results of a test query to a table in the database, and then query that table to calculate the number of failures.

 This workflow allows you to query and examine failing records much more quickly in development:

diff --git a/website/docs/docs/cloud/about-cloud/about-cloud-ide.md b/website/docs/docs/cloud/about-cloud/about-cloud-ide.md index f0380f109f8..7643928feec 100644
--- a/website/docs/docs/cloud/about-cloud/about-cloud-ide.md
+++ b/website/docs/docs/cloud/about-cloud/about-cloud-ide.md
@@ -25,7 +25,7 @@ With the Cloud IDE, you can:

 For more information, read the complete [Cloud IDE guide](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud).

-## Relatd docs
+## Related docs

 - [IDE user interface](/docs/cloud/dbt-cloud-ide/ide-user-interface)
 - [Tips and tricks](/docs/cloud/dbt-cloud-ide/dbt-cloud-tips)

diff --git a/website/docs/docs/cloud/cloud-cli-installation.md b/website/docs/docs/cloud/cloud-cli-installation.md index 68a8ef365d6..44d411bbf2d 100644
--- a/website/docs/docs/cloud/cloud-cli-installation.md
+++ b/website/docs/docs/cloud/cloud-cli-installation.md
@@ -12,99 +12,189 @@
 These instructions are not intended for general audiences at this time.

 :::

-## Installing dbt Cloud CLI
-### Install and update with Brew on MacOS (recommended)
+dbt Cloud natively supports developing using a command line (CLI), empowering team members to contribute with enhanced flexibility and collaboration. The dbt Cloud CLI allows you to run dbt commands against your dbt Cloud development environment from your local command line.

-1. Install the dbt Cloud CLI:
+dbt commands are run against dbt Cloud's infrastructure and benefit from:
+
+* Secure credential storage in the dbt Cloud platform.
+* Automatic deferral of build artifacts to your Cloud project's production environment.
+* Speedier, lower-cost builds.
+* Support for dbt Mesh ([cross-project `ref`](/docs/collaborate/govern/project-dependencies)).
+* Significant platform improvements, to be released over the coming months.
+
+
+## Install dbt Cloud CLI
+
+You can install the dbt Cloud CLI on the command line by using one of these methods:
+
+
+
+
+
+Before you begin, make sure you have [Homebrew installed](http://brew.sh/) in your code editor or command line terminal. If your operating system runs into path conflicts, refer to the [FAQs](#faqs).
+
+1. Run the following command to verify that there is no conflict with a dbt Core installation on your system:
+
+```bash
+which dbt
+```
+ - This should return a `dbt not found` error. 
If the dbt help text appears, use `pip uninstall dbt` to deactivate dbt Core from your machine.
+
+2. Install the dbt Cloud CLI with Homebrew:

 ```bash
 brew tap dbt-labs/dbt-cli
 brew install dbt-cloud-cli
 ```

-2. Verify the installation by requesting your homebrew installation path (not your dbt core installs). If the `which dbt` command returns nothing, then you should modify your PATH in `~.zshrc` or create an alias.
+3. Verify the installation by running `dbt --help` from the command line. If the help text doesn't indicate that you're using the dbt Cloud CLI, make sure you've deactivated your pyenv or venv and don't have a version of dbt globally installed.
+
+* You no longer need to use the `dbt deps` command, which was previously required.
+
+
+
+
+If your operating system runs into path conflicts, refer to the [FAQs](#faqs).
+
+1. Download the latest Windows release for your platform from [GitHub](https://github.com/dbt-labs/dbt-cli/releases).
+
+2. Extract the `dbt.exe` executable into the same folder as your dbt project.
+
+:::info
+
+Advanced users can configure multiple projects to use the same dbt Cloud CLI by placing the executable in the Program Files folder and [adding it to their Windows PATH environment variable](https://medium.com/@kevinmarkvi/how-to-add-executables-to-your-path-in-windows-5ffa4ce61a53).
+
+Note that if you are using VS Code, you'll need to restart it to pick up modified environment variables.
+:::
+
+3. Verify the installation by running `./dbt --help` from the command line. If the help text doesn't indicate that you're using the dbt Cloud CLI, make sure you've deactivated your pyenv or venv and don't have a version of dbt globally installed.
+
+* You no longer need to use the `dbt deps` command, which was previously required.
+
+
+
+
+Refer to the [FAQs](#faqs) if your operating system runs into path conflicts.
+
+1. Download the latest Linux release for your platform from [GitHub](https://github.com/dbt-labs/dbt-cli/releases) (pick the file based on your CPU architecture).
+
+2. Extract the `dbt-cloud-cli` binary to the same folder as your dbt project.

 ```bash
-which dbt
-dbt --help
+tar -xf dbt_0.29.9_linux_amd64.tar.gz
+./dbt --version
 ```

-### Manually install (Windows and Linux)
+:::info

-1. Download the latest release for your platform from [GitHub](https://github.com/dbt-labs/dbt-cli/releases).
-2. Add the `dbt` executable to your path.
-3. Move to a directory with a dbt project, and create a `dbt_cloud.yml` file containing your `project-id` from dbt Cloud.
-4. Invoke `dbt --help` from your terminal to see a list of supported commands.
+Advanced users can configure multiple projects to use the same Cloud CLI executable by adding it to their PATH environment variable in their shell profile.

-#### Updating your dbt Cloud installation (Windows + Linux)
+:::

-Follow the same process in [Installing dbt Cloud CLI](#manually-install-windows-only) and replace the existing `dbt` executable with the new one. You should not have to go through the security steps again.
+3. Verify the installation by running `./dbt --help` from the command line. If the help text doesn't indicate that you're using the dbt Cloud CLI, make sure you've deactivated your pyenv or venv and don't have a version of dbt globally installed.

-## Setting up the CLI
+* You no longer need to use the `dbt deps` command, which was previously required.

-The following instructions are for setting up the dbt Cloud CLI.

-1. 
Ensure that you have created a project in [dbt Cloud](https://cloud.getdbt.com/).
+
-2. Ensure that your personal [development credentials](https://cloud.getdbt.com/settings/profile/credentials) are set on the project.
+## Update dbt Cloud CLI

-3. Navigate to [your profile](https://cloud.getdbt.com/settings/profile) and enable the **Beta** flag under **Experimental Features.**
+The following instructions explain how to update the dbt Cloud CLI to the latest version depending on your operating system.

-4. Create an environment variable with your [dbt Cloud API key](https://cloud.getdbt.com/settings/profile#api-access):
+
+
+

-```bash
-vi ~/.zshrc
+To update the dbt Cloud CLI, run `brew upgrade dbt-cloud-cli`.

-# dbt Cloud CLI
-export DBT_CLOUD_API_KEY="1234" # Replace "1234" with your API key
-```
+
+
+To update, follow the same process explained in [Install manually (Windows)](/docs/cloud/cloud-cli-installation?install=windows#install-dbt-cloud-cli) and replace the existing `dbt.exe` executable with the new one.
+
+

-5. Load the new environment variable. Note: You may need to reactivate your Python virtual environment after sourcing your shell's dot file. Alternatively, restart your shell instead of sourcing the shell's dot file
+
+## Configure the dbt Cloud CLI
+
+After installation, you can configure the dbt Cloud CLI for your dbt Cloud project and use it to run [dbt commands](/reference/dbt-commands) similar to dbt Core. For example, you can execute the following command to compile a project using dbt Cloud:

 ```bash
-source ~/.zshrc
 dbt compile
 ```

-6. Navigate to a dbt project
+**Prerequisites**
+
+- You must set up a project in dbt Cloud.
+- You must have your [personal development credentials](/docs/dbt-cloud-environments#set-developer-credentials) set for that project. The dbt Cloud CLI will use these credentials, stored securely in dbt Cloud, to communicate with your data platform.
+- You must [enroll](/docs/dbt-versions/experimental-features) in the dbt Cloud beta features.
+  - To enroll, navigate to your **Profile Settings** and enable the **Beta** flag under **Experimental Features**.
+
+Once you install the dbt Cloud CLI, you need to configure it to connect to a dbt Cloud project.
+
+1. Ensure you meet the prerequisites above.
+2. Create an environment variable with your [dbt Cloud API key](/docs/dbt-cloud-apis/user-tokens):
+   - On MacOS, Linux, or Windows, add an environment variable:
+
+     ```bash
+     export DBT_CLOUD_API_KEY="1234" # Replace 1234 with your API key
+     ```
+
+   - In PowerShell, add an environment variable (for example, `$env:DBT_CLOUD_API_KEY = "1234"`):
+     - Note that this variable resets if you restart your shell. To add an environment variable permanently, add a system environment variable in your platform.
+
+3. Navigate to a dbt project in your terminal:

 ```bash
 cd ~/dbt-projects/jaffle_shop
 ```

-7. Create a `dbt_cloud.yml` in the root project directory. The file is required to have a `project-id` field with a valid [project ID](#glossary). Enter the following commands:
+4. In your `dbt_project.yml` file, ensure there is a section titled `dbt-cloud`. This section is required to have a `project-id` field with a valid project ID.

-```bash
-pwd # Input
-/Users/user/dbt-projects/jaffle_shop # Output
-```

+```yaml
+# dbt_project.yml
+name:

-```bash
-echo "project-id: ''" > dbt_cloud.yml # Input
-```

+version:
+... 
-```bash
-cat dbt_cloud.yml # Input
-project-id: '123456' # Output
+dbt-cloud:
+  project-id: PROJECT_ID
 ```

-You can find your project ID by selecting your project and clicking on **Develop** in the navigation bar. Your project ID is the number in the URL: https://cloud.getdbt.com/develop/26228/projects/PROJECT_ID.
+- To find your project ID, go to **Develop** in the navigation menu. Select the dbt Cloud project URL, such as `https://cloud.getdbt.com/develop/26228/projects/123456`, where the project ID is `123456`.

-If `dbt_cloud.yml` already exists, edit the file, and verify the project ID field uses a valid project ID.

-#### Upgrade the CLI with Brew
+## Use the dbt Cloud CLI

-```bash
-brew update
-brew upgrade dbt-cloud-cli
-```
+The dbt Cloud CLI shares the same set of commands as dbt Core. When you invoke a dbt command, that command is sent to dbt Cloud for processing.
+
+The dbt Cloud CLI supports [project dependencies](/docs/collaborate/govern/project-dependencies), which is an exciting way to depend on another project using the metadata service in dbt Cloud. It instantly resolves references (or `ref`) to public models defined in other projects. You don't need to execute or analyze these upstream models yourself. Instead, you treat them as an API that returns a dataset.
+
+Share feedback or request features you'd like to see on the [dbt community Slack](https://getdbt.slack.com/archives/C05M77P54FL).
+
+
+## FAQs
+
+ +What's the difference between the dbt Cloud CLI and dbt Core? +The dbt Cloud CLI and dbt Core, an open-source project, are both command line tools that enable you to run dbt commands. The key distinction is the dbt Cloud CLI is tailored for dbt Cloud's infrastructure and integrates with all its features. + +
-## Using dbt Cloud CLI +
+
How do I solve for path conflicts?
For compatibility, both the dbt Cloud CLI and dbt Core are invoked by running `dbt`. This can create path conflicts if your operating system selects one over the other based on your $PATH environment variable settings.

-**Coming soon**
If you have dbt Core installed locally, ensure that you deactivate your Python environment or uninstall it using `pip uninstall dbt` before proceeding. Alternatively, advanced users can modify the $PATH environment variable to correctly point to the dbt Cloud CLI binary so that they can use both the dbt Cloud CLI and dbt Core together.

-## Glossary
You can always uninstall the Cloud CLI to return to using dbt Core.
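As a quick, hedged way to see which installation your shell resolves first (these are the same commands used in the install steps above):

```bash
# Show which dbt executable is first on your $PATH
which dbt

# The help text indicates whether it's the dbt Cloud CLI or dbt Core
dbt --help
```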
-- **dbt cloud API key:** Your API key found by navigating to the **gear icon**, clicking **Profile Settings**, and scrolling down to **API**. -- **Project ID:** The ID of the dbt project you're working with. Can be retrieved from the dbt Cloud URL after a project has been selected, for example, `https://cloud.getdbt.com/deploy/{accountID}/projects/{projectID}` -- **Development credentials:** Your personal warehouse credentials for the project you’re working with. They can be set by selecting the project and entering them in dbt Cloud. Navigate to the **gear icon**, click **Profile Settings**, and click **Credentials** from the left-side menu. diff --git a/website/docs/docs/cloud/connect-data-platform/about-connections.md b/website/docs/docs/cloud/connect-data-platform/about-connections.md index 65bfac3a90d..5d5a903b346 100644 --- a/website/docs/docs/cloud/connect-data-platform/about-connections.md +++ b/website/docs/docs/cloud/connect-data-platform/about-connections.md @@ -13,6 +13,10 @@ dbt Cloud can connect with a variety of data platform providers including: - [Snowflake](/docs/cloud/connect-data-platform/connect-snowflake) - [Starburst or Trino](/docs/cloud/connect-data-platform/connect-starburst-trino) +import MSCallout from '/snippets/_microsoft-adapters-soon.md'; + + + You can connect to your database in dbt Cloud by clicking the gear in the top right and selecting **Account Settings**. From the Account Settings page, click **+ New Project**. diff --git a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md index 8ffd83ef00e..6a86f1aa14b 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/lint-format.md @@ -45,7 +45,11 @@ With the dbt Cloud IDE, you can seamlessly use [SQLFluff](https://sqlfluff.com/) - Works with Jinja and SQL, - Comes with built-in [linting rules](https://docs.sqlfluff.com/en/stable/rules.html). You can also [customize](#customize-linting) your own linting rules. - Empowers you to [enable linting](#enable-linting) with options like **Lint** (displays linting errors and recommends actions) or **Fix** (auto-fixes errors in the IDE). -- Displays a **Code Quality** tab to view code errors, and provides code quality visibility and management. +- Displays a **Code Quality** tab to view code errors, and provides code quality visibility and management. + +:::info Ephemeral models not supported +Linting doesn't support ephemeral models in dbt v1.5 and lower. Refer to the [FAQs](#faqs) for more info. +::: ### Enable linting @@ -223,6 +227,12 @@ Currently, running SQLFluff commands from the terminal isn't supported. Make sure you're on a development branch. Formatting or Linting isn't available on "main" or "read-only" branches. +
+
Why is there inconsistent SQLFluff behavior when running outside the dbt Cloud IDE (such as a GitHub Action)?
— Double-check that your SQLFluff version matches the one in the dbt Cloud IDE (found in the **Code Quality** tab after a lint operation).

+
— If your lint operation passes despite clear rule violations, confirm that the models you're linting aren't ephemeral models. Linting doesn't support ephemeral models in dbt v1.5 and lower.
+

 ## Related docs

 - [User interface](/docs/cloud/dbt-cloud-ide/ide-user-interface)
 - [Tips and tricks](/docs/cloud/dbt-cloud-ide/dbt-cloud-tips)

diff --git a/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md b/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md index 516a340c951..73de602e2ba 100644
--- a/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md
+++ b/website/docs/docs/cloud/manage-access/set-up-bigquery-oauth.md
@@ -1,6 +1,6 @@
 ---
 title: "Set up BigQuery OAuth"
-description: "Learn how dbt Cloud administrators can use licenses and seats to control access in a dbt Cloud account."
+description: "Learn how dbt Cloud administrators can use BigQuery OAuth to control access in a dbt Cloud account."
 id: "set-up-bigquery-oauth"
 ---

diff --git a/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md b/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md new file mode 100644 index 00000000000..679133b7844
--- /dev/null
+++ b/website/docs/docs/cloud/manage-access/set-up-databricks-oauth.md
@@ -0,0 +1,77 @@
+---
+title: "Set up Databricks OAuth"
+description: "Learn how dbt Cloud administrators can use Databricks OAuth to control access in a dbt Cloud account."
+id: "set-up-databricks-oauth"
+---
+
+:::info Enterprise Feature
+
+This guide describes a feature of the dbt Cloud Enterprise plan. If you're interested in learning more about an Enterprise plan, contact us at sales@getdbt.com.
+
+:::
+
+dbt Cloud supports developer OAuth ([OAuth for partner solutions](https://docs.databricks.com/en/integrations/manage-oauth.html)) with Databricks, providing an additional layer of security for dbt enterprise users. When you enable Databricks OAuth for a dbt Cloud project, all dbt Cloud developers must authenticate with Databricks in order to use the dbt Cloud IDE. The project's deployment environments will still leverage the Databricks authentication method set at the environment level.
+
+:::tip Beta Feature
+
+Databricks OAuth support in dbt Cloud is a [beta feature](/docs/dbt-versions/product-lifecycles#dbt-cloud) and subject to change without notification. More updates to this feature coming soon.
+
+Current limitations:
+- Databricks OAuth applications are in public preview
+- The current experience requires the IDE to be restarted every hour (access tokens expire after 1 hour - [workaround](https://docs.databricks.com/en/integrations/manage-oauth.html#override-the-default-token-lifetime-policy-for-dbt-core-power-bi-or-tableau-desktop))
+
+:::
+
+### Configure Databricks OAuth (Databricks admin)
+
+To get started, you will need to [add dbt as an OAuth application](https://docs.databricks.com/en/integrations/configure-oauth-dbt.html) with Databricks, in 2 steps:
+
+1. From your terminal, [authenticate to the Databricks Account API](https://docs.databricks.com/en/integrations/configure-oauth-dbt.html#authenticate-to-the-account-api) with the Databricks CLI. You authenticate using:
+   - OAuth for users ([prerequisites](https://docs.databricks.com/en/dev-tools/auth.html#oauth-u2m-auth))
+   - OAuth for service principals ([prerequisites](https://docs.databricks.com/en/dev-tools/auth.html#oauth-m2m-auth))
+   - Username and password (must be account admin)
+2. 
In the same terminal, **add dbt Cloud as an OAuth application** using `curl` and the [OAuth Custom App Integration API](https://docs.databricks.com/api/account/customappintegration/create).

For the second step, you can use this example `curl` to authenticate with your username and password, replacing values as defined in the following table:

```shell
curl -u USERNAME:PASSWORD https://accounts.cloud.databricks.com/api/2.0/accounts/ACCOUNT_ID/oauth2/custom-app-integrations -d '{"redirect_urls": ["https://YOUR_ACCESS_URL", "https://YOUR_ACCESS_URL/complete/databricks"], "confidential": true, "name": "NAME", "scopes": ["sql", "offline_access"]}'
```

These parameters and descriptions will help you authenticate with your username and password:

| Parameter | Description |
| ------ | ----- |
| **USERNAME** | Your Databricks username (account admin level) |
| **PASSWORD** | Your Databricks password (account admin level) |
| **ACCOUNT_ID** | Your Databricks [account ID](https://docs.databricks.com/en/administration-guide/account-settings/index.html#locate-your-account-id) |
| **YOUR_ACCESS_URL** | The [appropriate Access URL](/docs/cloud/about-cloud/regions-ip-addresses) for your dbt Cloud account region and plan |
| **NAME** | The integration name (for example, `databricks-dbt-cloud`) |

After running the `curl`, you'll get an API response that includes the `client_id` and `client_secret` required in the following section. At this time, this is the only way to retrieve the secret. If you lose the secret, then the integration needs to be [deleted](https://docs.databricks.com/api/account/customappintegration/delete) and re-created.


### Configure the Connection in dbt Cloud (dbt Cloud project admin)

Now that you have an OAuth app set up in Databricks, you'll need to add the client ID and secret to dbt Cloud. To do so:
 - Go to Settings by clicking the gear in the top right.
 - On the left, select **Projects** under **Account Settings**.
 - Choose your project from the list.
 - Select **Connection** to edit the connection details.
 - Add the `OAuth Client ID` and `OAuth Client Secret` from the Databricks OAuth app under the **Optional Settings** section.


### Authenticating to Databricks (dbt Cloud IDE developer)

Once the Databricks connection via OAuth is set up for a dbt Cloud project, each dbt Cloud user will need to authenticate with Databricks in order to use the IDE. To do so:

- Click the gear icon at the top right and select **Profile settings**.
- Select **Credentials**.
- Choose your project from the list.
- Select `OAuth` as the authentication method, and click **Save**.
- Finalize by clicking the **Connect Databricks Account** button.


You will then be redirected to Databricks and asked to approve the connection. This redirects you back to dbt Cloud. You should now be an authenticated Databricks user, ready to use the dbt Cloud IDE.

diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md index a4c914259ef..683ce754b8a 100644
--- a/website/docs/docs/collaborate/explore-projects.md
+++ b/website/docs/docs/collaborate/explore-projects.md
@@ -1,25 +1,14 @@
 ---
-title: "Explore your dbt projects (beta)"
-sidebar_label: "Explore dbt projects (beta)"
+title: "Explore your dbt projects"
+sidebar_label: "Explore dbt projects"
 description: "Learn about dbt Explorer and how to interact with it to understand, improve, and leverage your data pipelines." 
--- -With dbt Explorer, you can view your project's [resources](/docs/build/projects) (such as models, tests, and metrics) and their lineage to gain a better understanding of its latest production state. Navigate and manage your projects within dbt Cloud to help your data consumers discover and leverage your dbt resources. +With dbt Explorer, you can view your project's [resources](/docs/build/projects) (such as models, tests, and metrics) and their lineage to gain a better understanding of its latest production state. Navigate and manage your projects within dbt Cloud to help you and other data developers, analysts, and consumers discover and leverage your dbt resources. -To display the details about your [project state](/docs/dbt-cloud-apis/project-state), dbt Explorer utilizes the metadata provided through the [Discovery API](/docs/dbt-cloud-apis/discovery-api). The metadata that's available on your project depends on the [deployment environment](/docs/deploy/deploy-environments) you've designated as _production_ in your dbt Cloud project. dbt Explorer automatically retrieves the metadata updates after each job run in the production deployment environment so it will always have the latest state on your project. The metadata it displays depends on the [commands executed by the jobs](/docs/deploy/job-commands). For instance: +:::tip Public preview -- To update model details or results, you must run `dbt run` or `dbt build` on a given model within a job in the environment. -- To view catalog statistics and columns, you must run `dbt docs generate` within a job in the environment. -- To view test results, you must run `dbt test` or `dbt build` within a job in the environment. -- To view source freshness check results, you must run `dbt source freshness` within a job in the environment. - -The need to run these commands will diminish, and richer, more timely metadata will become available as the Discovery API and its underlying platform evolve. - -:::tip Join the beta - -dbt Explorer is a [beta feature](/docs/dbt-versions/product-lifecycles#dbt-cloud) and subject to change without notification. More updates to this feature coming soon. - -If you’re interested in joining the beta, please contact your account team. +Try dbt Explorer! It's available in [Public Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud) as of October 17, 2023 for dbt Cloud customers. More updates coming soon. ::: @@ -28,115 +17,218 @@ If you’re interested in joining the beta, please contact your account team. - You have a [multi-tenant](/docs/cloud/about-cloud/tenancy#multi-tenant) or AWS single-tenant dbt Cloud account on the [Team or Enterprise plan](https://www.getdbt.com/pricing/). - You have set up a [production deployment environment](/docs/deploy/deploy-environments#set-as-production-environment-beta) for each project you want to explore. - There has been at least one successful job run in the production deployment environment. -- You are on the dbt Explorer page. This requires the feature to be enabled for your account. - - To go to the page, select **Explore (Beta)** from the top navigation bar in dbt Cloud. +- You are on the dbt Explorer page. To do this, select **Explore** from the top navigation bar in dbt Cloud. + + +## Generate metadata + +dbt Explorer uses the metadata provided by the [Discovery API](/docs/dbt-cloud-apis/discovery-api) to display the details about [the state of your project](/docs/dbt-cloud-apis/project-state). 
The metadata that's available depends on the [deployment environment](/docs/deploy/deploy-environments) you've designated as _production_ in your dbt Cloud project. dbt Explorer automatically retrieves the metadata updates after each job run in the production deployment environment, so it always has the latest results for your project.
+
+To view a resource and its metadata, you must define the resource in your project and run a job in the production environment. The resulting metadata depends on the [commands executed by the jobs](/docs/deploy/job-commands).
+
+For a richer experience with dbt Explorer, you must:
+
+- Run [dbt run](/reference/commands/run) or [dbt build](/reference/commands/build) on a given model within a job in the environment to update model details or results.
+- Run [dbt docs generate](/reference/commands/cmd-docs) within a job in the environment to view catalog statistics and columns for models, sources, and snapshots.
+- Run [dbt test](/reference/commands/test) or [dbt build](/reference/commands/build) within a job in the environment to view test results.
+- Run [dbt source freshness](/reference/commands/source#dbt-source-freshness) within a job in the environment to view source freshness data.
+- Run [dbt snapshot](/reference/commands/snapshot) or [dbt build](/reference/commands/build) within a job in the environment to view snapshot details.
+
+Richer and more timely metadata will become available as dbt, the Discovery API, and the underlying dbt Cloud platform evolve.

-## Explore the project's lineage
+## Explore your project's lineage graph {#project-lineage}

-dbt Explorer provides a visualization of your project's DAG that you can interact with. To start, select **Overview** in the left sidebar and click the **Explore Lineage** button on the main (center) section of the page.
+dbt Explorer provides a visualization of your project's DAG that you can interact with. To access the project's full lineage graph, select **Overview** in the left sidebar and click the **Explore Lineage** button on the main (center) section of the page.

-If you don't see the lineage graph immediately, click **Render Lineage**. It can take some time for the graph to render depending on the size of your project and your computer's available memory. The graph of very large projects might not render so, instead, you can select a subset of nodes by using selectors.
+If you don't see the project lineage graph immediately, click **Render Lineage**. It can take some time for the graph to render depending on the size of your project and your computer's available memory. The graph of very large projects might not render, so you can select a subset of nodes by using selectors instead.

-The nodes in the lineage graph represent the project's resources and the edges represent the relationships between the nodes. Resources like tests and macros display in the lineage within their [resource details pages](#view-resource-details) but not within the overall project lineage graph. Nodes are color-coded and include iconography according to their resource type.
+The nodes in the lineage graph represent the project's resources and the edges represent the relationships between the nodes. Nodes are color-coded and include iconography according to their resource type.

-To interact with the lineage graph, you can:
+To explore the lineage graphs of tests and macros, view [their resource details pages](#view-resource-details). 
By default, dbt Explorer excludes these resources from the full lineage graph unless a search query returns them as results. + +To interact with the full lineage graph, you can: - Hover over any item in the graph to display the resource’s name and type. - Zoom in and out on the graph by mouse-scrolling. -- Grab and move the graph. -- Click on a resource to highlight its relationship with other resources in your project. -- [Search and select specific resources](#search-resources) or a subset of the DAG using selectors and lineage (for example, `+[YOUR_RESOURCE_NAME]` displays all nodes upstream of a particular resource). -- [View resource details](#view-resource-details) by selecting a node in the graph (double-clicking). +- Grab and move the graph and the nodes. +- Select a resource to highlight its relationship with other resources in your project. A panel opens on the graph’s right-hand side that displays a high-level summary of the resource’s details. The side panel includes a **General** tab for information like description, materialized type, and other details. + - Click the Share icon in the side panel to copy the graph’s link to your clipboard. + - Click the View Resource icon in the side panel to [view the resource details](#view-resource-details). +- [Search and select specific resources](#search-resources) or a subset of the DAG using selectors and graph operators. For example: + - `+[RESOURCE_NAME]` — Displays all parent nodes of the resource + - `resource_type:model [RESOURCE_NAME]` — Displays all models matching the name search +- [View resource details](#view-resource-details) by selecting a node (double-clicking) in the graph. +- Click the List view icon in the graph's upper right corner to return to the main **Explore** page. - + ## Search for resources {#search-resources} -With the search bar (on the upper left of the page or in a lineage graph), you can search using keywords or selectors (also known as *selector methods*). The resources that match your search criteria will display as a table in the main section of the page. When you select a resource in the table, its [resource details page](#view-resource-details) will display. +With the search bar (on the upper left corner of the page or in a lineage graph), you can search with keywords or by using [node selection syntax](/reference/node-selection/syntax). The resources that match your search criteria will display as a lineage graph and a table in the main section of the page. + +Select a node (single-click) in the lineage graph to highlight its relationship with your other search results and to display which project contains the resource's definition. When you choose a node (double-click) in the lineage graph or when you select a resource in the table, dbt Explorer displays the [resource's details page](#view-resource-details). -When using keyword search, dbt Explorer will search through your resources using metadata such as resource type, resource name, column name, source name, tags, schema, database, version, alias/identifier, and package name. +### Search with keywords +When searching with keywords, dbt Explorer searches through your resource metadata (such as resource type, resource name, column name, source name, tags, schema, database, version, alias/identifier, and package name) and returns any matches. -When using selector search, you can utilize the dbt node selection syntax including set and graph operators (like `+`). 
To learn more about selectors, refer to [Syntax overview](/reference/node-selection/syntax), [Graph operators](/reference/node-selection/graph-operators), and [Set operators](/reference/node-selection/set-operators).
+### Search with selector methods

-Below are the selection methods currently available in dbt Explorer. For more information about each of them, refer to [Methods](/reference/node-selection/methods).
+You can search with [selector methods](/reference/node-selection/methods). Below are the selectors currently available in dbt Explorer:

-- **fqn:** — Find resources by [file or fully qualified name](/reference/node-selection/methods#the-file-or-fqn-method).
-- **source:** — Find resources by a specified [source](/reference/node-selection/methods#the-source-method).
-- **resource_type:** — Find resources by their [type](/reference/node-selection/methods#the-resource_type-method).
-- **package:** — Find resources by the [dbt package](/reference/node-selection/methods#the-package-method) that defines them.
-- **tag:** — Find resources by a specified [tag](/reference/node-selection/methods#the-tag-method).
+- `fqn:` — Find resources by [file or fully qualified name](/reference/node-selection/methods#the-fqn-method). This selector is the search bar's default, so you don't need to add `fqn:` before the search term.
+- `source:` — Find resources by a specified [source](/reference/node-selection/methods#the-source-method).
+- `resource_type:` — Find resources by their [type](/reference/node-selection/methods#the-resource_type-method).
+- `package:` — Find resources by the [dbt package](/reference/node-selection/methods#the-package-method) that defines them.
+- `tag:` — Find resources by a specified [tag](/reference/node-selection/methods#the-tag-method).

-- **group:** — Find models defined within a specified [group](/reference/node-selection/methods#the-group-method).
-- **access:** — Find models based on their [access](/reference/node-selection/methods#the-access-method) property.
+- `group:` — Find models defined within a specified [group](/reference/node-selection/methods#the-group-method).
+- `access:` — Find models based on their [access](/reference/node-selection/methods#the-access-method) property.

+### Search with graph operators
+
+You can use [graph operators](/reference/node-selection/graph-operators) on keywords or selector methods. For example, `+orders` returns all the parents of `orders`.
+
+### Search with set operators
+
+You can use multiple selector methods in your search query with [set operators](/reference/node-selection/set-operators). A space implies a union set operator, and a comma implies an intersection. For example:
+- `resource_type:metric,tag:nightly` — Returns metrics with the tag `nightly`
+- `+snowplow_sessions +fct_orders` — Returns resources that are parent nodes of either `snowplow_sessions` or `fct_orders`

-## Use the catalog sidebar
+### Search with both keywords and selector methods

-By default, the catalog sidebar lists all your project's resources. Select any resource type in the list and all those resources in the project will display as a table in the main section of the page. For a description on the different resource types (like models, metrics, and so on), refer to [About dbt projects](https://docs.getdbt.com/docs/build/projects).
+You can use keyword search to highlight results that are filtered by the selector search. 
For example, if you don't have a resource called `customers`, then `resource_type:metric customers` returns all the metrics in your project and highlights those that are related to the term `customers` in the name, in a column, tagged as customers, and so on.
+
+When searching in this way, the selectors behave as filters that narrow the search, and the keywords find matches within those filtered results.
+
+

+## Browse with the sidebar
+
+By default, the catalog sidebar lists all your project's resources. Select any resource type in the list and all those resources in the project will display as a table in the main section of the page. For a description on the different resource types (like models, metrics, and so on), refer to [About dbt projects](/docs/build/projects).

 To browse using a different view, you can choose one of these options from the **View by** dropdown:

 - **Resources** (default) — All resources in the project organized by type.
-- **Packages** — All resources in the project organized by the project in which they are defined.
+- **Packages** — All resources in the project organized by the dbt package in which they are defined.
 - **File Tree** — All resources in the project organized by the file in which they are defined. This mirrors the file tree in your dbt project repository.
-- **Database** — All resources in the project organized by the database and schema in which they are built. This mirrors your data platform structure.
+- **Database** — All resources in the project organized by the database and schema in which they are built. This mirrors your data platform's structure that represents the [applied state](/docs/dbt-cloud-apis/project-state) of your project.
+
+

-

+## View model versions
+
+If models in the project are versioned, you can see which [version of the model](/docs/collaborate/govern/model-versions) is being applied — `prerelease`, `latest`, and `old` — in the title of the model's details page and in the model list from the sidebar.

 ## View resource details {#view-resource-details}

-You can view the definition and latest run results of any resource in your project. To find a resource and view its details, you can interact with the lineage graph, use search, or browse the catalog. The details (metadata) available to you depends on the resource's type, its definition, and the [commands](/docs/deploy/job-commands) run within jobs in the production environment.
+You can view the definition and latest run results of any resource in your project. To find a resource and view its details, you can interact with the lineage graph, use search, or browse the catalog.

-

+The details (metadata) available to you depend on the resource's type, its definition, and the [commands](/docs/deploy/job-commands) that run within jobs in the production environment.
+

 ### Example of model details

 An example of the details you might get for a model:

-- **General** — The model's lineage graph that you can interact with.
-- **Code** — The source code and compiled code for the model.
-- **Columns** — The available columns in the model.
-- **Description** — A [description of the model](/docs/collaborate/documentation#adding-descriptions-to-your-project).
-- **Recent** — Information on the last time the model ran, how long it ran for, whether the run was successful, the job ID, and the run ID.
-- **Tests** — [Tests](/docs/build/tests) for the model. 
 -- **Details** — Key properties like the model’s relation name (for example, how it’s represented and how you can query it in the data platform: `database.schema.identifier`); model governance attributes like access, group, and if contracted; and more. -- **Relationships** — The nodes the model **Depends On** and is **Referenced by.** +- Status bar (below the page title) — Information on the last time the model ran, whether the run was successful, how the data is materialized, number of rows, and the size of the model. +- **General** tab includes: + - **Lineage** graph — The model’s lineage graph that you can interact with. The graph includes one parent node and one child node from the model. Click the Expand icon in the graph's upper right corner to view the model in full lineage graph mode. + - **Description** section — A [description of the model](/docs/collaborate/documentation#adding-descriptions-to-your-project). + - **Recent** section — Information on the last time the model ran, how long it ran for, whether the run was successful, the job ID, and the run ID. + - **Tests** section — [Tests](/docs/build/tests) for the model. + - **Details** section — Key properties like the model’s relation name (for example, how it’s represented and how you can query it in the data platform: `database.schema.identifier`); model governance attributes like access, group, and if contracted; and more. + - **Relationships** section — The nodes the model **Depends On**, is **Referenced by**, and (if applicable) is **Used by** for projects that have declared the model's project as a dependency. +- **Code** tab — The source code and compiled code for the model. +- **Columns** tab — The available columns in the model. This tab also shows test results (if any) that you can select to view the test's details page. A :white_check_mark: denotes a passing test. + ### Example of exposure details An example of the details you might get for an exposure: -- **Status** — The status on data freshness and data quality. -- **Lineage** — The exposure’s lineage graph. -- **Description** — A description of the exposure. -- **Details** — Details like exposure type, maturity, owner information, and more. -- **Relationships** — The nodes the exposure **Depends On**. +- Status bar (below the page title) — Information on the last time the exposure was updated. +- **General** tab includes: + - **Status** section — The status on data freshness and data quality. + - **Lineage** graph — The exposure’s lineage graph. Click the Expand icon in the graph's upper right corner to view the exposure in full lineage graph mode. + - **Description** section — A description of the exposure. + - **Details** section — Details like exposure type, maturity, owner information, and more. + - **Relationships** section — The nodes the exposure **Depends On**. ### Example of test details An example of the details you might get for a test: -- **General** — The test’s lineage graph that you can interact with. -- **Code** — The source code and compiled code for the test. -- **Description** — A description of the test. -- **Recent** — Information on the last time the test ran, how long it ran for, whether the test passed, the job ID, and the run ID. -- **Details** — Details like schema, severity, package, and more. -- **Relationships** — The nodes the test **Depends On**. +- Status bar (below the page title) — Information on the last time the test ran, whether the test passed, test name, test target, and column name. 
 +- **General** tab includes: + - **Lineage** graph — The test’s lineage graph that you can interact with. The graph includes one parent node and one child node from the test resource. Click the Expand icon in the graph's upper right corner to view the test in full lineage graph mode. + - **Description** section — A description of the test. + - **Recent** section — Information on the last time the test ran, how long it ran for, whether the test passed, the job ID, and the run ID. + - **Details** section — Details like schema, severity, package, and more. + - **Relationships** section — The nodes the test **Depends On**. +- **Code** tab — The source code and compiled code for the test. + ### Example of source details An example of the details you might get for each source table within a source collection: -- **General** — The source’s lineage graph that you can interact with. -- **Columns** — The available columns in the source. -- **Description** — A description of the source. -- **Source freshness** — Information on whether refreshing the data was successful, the last time the source was loaded, the timestamp of when a run generated data, and the run ID. -- **Details** — Details like database, schema, and more. -- **Relationships** — A table that lists all the sources used with their freshness status, the timestamp of when freshness was last checked, and the timestamp of when the source was last loaded. \ No newline at end of file +- Status bar (below the page title) — Information on the last time the source was updated and the number of tables the source uses. +- **General** tab includes: + - **Lineage** graph — The source’s lineage graph that you can interact with. The graph includes one parent node and one child node from the source. Click the Expand icon in the graph's upper right corner to view the source in full lineage graph mode. + - **Description** section — A description of the source. + - **Source freshness** section — Information on whether refreshing the data was successful, the last time the source was loaded, the timestamp of when a run generated data, and the run ID. + - **Details** section — Details like database, schema, and more. + - **Relationships** section — A table that lists all the sources used with their freshness status, the timestamp of when freshness was last checked, and the timestamp of when the source was last loaded. +- **Columns** tab — The available columns in the source. This tab also shows test results (if any) that you can select to view the test's details page. A :white_check_mark: denotes a passing test. + +## About project-level lineage +You can also view all the different projects and public models in the account, where the public models are defined, and how they are used, to gain a better understanding of your cross-project resources. + +When viewing the resource-level lineage graph for a given project that uses cross-project references, you can see cross-project relationships represented in the DAG. The iconography is slightly different depending on whether you're viewing the lineage of an upstream producer project or a downstream consumer project. + +When viewing an upstream (parent) project that produces public models that are imported by downstream (child) projects, public models will have a counter icon in their upper right corner that indicates the number of projects that declare the current project as a dependency. Selecting that model expands the lineage to show the specific projects that are dependent on this model. 
 Projects show up in this counter if they declare the parent project as a dependency in their `dependencies.yml`, regardless of whether there's a direct `{{ ref() }}` against the public model (see the sketch at the end of this section). Selecting a project node from a public model opens the resource-level lineage graph for that project, which is subject to your permissions. + + + +When viewing a downstream (child) project that imports and refs public models from upstream (parent) projects, public models will show up in the lineage graph and display an icon on the graph edge that indicates what the relationship is to a model from another project. Hovering over this icon shows the specific dbt Cloud project that produces that model. Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, which is subject to your permissions. + + + + +### Explore the project-level lineage graph + +For cross-project collaboration, you can interact with the DAG in all the same ways as described in [Explore your project's lineage](#project-lineage), but you can also interact with it at the project level and view the details. + +To get a list view of all the projects, select the account name at the top of the **Explore** page near the navigation bar. This view includes a public model list, project list, and a search bar for project searches. You can also view the project-level lineage graph by clicking the Lineage view icon in the page's upper right corner. + +If you have permissions for a project in the account, you can view all public models used across the entire account. However, you can only view full public model details and private models if you have permissions for a project where the models are defined. + +From the project-level lineage graph, you can: + +- Click the Lineage view icon (in the graph’s upper right corner) to view the cross-project lineage graph. - Click the List view icon (in the graph’s upper right corner) to view the project list. + - Select a project from the **Projects** tab to switch to that project’s main **Explore** page. + - Select a model from the **Public Models** tab to view the [model’s details page](#view-resource-details). + - Perform searches on your projects with the search bar. +- Double-click a project node in the graph to switch to that particular project’s lineage graph. + +When you select a project node in the graph, a project details panel opens on the graph’s right-hand side where you can: + +- View counts of the resources defined in the project. +- View a list of its public models, if any. +- View a list of other projects that use the project, if any. +- Click **Open Project Lineage** to switch to the project’s lineage graph. +- Click the Share icon to copy the project panel link to your clipboard so you can share the graph with someone. 
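 As a reference for the dependency declaration described above, here's a minimal sketch of a downstream project's `dependencies.yml`; the project name `jaffle_finance` is a hypothetical upstream project: ```yml projects: - name: jaffle_finance # declaring the upstream project is enough to appear in its counter, even without a direct ref() ``` 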
+ + + +## Related content +- [Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions) +- [About model governance](/docs/collaborate/govern/about-model-governance) +- [What is data mesh?](https://www.getdbt.com/blog/what-is-data-mesh-the-definition-and-importance-of-data-mesh) blog diff --git a/website/docs/docs/collaborate/git-version-control.md b/website/docs/docs/collaborate/git-version-control.md index 4444f381bb5..3825cf5fa88 100644 --- a/website/docs/docs/collaborate/git-version-control.md +++ b/website/docs/docs/collaborate/git-version-control.md @@ -22,3 +22,4 @@ When you develop in the command line interface (CLI) or Cloud integrated develo - [Merge conflicts](/docs/collaborate/git/merge-conflicts) - [Connect to GitHub](/docs/cloud/git/connect-github) - [Connect to GitLab](/docs/cloud/git/connect-gitlab) +- [Connect to Azure DevOps](/docs/cloud/git/connect-azure-devops) diff --git a/website/docs/docs/collaborate/govern/model-access.md b/website/docs/docs/collaborate/govern/model-access.md index 64b70416a2f..765e833ac0c 100644 --- a/website/docs/docs/collaborate/govern/model-access.md +++ b/website/docs/docs/collaborate/govern/model-access.md @@ -25,7 +25,7 @@ The two concepts will be closely related, as we develop multi-project collaborat ## Related documentation * [`groups`](/docs/build/groups) -* [`access`](/reference/resource-properties/access) +* [`access`](/reference/resource-configs/access) ## Groups diff --git a/website/docs/docs/collaborate/govern/project-dependencies.md b/website/docs/docs/collaborate/govern/project-dependencies.md index 1dbc967e74e..7785e428678 100644 --- a/website/docs/docs/collaborate/govern/project-dependencies.md +++ b/website/docs/docs/collaborate/govern/project-dependencies.md @@ -100,3 +100,11 @@ There are a few cases where installing another internal project as a package can - Coordinated changes — In development, if you wanted to test the effects of a change to a public model in an upstream project (`jaffle_finance.monthly_revenue`) on a downstream model (`jaffle_marketing.roi_by_channel`) _before_ introducing changes to a staging or production environment, you can install the `jaffle_finance` package as a package within `jaffle_marketing`. The installation can point to a specific git branch, however, if you find yourself frequently needing to perform end-to-end testing across both projects, we recommend you re-examine if this represents a stable interface boundary. These are the exceptions, rather than the rule. Installing another team's project as a package adds complexity, latency, and risk of unnecessary costs. By defining clear interface boundaries across teams, by serving one team's public models as "APIs" to another, and by enabling practitioners to develop with a more narrowly-defined scope, we can enable more people to contribute, with more confidence, while requiring less context upfront. + +## FAQs + +
 +<summary>Can I define private packages in the dependencies.yml file?</summary> + +If you're using private packages with the [git token method](/docs/build/packages#git-token-method), you must define them in the `packages.yml` file instead of the `dependencies.yml` file. This is because `dependencies.yml` doesn't support conditional rendering (like Jinja-in-YAML), which the git token method relies on. A sketch follows this FAQ. +</detailsToggle> 
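 For reference, a minimal sketch of the git token method in `packages.yml`; the repository URL and the `DBT_ENV_SECRET_GIT_CREDENTIAL` environment variable name are illustrative assumptions: ```yml packages: # the env_var() call below is the Jinja-in-YAML that dependencies.yml can't render - git: "https://{{ env_var('DBT_ENV_SECRET_GIT_CREDENTIAL') }}@github.com/example-org/private-repo.git" revision: "1.0.0" ``` 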
diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md index 396634dc6e6..39d8ed8ab3f 100644 --- a/website/docs/docs/core/connect-data-platform/trino-setup.md +++ b/website/docs/docs/core/connect-data-platform/trino-setup.md @@ -83,7 +83,7 @@ The following profile fields are optional to set up. They let you configure your | Profile field | Example | Description | | ----------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------- | | `threads` | `8` | How many threads dbt should use (default is `1`) | -| `roles` | `system: analyst` | Catalog roles | +| `roles` | `system: analyst` | Catalog roles can be set under the optional `roles` parameter using the following format: `catalog: role`. | | `session_properties` | `query_max_run_time: 4h` | Sets Trino session properties used in the connection. Execute `SHOW SESSION` to see available options | | `prepared_statements_enabled` | `true` or `false` | Enable usage of Trino prepared statements (used in `dbt seed` commands) (default: `true`) | | `retries` | `10` | Configure how many times all database operation is retried when connection issues arise (default: `3`) | diff --git a/website/docs/docs/core/pip-install.md b/website/docs/docs/core/pip-install.md index a35ad5f0d77..44fac00e493 100644 --- a/website/docs/docs/core/pip-install.md +++ b/website/docs/docs/core/pip-install.md @@ -5,7 +5,7 @@ description: "You can use pip to install dbt Core and adapter plugins from the c You need to use `pip` to install dbt Core on Windows or Linux operating systems. You can use `pip` or [Homebrew](/docs/core/homebrew-install) for installing dbt Core on a MacOS. -You can install dbt Core and plugins using `pip` because they are Python modules distributed on [PyPI](https://pypi.org/project/dbt/). +You can install dbt Core and plugins using `pip` because they are Python modules distributed on [PyPI](https://pypi.org/project/dbt-core/). diff --git a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md index 02d26229794..5a8da0a3f48 100644 --- a/website/docs/docs/dbt-cloud-apis/sl-jdbc.md +++ b/website/docs/docs/dbt-cloud-apis/sl-jdbc.md @@ -5,7 +5,6 @@ description: "Integrate and use the JDBC API to query your metrics." tags: [Semantic Layer, API] --- - import LegacyInfo from '/snippets/_legacy-sl-callout.md'; @@ -59,11 +58,13 @@ jdbc:arrow-flight-sql://semantic-layer.cloud.getdbt.com:443?&environmentId=20233 ## Querying the API for metric metadata -The Semantic Layer JDBC API has built-in metadata calls which can provide a user with information about their metrics and dimensions. Here are some metadata commands and examples: +The Semantic Layer JDBC API has built-in metadata calls which can provide a user with information about their metrics and dimensions. + +Refer to the following tabs for metadata commands and examples: - + Use this query to fetch all defined metrics in your dbt project: @@ -74,7 +75,7 @@ select * from {{ ``` - + Use this query to fetch all dimensions for a metric. @@ -87,7 +88,7 @@ select * from {{ - + Use this query to fetch dimension values for one or multiple metrics and single dimension. @@ -100,7 +101,7 @@ semantic_layer.dimension_values(metrics=['food_order_amount'], group_by=['custom - + Use this query to fetch queryable granularities for a list of metrics. 
This API request allows you to only show the time granularities that make sense for the primary time dimension of the metrics (such as `metric_time`), but if you want queryable granularities for other time dimensions, you can use the `dimensions()` call, and find the column queryable_granularities. @@ -113,6 +114,9 @@ select * from {{ + + + @@ -144,9 +148,10 @@ select NAME, QUERYABLE_GRANULARITIES from {{ - + It may be useful in your application to expose the names of the time dimensions that represent `metric_time` or the common thread across all metrics. + You can first query the `metrics()` argument to fetch a list of measures, then use the `measures()` call which will return the name(s) of the time dimensions that make up metric time. ```bash @@ -167,12 +172,13 @@ To query metric values, here are the following parameters that are available: | `metrics` | The metric name as defined in your dbt metric configuration | `metrics=['revenue']` | Required | | `group_by` | Dimension names or entities to group by. We require a reference to the entity of the dimension (other than for the primary time dimension), which is pre-appended to the front of the dimension name with a double underscore. | `group_by=['user__country', 'metric_time']` | Optional | | `grain` | A parameter specific to any time dimension and changes the grain of the data from the default for the metric. | `group_by=[Dimension('metric_time')`
 `grain('week\|day\|month\|quarter\|year')]` | Optional | -| `where` | A where clause that allows you to filter on dimensions and entities using parameters - comes with `TimeDimension`, `Dimension`, and `Entity` objects. Granularity is required with `TimeDimension` | `"{{ where=Dimension('customer__country') }} = 'US')"` | Optional | +| `where` | A where clause that allows you to filter on dimensions and entities using parameters. This takes a filter list OR string. Inputs come as `Dimension` and `Entity` objects. Granularity is required if the `Dimension` is a time dimension. | `where="{{ Dimension('customer__country') }} = 'US'"` | Optional | | `limit` | Limit the data returned | `limit=10` | Optional | |`order` | Order the data returned | `order_by=['-order_gross_profit']` (remove `-` for ascending order) | Optional | | `compile` | If true, returns generated SQL for the data platform but does not execute | `compile=True` | Optional | + ## Note on time dimensions and `metric_time` You will notice that in the list of dimensions for all metrics, there is a dimension called `metric_time`. `Metric_time` is a reserved keyword for the measure-specific aggregation time dimensions. For any time-series metric, the `metric_time` keyword should always be available for use in queries. This is a common dimension across *all* metrics in a semantic graph. @@ -246,13 +252,13 @@ select * from {{ Where filters in API allow for a filter list or string. We recommend using the filter list for production applications as this format will realize all benefits from the where possible. -Where filters have the following components that you can use: +Where filters have a few objects that you can use: - `Dimension()` - This is used for any categorical or time dimensions. If used for a time dimension, granularity is required - `Dimension('metric_time').grain('week')` or `Dimension('customer__country')` -- `TimeDimension()` - This is used for all time dimensions and requires a granularity argument - `TimeDimension('metric_time', 'MONTH)` +- `Entity()` - Used for entities like primary and foreign keys - `Entity('order_id')` -- `Entity()` - This is used for entities like primary and foreign keys - `Entity('order_id')` +Note: If you prefer a more explicit path to create the `where` clause, you can optionally use the `TimeDimension` feature. This helps separate out categorical dimensions from time-related ones. The `TimeDimension` input takes the time dimension name and also requires granularity, like this: `TimeDimension('metric_time', 'MONTH')` (see the sketch below). 
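 To illustrate the `TimeDimension` object from the note above, here's a brief sketch; the metric name and date are reused from the surrounding examples and are illustrative only: ```bash select * from {{ semantic_layer.query(metrics=['food_order_amount'], group_by=[Dimension('metric_time').grain('month')], where="{{ TimeDimension('metric_time', 'MONTH') }} >= '2017-03-09'") }} ``` 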
 Use the following example to query using a `where` filter with the string format: @@ -261,7 +267,7 @@ Use the following example to query using a `where` filter with the string format select * from {{ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'], group_by=[Dimension('metric_time').grain('month'),'customer__customer_type'], -where="{{ TimeDimension('metric_time', 'MONTH') }} >= '2017-03-09' AND {{ Dimension('customer__customer_type' }} in ('new') AND {{ Entity('order_id') }} = 10") +where="{{ Dimension('metric_time').grain('month') }} >= '2017-03-09' AND {{ Dimension('customer__customer_type') }} in ('new') AND {{ Entity('order_id') }} = 10") }} ``` @@ -271,7 +277,7 @@ Use the following example to query using a `where` filter with a filter list for select * from {{ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'], group_by=[Dimension('metric_time').grain('month'),'customer__customer_type'], -where=[{{ TimeDimension('metric_time', 'MONTH')}} >= '2017-03-09', {{ Dimension('customer__customer_type' }} in ('new'), {{ Entity('order_id') }} = 10]) +where=[{{ Dimension('metric_time').grain('month') }} >= '2017-03-09', {{ Dimension('customer__customer_type') }} in ('new'), {{ Entity('order_id') }} = 10]) }} ``` @@ -287,6 +293,7 @@ semantic_layer.query(metrics=['food_order_amount', 'order_gross_profit'], order_by=['order_gross_profit']) }} ``` + ### Query with compile keyword Use the following example to query using a `compile` keyword: diff --git a/website/docs/docs/dbt-cloud-apis/user-tokens.md b/website/docs/docs/dbt-cloud-apis/user-tokens.md index e56d8b2f974..f0d694f5edd 100644 --- a/website/docs/docs/dbt-cloud-apis/user-tokens.md +++ b/website/docs/docs/dbt-cloud-apis/user-tokens.md @@ -13,7 +13,7 @@ permissions of the user the that they were created for. You can find your User API token in the Profile page under the `API Access` label. -<Lightbox src="/img/api-access-profile.png" title="API Access and User API token"/> +<Lightbox src="/img/api-access-profile.jpg" title="API Access and User API token"/> ## FAQs <FAQ path="API/rotate-token" /> <FAQ path="API/api-access" /> diff --git a/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/custom-branch-fix-rn.md b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/custom-branch-fix-rn.md new file mode 100644 index 00000000000..06550b7d863 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/custom-branch-fix-rn.md @@ -0,0 +1,14 @@ +--- +title: "Fix: Default behavior for CI job runs without a custom branch" +description: "October 2023: CI job runs now default to the main branch of the Git repository when a custom branch isn't set" +sidebar_label: "Fix: Default behavior for CI job runs without a custom branch" +tags: [Oct-2023, CI] +date: 2023-10-06 +sidebar_position: 08 +--- + +If you don't set a [custom branch](/docs/dbt-cloud-environments#custom-branch-behavior) for your dbt Cloud environment, it now defaults to the default branch of your Git repository (for example, `main`). Previously, [CI jobs](/docs/deploy/ci-jobs) would run for pull requests (PRs) that were opened against _any branch_ or updated with new commits if the **Custom Branch** option wasn't set. + +## Azure DevOps + +Your Git pull requests (PRs) might not trigger against your default branch if you're using Azure DevOps and the default branch isn't `main` or `master`. To resolve this, [set up a custom branch](/faqs/Environments/custom-branch-settings) with the branch you want to target. 
 diff --git a/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/explorer-public-preview-rn.md b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/explorer-public-preview-rn.md new file mode 100644 index 00000000000..ebf5add8d03 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/explorer-public-preview-rn.md @@ -0,0 +1,13 @@ +--- +title: "New: dbt Explorer Public Preview" +description: "October 2023: dbt Explorer is now available in Public Preview. You can use it to understand, improve, and leverage your dbt projects." +sidebar_label: "New: dbt Explorer Public Preview" +tags: [Oct-2023, Explorer] +date: 2023-10-13 +sidebar_position: 07 +--- + +On Oct 17, 2023, a Public Preview of dbt Explorer will become available to dbt Cloud customers. With dbt Explorer, you can view your project's resources (such as models, tests, and metrics) and their lineage — including interactive DAGs — to gain a better understanding of its latest production state. Navigate and manage your projects within dbt Cloud to help you and other data developers, analysts, and consumers discover and leverage your dbt resources. + +For details, refer to [Explore your dbt projects](/docs/collaborate/explore-projects). + diff --git a/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/native-retry-support-rn.md b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/native-retry-support-rn.md new file mode 100644 index 00000000000..20e56879940 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/native-retry-support-rn.md @@ -0,0 +1,15 @@ +--- +title: "Enhancement: Native support for the dbt retry command" +description: "October 2023: Rerun errored jobs from the start or from the failure point" +sidebar_label: "Enhancement: Support for dbt retry" +tags: [Oct-2023, Scheduler] +date: 2023-10-06 +sidebar_position: 10 +--- + +Previously in dbt Cloud, you could only rerun an errored job from the start, but now you can also rerun it from its point of failure. + +You can view which job failed to complete successfully, which command failed in the run step, and choose how to rerun it. To learn more, refer to [Retry jobs](/docs/deploy/retry-jobs). + + + diff --git a/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/product-docs-sept-rn.md b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/product-docs-sept-rn.md new file mode 100644 index 00000000000..e669b037d17 --- /dev/null +++ b/website/docs/docs/dbt-versions/release-notes/03-Oct-2023/product-docs-sept-rn.md @@ -0,0 +1,38 @@ +--- +title: "September 2023 product docs updates" +id: "product-docs-sept" +description: "September 2023: The Product docs team merged 107 PRs, made various updates to dbt Cloud and Core, such as GAing continuous integration jobs, Semantic Layer GraphQL API doc, a new community plugin, and more" +sidebar_label: "Update: Product docs changes" +tags: [Sept-2023, product-docs] +date: 2023-10-10 +sidebar_position: 09 +--- + +Hello from the dbt Docs team: @mirnawong1, @matthewshaver, @nghi-ly, and @runleonarun! First, we’d like to thank the 15 new community contributors to docs.getdbt.com. We merged [107 PRs](https://github.com/dbt-labs/docs.getdbt.com/pulls?q=is%3Apr+merged%3A2023-09-01..2023-09-31) in September. + +Here's what's new to [docs.getdbt.com](http://docs.getdbt.com/): + +* Migrated docs.getdbt.com from Netlify to Vercel. + +## ☁ Cloud projects +- Continuous integration jobs are now generally available and no longer in beta! 
 +- Added the [Postgres PrivateLink setup page](/docs/cloud/secure/postgres-privatelink). +- Published beta docs for [dbt Explorer](/docs/collaborate/explore-projects). +- Added a new Semantic Layer [GraphQL API doc](/docs/dbt-cloud-apis/sl-graphql) and updated the [integration docs](/docs/use-dbt-semantic-layer/avail-sl-integrations) to include Hex. Responded to dbt community feedback and clarified MetricFlow use cases for dbt Core and dbt Cloud. +- Added an [FAQ](/faqs/Git/git-migration) describing how to migrate from one git provider to another in dbt Cloud. +- Clarified an example and added a [troubleshooting section](/docs/cloud/connect-data-platform/connect-snowflake#troubleshooting) to Snowflake connection docs to address common errors and provide solutions. + + ## 🎯 Core projects + +- Deprecated dbt Core v1.0 and v1.1 from the docs. +- Added configuration instructions for the [AWS Glue](/docs/core/connect-data-platform/glue-setup) community plugin. +- Revised the dbt Core quickstart, making it easier to follow. Divided this guide into steps that align with the [other guides](/quickstarts/manual-install?step=1). + +## New 📚 Guides, ✏️ blog posts, and FAQs + +Added a [style guide template](/guides/best-practices/how-we-style/6-how-we-style-conclusion#style-guide-template) that you can copy & paste to make sure you adhere to best practices when styling dbt projects! + +## Upcoming changes + +Stay tuned for a flurry of releases in October and a filterable guides section that will make guides easier to find! diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index fb603e2864e..d10bc780fc2 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -27,6 +27,7 @@ To make CI job creation easier, many options on the **CI job** page are set to d - **Job Name** — Specify the name for this CI job. - **Environment** — By default, it’s set to the environment you created the CI job from. - **Triggered by pull requests** — By default, it’s enabled. Every time a developer opens up a pull request or pushes a commit to an existing pull request, this job will get triggered to run. + - **Run on Draft Pull Request** — Enable this option if you want to also trigger the job to run every time a developer opens up a draft pull request or pushes a commit to that draft pull request. 3. Options in the **Execution Settings** section: - **Commands** — By default, it includes the `dbt build --select state:modified+` command. This informs dbt Cloud to build only new or changed models and their downstream dependents. Importantly, state comparison can only happen when there is a deferred environment selected to compare state to. Click **Add command** to add more [commands](/docs/deploy/job-commands) that you want to be invoked when this job runs. @@ -62,13 +63,13 @@ If you're not using dbt Cloud’s native Git integration with [GitHub](/docs/cl 1. Set up a CI job with the [Create Job](/dbt-cloud/api-v2#/operations/Create%20Job) API endpoint using `"job_type": ci` or from the [dbt Cloud UI](#set-up-ci-jobs). -1. Call the [Trigger Job Run](/dbt-cloud/api-v2#/operations/Trigger%20Job%20Run) API endpoint to trigger the CI job. Provide the pull request (PR) ID to the payload using one of these fields, even if you're using a different Git provider (like Bitbucket): +1. Call the [Trigger Job Run](/dbt-cloud/api-v2#/operations/Trigger%20Job%20Run) API endpoint to trigger the CI job. 
 You must include these fields in the payload: + - Provide the pull request (PR) ID with one of these fields, even if you're using a different Git provider (like Bitbucket). This can make your code less human-readable but it will _not_ affect dbt functionality. - - `github_pull_request_id` - - `gitlab_merge_request_id` - - `azure_devops_pull_request_id`  - - This can make your code less human-readable but it will _not_ affect dbt functionality. + - `github_pull_request_id` + - `gitlab_merge_request_id` + - `azure_devops_pull_request_id`  + - Provide the `git_sha` or `git_branch` to target the correct commit or branch to run the job against (see the sketch following this section). ## Example pull requests @@ -94,10 +95,18 @@ If you're experiencing any issues, review some of the common questions and answe 
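 To make the trigger call concrete, here's a hypothetical sketch of the request described above; the account ID, job ID, PR number, and SHA are placeholder values, and the token is assumed to live in an environment variable: ```shell curl -X POST "https://cloud.getdbt.com/api/v2/accounts/123/jobs/456/run/" \ -H "Authorization: Token $DBT_CLOUD_API_TOKEN" \ -H "Content-Type: application/json" \ -d '{ "cause": "CI run triggered via API", "github_pull_request_id": 42, "git_sha": "abc123f" }' ``` 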
Temporary schemas aren't dropping
-
If your temporary schemas aren't dropping after a PR merges or closes, this typically indicates you have overridden the generate_schema_name macro and it isn't using dbt_cloud_pr_ as the prefix.



To resolve this, change your macro so that the temporary PR schema name contains the required prefix. For example: +
If your temporary schemas aren't dropping after a PR merges or closes, this typically indicates one of these issues: +
    +
  • You have overridden the generate_schema_name macro and it isn't using dbt_cloud_pr_ as the prefix.



    To resolve this, change your macro so that the temporary PR schema name contains the required prefix. For example:



    - • ✅ Temporary PR schema name contains the prefix dbt_cloud_pr_ (like dbt_cloud_pr_123_456_marketing)

    - • ❌ Temporary PR schema name doesn't contain the prefix dbt_cloud_pr_ (like marketing).

    + ✅ Temporary PR schema name contains the prefix dbt_cloud_pr_ (like dbt_cloud_pr_123_456_marketing).

    + ❌ Temporary PR schema name doesn't contain the prefix dbt_cloud_pr_ (like marketing).

    +
  • +
    +
  • + A macro is creating a schema but there are no dbt models writing to that schema. dbt Cloud doesn't drop temporary schemas that weren't written to as a result of running a dbt model. +
  • +
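 Following the FAQ above, here's a minimal sketch of a `generate_schema_name` override that keeps the required prefix. It mirrors dbt's default implementation, where `target.schema` resolves to the `dbt_cloud_pr_...` schema during CI runs; treat it as an illustration rather than part of the original change: ```sql {% macro generate_schema_name(custom_schema_name, node) -%} {%- set default_schema = target.schema -%} {%- if custom_schema_name is none -%} {#- keep target.schema (dbt_cloud_pr_... in CI) so temporary schemas can be dropped -#} {{ default_schema }} {%- else -%} {#- append the custom schema instead of replacing the prefix -#} {{ default_schema }}_{{ custom_schema_name | trim }} {%- endif -%} {%- endmacro %} ``` 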
@@ -153,6 +162,3 @@ If you're experiencing any issues, review some of the common questions and answe If you're on a Virtual Private dbt Enterprise plan using security features like ingress PrivateLink or IP Allowlisting, registering CI hooks may not be available and can cause the job to fail silently. - - - diff --git a/website/docs/docs/deploy/continuous-integration.md b/website/docs/docs/deploy/continuous-integration.md index cc856f97f22..0f87965aada 100644 --- a/website/docs/docs/deploy/continuous-integration.md +++ b/website/docs/docs/deploy/continuous-integration.md @@ -16,7 +16,7 @@ Using CI helps: ## How CI works -When you [set up CI jobs](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt Cloud listens for webhooks from your Git provider indicating that a new PR has been opened or updated with new commits. When dbt Cloud receives one of these webhooks, it enqueues a new run of the CI job. If you want CI checks to run on each new commit, you need to mark your PR as **Ready for review** in your Git provider — draft PRs _don't_ trigger CI jobs. +When you [set up CI jobs](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt Cloud listens for webhooks from your Git provider indicating that a new PR has been opened or updated with new commits. When dbt Cloud receives one of these webhooks, it enqueues a new run of the CI job. dbt Cloud builds and tests the models affected by the code change in a temporary schema, unique to the PR. This process ensures that the code builds without error and that it matches the expectations as defined by the project's dbt tests. The unique schema name follows the naming convention `dbt_cloud_pr__` (for example, `dbt_cloud_pr_1862_1704`) and can be found in the run details for the given run, as shown in the following image: diff --git a/website/docs/docs/deploy/deployment-overview.md b/website/docs/docs/deploy/deployment-overview.md index 5883ecaa3f1..553dca923a5 100644 --- a/website/docs/docs/deploy/deployment-overview.md +++ b/website/docs/docs/deploy/deployment-overview.md @@ -58,6 +58,12 @@ Learn how to use dbt Cloud's features to help your team ship timely and quality link="/docs/deploy/run-visibility" icon="dbt-bit"/> + + + +## Related content +- [Retry a failed run for a job](/dbt-cloud/api-v2#/operations/Retry%20a%20failed%20run%20for%20a%20job) API endpoint +- [Run visibility](/docs/deploy/run-visibility) +- [Jobs](/docs/deploy/jobs) +- [Job commands](/docs/deploy/job-commands) \ No newline at end of file diff --git a/website/docs/docs/supported-data-platforms.md b/website/docs/docs/supported-data-platforms.md index 8ac782991c8..4da59cf0991 100644 --- a/website/docs/docs/supported-data-platforms.md +++ b/website/docs/docs/supported-data-platforms.md @@ -12,6 +12,10 @@ You can [connect](/docs/connect-adapters) to adapters and data platforms either You can also further customize how dbt works with your specific data platform via configuration: see [Configuring Postgres](/reference/resource-configs/postgres-configs) for an example. 
+import MSCallout from '/snippets/_microsoft-adapters-soon.md'; + + + ## Types of Adapters There are three types of adapters available today: diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md index ea5833d586b..aa464aced4a 100644 --- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md +++ b/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md @@ -26,11 +26,11 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md'; ## Custom integration -- You can create custom integrations using different languages and tools. We support connecting with JDBC, ADBC, and a GraphQL APIs. For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/). +- You can create custom integrations using different languages and tools. We support connecting with JDBC, ADBC, and GraphQL APIs. For more info, check out [our examples on GitHub](https://github.com/dbt-labs/example-semantic-layer-clients/). - You can also connect to tools that allow you to write SQL. These tools must meet one of the two criteria: - Supports a generic JDBC driver option (such as DataGrip) or - - Supports Dremio and uses ArrowFlightSQL driver version 12.0.0 or higher. + - Uses Arrow Flight SQL JDBC driver version 12.0.0 or higher. ## Related docs diff --git a/website/docs/docs/use-dbt-semantic-layer/gsheets.md b/website/docs/docs/use-dbt-semantic-layer/gsheets.md index 8ddaae0364c..c4480f63923 100644 --- a/website/docs/docs/use-dbt-semantic-layer/gsheets.md +++ b/website/docs/docs/use-dbt-semantic-layer/gsheets.md @@ -51,3 +51,10 @@ To use the filter functionality, choose the dimension you want to filter by and - If it's a categorical dimension, type in the dimension value you want to filter by (no quotes needed) and press enter. - Continue adding additional filters as needed with AND and OR. If it's a time dimension, choose the operator and select from the calendar. + + +**Limited Use Policy Disclosure** + +The dbt Semantic Layer for Sheet's use and transfer to any other app of information received from Google APIs will adhere to [Google API Services User Data Policy](https://developers.google.com/terms/api-services-user-data-policy), including the Limited Use requirements. + + diff --git a/website/docs/docs/verified-adapters.md b/website/docs/docs/verified-adapters.md index a2d28a612d6..170bc8f885b 100644 --- a/website/docs/docs/verified-adapters.md +++ b/website/docs/docs/verified-adapters.md @@ -13,6 +13,10 @@ The verification process serves as the on-ramp to integration with dbt Cloud. As To learn more, see [Verifying a new adapter](/guides/dbt-ecosystem/adapter-development/7-verifying-a-new-adapter). +import MSCallout from '/snippets/_microsoft-adapters-soon.md'; + + + Here are the verified data platforms that connect to dbt and its latest version. import AdaptersVerified from '/snippets/_adapters-verified.md'; diff --git a/website/docs/faqs/Models/reference-models-in-another-project.md b/website/docs/faqs/Models/reference-models-in-another-project.md deleted file mode 100644 index 19f3f52da31..00000000000 --- a/website/docs/faqs/Models/reference-models-in-another-project.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: How can I reference models or macros in another project? 
 -description: "Use packages to add another project to your dbt project" -sidebar_label: 'Reference models or macros in another project' -id: reference-models-in-another-project - ---- - -You can use [packages](/docs/build/packages) to add another project to your dbt -project, including other projects you've created. Check out the [docs](/docs/build/packages) -for more information! diff --git a/website/docs/guides/migration/sl-migration.md b/website/docs/guides/migration/sl-migration.md index c9def4537a3..5ff4c72d89c 100644 --- a/website/docs/guides/migration/sl-migration.md +++ b/website/docs/guides/migration/sl-migration.md @@ -65,8 +65,8 @@ This step is only relevant to users who want the legacy and new semantic layer t 1. Create a new deployment environment in dbt Cloud and set the dbt version to 1.6 or higher. 2. Choose `Only run on a custom branch` and point to the branch that has the updated metric definition 3. Set the deployment schema to a temporary migration schema, such as `tmp_sl_migration`. Optional, you can create a new database for the migration. -4. Create a job to parse your project, such as `dbt parse`, and run it. Make sure this job succeeds, There needs to be a successful job in your environment in order to set up the semantic layer -5. In Account Settings > Projects > Project details click `Configure the Semantic Layer`. Under **Environment**select the deployment environment you created in the previous step. Save your configuration. +4. Create a job to parse your project, such as `dbt parse`, and run it. Make sure this job succeeds; there needs to be a successful job in your environment in order to set up the semantic layer. +5. In Account Settings > Projects > Project details click `Configure the Semantic Layer`. Under **Environment**, select the deployment environment you created in the previous step. Save your configuration. 6. In the Project details page, click `Generate service token` and grant it `Semantic Layer Only` and `Metadata Only` permissions. Save this token securely - you will need it to connect to the semantic layer. At this point, both the new semantic layer and the old semantic layer will be running. The new semantic layer will be pointing at your migration branch with the updated metrics definitions. diff --git a/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md b/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md index 036c734dfb1..f350e8955f7 100644 --- a/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md +++ b/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md @@ -8,7 +8,7 @@ description: New features and changes in dbt Core v1.7 - [Changelog](https://github.com/dbt-labs/dbt-core/blob/8aaed0e29f9560bc53d9d3e88325a9597318e375/CHANGELOG.md) - [CLI Installation guide](/docs/core/installation) - [Cloud upgrade guide](/docs/dbt-versions/upgrade-core-in-cloud) -- [Release schedule](https://github.com/dbt-labs/dbt-core/issues/7481) +- [Release schedule](https://github.com/dbt-labs/dbt-core/issues/8260) ## What to know before upgrading @@ -16,9 +16,49 @@ dbt Labs is committed to providing backward compatibility for all versions 1.x, ### Behavior changes -**COMING SOON** +dbt Core v1.7 expands the number of sources you can configure freshness for. Previously, freshness was limited to sources with a `loaded_at_field`; now, freshness can be generated from warehouse metadata tables when available. -### Quick hits +As part of this change, the `loaded_at_field` is no longer required to generate source freshness. 
 If a source has a `freshness:` block, dbt will attempt to calculate freshness for that source: +- If a `loaded_at_field` is provided, dbt will calculate freshness via a select query (previous behavior). +- If a `loaded_at_field` is _not_ provided, dbt will calculate freshness via warehouse metadata tables when possible (new behavior). + +This is a relatively small behavior change, but worth calling out in case you notice that dbt is calculating freshness for _more_ sources than before. To exclude a source from freshness calculations, you have two options: +- Don't add a `freshness:` block. +- Explicitly set `freshness: null` + +## New and changed features and functionality + +- [`dbt docs generate`](/reference/commands/cmd-docs) now supports `--select` to generate [catalog metadata](/reference/artifacts/catalog-json) for a subset of your project. Currently available for Snowflake and Postgres only, but other adapters are coming soon. +- [Source freshness](/docs/deploy/source-freshness) can now be generated from warehouse metadata tables, currently Snowflake only, but other adapters that have metadata tables are coming soon. + +### MetricFlow enhancements + +- Automatically create metrics on measures with [`create_metric: true`](/docs/build/semantic-models). +- Optional [`label`](/docs/build/semantic-models) in semantic_models, measures, dimensions, and entities. +- New configurations for semantic models - [enable/disable](/reference/resource-configs/enabled), [group](/reference/resource-configs/group), and [meta](/reference/resource-configs/meta). +- Support `fill_nulls_with` and `join_to_timespine` for metric nodes. +- `saved_queries` extends governance beyond the semantic objects to their consumption. -**COMING SOON** +### For consumers of dbt artifacts (metadata) + +- The [manifest](/reference/artifacts/manifest-json) schema version has been updated to v11. +- The [run_results](/reference/artifacts/run-results-json) schema version has been updated to v5. +- There are a few specific changes to the [catalog.json](/reference/artifacts/catalog-json): + - Added [node attributes](/reference/artifacts/run-results-json) related to compilation (`compiled`, `compiled_code`, `relation_name`) to the `catalog.json`. + - The nodes dictionary in the `catalog.json` can now be "partial" if `dbt docs generate` is run with a selector. + +### Model governance + +dbt Core v1.5 introduced model governance which we're continuing to refine. v1.7 includes these additional features and functionality: + +- **[Breaking change detection](/reference/resource-properties/versions#detecting-breaking-changes) for models with contracts enforced:** When dbt detects a breaking change to a model with an enforced contract during state comparison, it will now raise an error for versioned models and a warning for models that are not versioned. +- **[Set `access` as a config](/reference/resource-configs/access):** You can now set a model's `access` within config blocks in the model's file or in the `dbt_project.yml` for an entire subfolder at once. +- **[Type aliasing for model contracts](/reference/resource-configs/contract):** dbt will use each adapter's built-in type aliasing for user-provided data types—meaning you can now write `string` always, and dbt will translate to `text` on Postgres/Redshift. This is "on" by default, but you can opt out. 
 +- **[Raise warning for numeric types](/reference/resource-configs/contract):** Putting `numeric` in model contracts without specifying precision and scale falls back to default values such as `numeric(38,0)`, which might round decimals unexpectedly. dbt will now warn you if it finds a numeric type without specified precision/scale. + +### Quick hits +With these quick hits, you can now: +- Configure a [`delimiter`](/reference/resource-configs/delimiter) for a seed file. +- Use packages with the same git repo and unique subdirectory. +- Access the `date_spine` macro directly from dbt-core (moved over from dbt-utils). diff --git a/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md b/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md index a3ebc947aaf..50b0ca8bc58 100644 --- a/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md +++ b/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md @@ -90,4 +90,5 @@ More consistency and flexibility around packages. Resources defined in a package - [`dbt debug --connection`](/reference/commands/debug) to test just the data platform connection specified in a profile - [`dbt docs generate --empty-catalog`](/reference/commands/cmd-docs) to skip catalog population while generating docs - [`--defer-state`](/reference/node-selection/defer) enables more-granular control +- [`dbt ls`](/reference/commands/list) adds the Semantic model selection method to allow for `dbt ls -s "semantic_model:*"` and the ability to execute `dbt ls --resource-type semantic_model`. diff --git a/website/docs/guides/orchestration/airflow-and-dbt-cloud/1-airflow-and-dbt-cloud.md b/website/docs/guides/orchestration/airflow-and-dbt-cloud/1-airflow-and-dbt-cloud.md index d453106eead..d6760771b79 100644 --- a/website/docs/guides/orchestration/airflow-and-dbt-cloud/1-airflow-and-dbt-cloud.md +++ b/website/docs/guides/orchestration/airflow-and-dbt-cloud/1-airflow-and-dbt-cloud.md @@ -19,7 +19,7 @@ There are [so many great examples](https://gitlab.com/gitlab-data/analytics/-/bl ### Airflow + dbt Cloud API w/Custom Scripts -This has served as a bridge until the fabled Astronomer + dbt Labs-built dbt Cloud provider became generally available [here](https://registry.astronomer.io/providers/dbt-cloud?type=Sensors&utm_campaign=Monthly%20Product%20Updates&utm_medium=email&_hsmi=208603877&utm_content=208603877&utm_source=hs_email). +This has served as a bridge until the fabled Astronomer + dbt Labs-built dbt Cloud provider became generally available [here](https://registry.astronomer.io/providers/dbt%20Cloud/versions/latest). There are many different permutations of this over time: 
dbt_bilbo - threads: 1 - timeout_seconds: 300 - location: US - priority: interactive + dev: + type: bigquery + method: service-account + keyfile: /Users/BBaggins/.dbt/dbt-tutorial-project-331118.json # replace this with the full path to your keyfile + project: grand-highway-265418 # Replace this with your project id + dataset: dbt_bbagins # Replace this with dbt_your_name, e.g. dbt_bilbo + threads: 1 + timeout_seconds: 300 + location: US + priority: interactive ```
diff --git a/website/docs/reference/artifacts/manifest-json.md b/website/docs/reference/artifacts/manifest-json.md index 5e8dcedd2d5..47a9849eda5 100644 --- a/website/docs/reference/artifacts/manifest-json.md +++ b/website/docs/reference/artifacts/manifest-json.md @@ -3,15 +3,9 @@ title: "Manifest JSON file" sidebar_label: "Manifest" --- -| dbt Core version | Manifest version | -|------------------|---------------------------------------------------------------| -| v1.6 | [v10](https://schemas.getdbt.com/dbt/manifest/v10/index.html) | -| v1.5 | [v9](https://schemas.getdbt.com/dbt/manifest/v9/index.html) | -| v1.4 | [v8](https://schemas.getdbt.com/dbt/manifest/v8/index.html) | -| v1.3 | [v7](https://schemas.getdbt.com/dbt/manifest/v7/index.html) | -| v1.2 | [v6](https://schemas.getdbt.com/dbt/manifest/v6/index.html) | -| v1.1 | [v5](https://schemas.getdbt.com/dbt/manifest/v5/index.html) | -| v1.0 | [v4](https://schemas.getdbt.com/dbt/manifest/v4/index.html) | +import ManifestVersions from '/snippets/_manifest-versions.md'; + + **Produced by:** Any command that parses your project. This includes all commands **except** [`deps`](/reference/commands/deps), [`clean`](/reference/commands/clean), [`debug`](/reference/commands/debug), [`init`](/reference/commands/init) diff --git a/website/docs/reference/commands/cmd-docs.md b/website/docs/reference/commands/cmd-docs.md index 754c5e93baf..bc4840464b8 100644 --- a/website/docs/reference/commands/cmd-docs.md +++ b/website/docs/reference/commands/cmd-docs.md @@ -19,6 +19,18 @@ The command is responsible for generating your project's documentation website b dbt docs generate ``` + + +Use the `--select` argument to limit the nodes included within `catalog.json`. When this flag is provided, step (3) will be restricted to the selected nodes. All other nodes will be excluded. Step (2) is unaffected. + +**Example**: +```shell +dbt docs generate --select +orders +``` + + + + Use the `--no-compile` argument to skip re-compilation. When this flag is provided, `dbt docs generate` will skip step (2) described above. **Example**: diff --git a/website/docs/reference/commands/init.md b/website/docs/reference/commands/init.md index 873647814ec..ac55717c0ec 100644 --- a/website/docs/reference/commands/init.md +++ b/website/docs/reference/commands/init.md @@ -17,10 +17,21 @@ Then, it will: - Create a new folder with your project name and sample files, enough to get you started with dbt - Create a connection profile on your local machine. The default location is `~/.dbt/profiles.yml`. Read more in [configuring your profile](/docs/core/connect-data-platform/connection-profiles). + + +When using `dbt init` to initialize your project, include the `--profile` flag to specify an existing `profiles.yml` as the `profile:` key to use instead of creating a new one. For example, `dbt init --profile`. + + + +If the profile does not exist in `profiles.yml` or the command is run inside an existing project, the command raises an error. + + + ## Existing project If you've just cloned or downloaded an existing dbt project, `dbt init` can still help you set up your connection profile so that you can start working quickly. It will prompt you for connection information, as above, and add a profile (using the `profile` name from the project) to your local `profiles.yml`, or create the file if it doesn't already exist. + ## profile_template.yml `dbt init` knows how to prompt for connection information by looking for a file named `profile_template.yml`. 
It will look for this file in two places: diff --git a/website/docs/reference/commands/list.md b/website/docs/reference/commands/list.md index 6084b3dec70..93a0b87dd93 100644 --- a/website/docs/reference/commands/list.md +++ b/website/docs/reference/commands/list.md @@ -10,7 +10,7 @@ The `dbt ls` command lists resources in your dbt project. It accepts selector ar ### Usage ``` dbt ls - [--resource-type {model,source,seed,snapshot,metric,test,exposure,analysis,default,all}] + [--resource-type {model,semantic_model,source,seed,snapshot,metric,test,exposure,analysis,default,all}] [--select SELECTION_ARG [SELECTION_ARG ...]] [--models SELECTOR [SELECTOR ...]] [--exclude SELECTOR [SELECTOR ...]] @@ -93,6 +93,16 @@ $ dbt ls --select snowplow.* --output json --output-keys name resource_type desc + + +**Listing Semantic models** + +List all resources upstream of your orders semantic model: +``` +dbt ls -s +semantic_model:orders +``` + + **Listing file paths** ``` diff --git a/website/docs/reference/commands/retry.md b/website/docs/reference/commands/retry.md index d494a46cf1f..8da5d5a77a6 100644 --- a/website/docs/reference/commands/retry.md +++ b/website/docs/reference/commands/retry.md @@ -4,14 +4,6 @@ sidebar_label: "retry" id: "retry" --- -:::info Support in dbt Cloud - -`dbt retry` is supported in the dbt Cloud IDE. - -Native support for restarting scheduled runs from point of failure is currently in development & coming soon. - -::: - `dbt retry` re-executes the last `dbt` command from the node point of failure. If the previously executed `dbt` command was successful, `retry` will finish as `no operation`. Retry works with the following commands: diff --git a/website/docs/reference/dbt-jinja-functions/model.md b/website/docs/reference/dbt-jinja-functions/model.md index e967debd01f..9ccf0759470 100644 --- a/website/docs/reference/dbt-jinja-functions/model.md +++ b/website/docs/reference/dbt-jinja-functions/model.md @@ -52,15 +52,9 @@ To view the structure of `models` and their definitions: Use the following table to understand how the versioning pattern works and match the Manifest version with the dbt version: -| dbt version | Manifest version | -| ----------- | ---------------- | -| `v1.5` | [Manifest v9](https://schemas.getdbt.com/dbt/manifest/v9/index.html) -| `v1.4` | [Manifest v8](https://schemas.getdbt.com/dbt/manifest/v8/index.html) -| `v1.3` | [Manifest v7](https://schemas.getdbt.com/dbt/manifest/v7/index.html) -| `v1.2` | [Manifest v6](https://schemas.getdbt.com/dbt/manifest/v6/index.html) -| `v1.1` | [Manifest v5](https://schemas.getdbt.com/dbt/manifest/v5/index.html) - +import ManifestVersions from '/snippets/_manifest-versions.md'; + ## Related docs diff --git a/website/docs/reference/model-properties.md b/website/docs/reference/model-properties.md index 730432c88af..63adc1f0d63 100644 --- a/website/docs/reference/model-properties.md +++ b/website/docs/reference/model-properties.md @@ -18,7 +18,7 @@ models: show: true | false [latest_version](/reference/resource-properties/latest_version): [deprecation_date](/reference/resource-properties/deprecation_date): - [access](/reference/resource-properties/access): private | protected | public + [access](/reference/resource-configs/access): private | protected | public [config](/reference/resource-properties/config): [](/reference/model-configs): [constraints](/reference/resource-properties/constraints): @@ -46,7 +46,7 @@ models: [description](/reference/resource-properties/description): [docs](/reference/resource-configs/docs): show: 
true | false - [access](/reference/resource-properties/access): private | protected | public + [access](/reference/resource-configs/access): private | protected | public [constraints](/reference/resource-properties/constraints): - [config](/reference/resource-properties/config): diff --git a/website/docs/reference/node-selection/methods.md b/website/docs/reference/node-selection/methods.md index 2647f3416a3..f476ed87ed1 100644 --- a/website/docs/reference/node-selection/methods.md +++ b/website/docs/reference/node-selection/methods.md @@ -321,7 +321,7 @@ Supported in v1.5 or newer. -The `access` method selects models based on their [access](/reference/resource-properties/access) property. +The `access` method selects models based on their [access](/reference/resource-configs/access) property. ```bash dbt list --select access:public # list all public models @@ -352,3 +352,18 @@ dbt list --select version:none # models that are *not* versioned ``` + +### The "semantic_model" method + +Supported in v1.6 or newer. + + + +The `semantic_model` method selects [semantic models](/docs/build/semantic-models). + +```bash +dbt list --select semantic_model:* # list all semantic models +dbt list --select +semantic_model:orders # list your semantic model named "orders" and all upstream resources +``` + + \ No newline at end of file diff --git a/website/docs/reference/node-selection/syntax.md b/website/docs/reference/node-selection/syntax.md index 7c165b0f4ff..924347492a1 100644 --- a/website/docs/reference/node-selection/syntax.md +++ b/website/docs/reference/node-selection/syntax.md @@ -14,6 +14,7 @@ dbt's node selection syntax makes it possible to run only specific resources in | [compile](/reference/commands/compile) | `--select`, `--exclude`, `--selector`, `--inline` | | [freshness](/reference/commands/source) | `--select`, `--exclude`, `--selector` | | [build](/reference/commands/build) | `--select`, `--exclude`, `--selector`, `--resource-type`, `--defer` | +| [docs generate](/reference/commands/cmd-docs) | `--select`, `--exclude`, `--selector`, `--defer` | :::info Nodes and resources diff --git a/website/docs/reference/node-selection/test-selection-examples.md b/website/docs/reference/node-selection/test-selection-examples.md index 52439d95d97..478aac932c5 100644 --- a/website/docs/reference/node-selection/test-selection-examples.md +++ b/website/docs/reference/node-selection/test-selection-examples.md @@ -19,14 +19,14 @@ Run generic tests only: ```bash - $ dbt test --select test_type:generic + dbt test --select test_type:generic ``` Run singular tests only: ```bash - $ dbt test --select test_type:singular + dbt test --select test_type:singular ``` In both cases, `test_type` checks a property of the test itself. These are forms of "direct" test selection. @@ -87,8 +87,8 @@ By default, a test will run when ANY parent is selected; we call this "eager" in In this mode, any test that depends on unbuilt resources will raise an error. ```shell -$ dbt test --select orders -$ dbt build --select orders + dbt test --select orders + dbt build --select orders ``` @@ -102,8 +102,8 @@ It will only include tests whose references are each within the selected nodes. Put another way, it will prevent tests from running if one or more of its parents is unselected. 
```shell -$ dbt test --select orders --indirect-selection=cautious -$ dbt build --select orders --indirect-selection=cautious + dbt test --select orders --indirect-selection=cautious + dbt build --select orders --indirect-selection=cautious ``` @@ -122,8 +122,8 @@ By default, a test will run when ANY parent is selected; we call this "eager" in In this mode, any test that depends on unbuilt resources will raise an error. ```shell -$ dbt test --select orders -$ dbt build --select orders + dbt test --select orders + dbt build --select orders ``` @@ -137,8 +137,8 @@ It will only include tests whose references are each within the selected nodes. Put another way, it will prevent tests from running if one or more of its parents is unselected. ```shell -$ dbt test --select orders --indirect-selection=cautious -$ dbt build --select orders --indirect-selection=cautious + dbt test --select orders --indirect-selection=cautious + dbt build --select orders --indirect-selection=cautious ``` @@ -152,8 +152,8 @@ It will only include tests whose references are each within the selected nodes ( This is useful in the same scenarios as "cautious", but also includes when a test depends on a model **and** a direct ancestor of that model (like confirming an aggregation has the same totals as its input). ```shell -$ dbt test --select orders --indirect-selection=buildable -$ dbt build --select orders --indirect-selection=buildable + dbt test --select orders --indirect-selection=buildable + dbt build --select orders --indirect-selection=buildable ``` @@ -172,8 +172,8 @@ By default, a test will run when ANY parent is selected; we call this "eager" in In this mode, any test that depends on unbuilt resources will raise an error. ```shell -$ dbt test --select orders -$ dbt build --select orders + dbt test --select orders + dbt build --select orders ``` @@ -187,8 +187,8 @@ It will only include tests whose references are each within the selected nodes. Put another way, it will prevent tests from running if one or more of its parents is unselected. ```shell -$ dbt test --select orders --indirect-selection=cautious -$ dbt build --select orders --indirect-selection=cautious + dbt test --select orders --indirect-selection=cautious + dbt build --select orders --indirect-selection=cautious ``` @@ -202,8 +202,8 @@ It will only include tests whose references are each within the selected nodes ( This is useful in the same scenarios as "cautious", but also includes when a test depends on a model **and** a direct ancestor of that model (like confirming an aggregation has the same totals as its input). ```shell -$ dbt test --select orders --indirect-selection=buildable -$ dbt build --select orders --indirect-selection=buildable + dbt test --select orders --indirect-selection=buildable + dbt build --select orders --indirect-selection=buildable ``` @@ -213,8 +213,8 @@ $ dbt build --select orders --indirect-selection=buildable This mode will only include tests whose references are each within the selected nodes and will ignore all tests from attached nodes.
```shell -$ dbt test --select orders --indirect-selection=empty -$ dbt build --select orders --indirect-selection=empty + dbt test --select orders --indirect-selection=empty + dbt build --select orders --indirect-selection=empty ``` @@ -234,22 +234,25 @@ The following examples should feel somewhat familiar if you're used to executing ```bash # Run tests on a model (indirect selection) - $ dbt test --select customers + dbt test --select customers + + # Run tests on two or more specific models (indirect selection) + dbt test --select customers orders # Run tests on all models in the models/staging/jaffle_shop directory (indirect selection) - $ dbt test --select staging.jaffle_shop + dbt test --select staging.jaffle_shop # Run tests downstream of a model (note this will select those tests directly!) - $ dbt test --select stg_customers+ + dbt test --select stg_customers+ # Run tests upstream of a model (indirect selection) - $ dbt test --select +stg_customers + dbt test --select +stg_customers # Run tests on all models with a particular tag (direct + indirect) - $ dbt test --select tag:my_model_tag + dbt test --select tag:my_model_tag # Run tests on all models with a particular materialization (indirect selection) - $ dbt test --select config.materialized:table + dbt test --select config.materialized:table ``` @@ -258,16 +261,19 @@ The following examples should feel somewhat familiar if you're used to executing ```bash # tests on all sources - $ dbt test --select source:* + dbt test --select source:* # tests on one source - $ dbt test --select source:jaffle_shop + dbt test --select source:jaffle_shop + + # tests on two or more specific sources + dbt test --select source:jaffle_shop source:raffle_bakery # tests on one source table - $ dbt test --select source:jaffle_shop.customers + dbt test --select source:jaffle_shop.customers # tests on everything _except_ sources - $ dbt test --exclude source:* + dbt test --exclude source:* ``` ### More complex selection @@ -276,10 +282,10 @@ Through the combination of direct and indirect selection, there are many ways to ```bash - $ dbt test --select assert_total_payment_amount_is_positive # directly select the test by name - $ dbt test --select payments,test_type:singular # indirect selection, v1.2 - $ dbt test --select payments,test_type:data # indirect selection, v0.18.0 - $ dbt test --select payments --data # indirect selection, earlier versions + dbt test --select assert_total_payment_amount_is_positive # directly select the test by name + dbt test --select payments,test_type:singular # indirect selection, v1.2 + dbt test --select payments,test_type:data # indirect selection, v0.18.0 + dbt test --select payments --data # indirect selection, earlier versions ``` @@ -288,13 +294,13 @@ Through the combination of direct and indirect selection, there are many ways to ```bash # Run tests on all models with a particular materialization - $ dbt test --select config.materialized:table + dbt test --select config.materialized:table # Run tests on all seeds, which use the 'seed' materialization - $ dbt test --select config.materialized:seed + dbt test --select config.materialized:seed # Run tests on all snapshots, which use the 'snapshot' materialization - $ dbt test --select config.materialized:snapshot + dbt test --select config.materialized:snapshot ``` Note that this functionality may change in future versions of dbt. 
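For repeated use, the flag combinations above can be captured once in a YAML selector. The following is a minimal sketch, assuming dbt v1.2 or newer and a model named `orders`; the selector name `orders_tests_cautious` is illustrative:

```yml
# selectors.yml — a sketch; pairs node selection with a cautious indirect-selection mode
selectors:
  - name: orders_tests_cautious
    description: Tests for orders, skipping tests that also touch unselected parents
    definition:
      method: fqn
      value: orders
      indirect_selection: cautious
```

You could then run `dbt test --selector orders_tests_cautious` instead of retyping the flags on every invocation.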
@@ -322,7 +328,7 @@ models: ```bash - $ dbt test --select tag:my_column_tag + dbt test --select tag:my_column_tag ``` Currently, tests "inherit" tags applied to columns, sources, and source tables. They do _not_ inherit tags applied to models, seeds, or snapshots. In all likelihood, those tests would still be selected indirectly, because the tag selects its parent. This is a subtle distinction, and it may change in future versions of dbt. @@ -350,5 +356,5 @@ models: ```bash - $ dbt test --select tag:my_test_tag + dbt test --select tag:my_test_tag ``` diff --git a/website/docs/reference/resource-configs/access.md b/website/docs/reference/resource-configs/access.md new file mode 100644 index 00000000000..da50e48d2f0 --- /dev/null +++ b/website/docs/reference/resource-configs/access.md @@ -0,0 +1,97 @@ +--- +resource_types: [models] +datatype: access +--- + + + +```yml +version: 2 + +models: + - name: model_name + access: private | protected | public +``` + + + + + +Access modifiers may be applied to models one-by-one in YAML properties. In v1.5 and v1.6, you are unable to configure `access` for multiple models at once. Upgrade to v1.7 for additional configuration options. A group or subfolder may contain models with varying access levels, so when you designate a model with `access: public`, make sure you intend for this behavior. + + + + + +You can apply access modifiers in config files, including `dbt_project.yml`, or to models one-by-one in YAML properties. Applying access configs to a subfolder modifies the default for all models in that subfolder. When setting access model-by-model, remember that a group or subfolder might contain a variety of access levels, so designating a model with `access: public` should be a deliberate choice. + +There are multiple approaches to configuring access: + +In a model's YAML properties file: + +```yaml +models: + - name: my_public_model + access: public # Older method, still supported + +``` +Or (but not both): + +```yaml +models: + - name: my_public_model + config: + access: public # newly supported in v1.7 + +``` + +In `dbt_project.yml`, for a subfolder: +```yaml +models: + my_project_name: + subfolder_name: + +group: + +access: private # sets default for all models in this subfolder +``` + +In the model's SQL file: + +```sql +-- models/my_public_model.sql + +{{ config(access = "public") }} + +select ... +``` + + + +## Definition +The access level of the model you are declaring properties for. + +Some models (not all) are designed to be referenced through the [ref](/reference/dbt-jinja-functions/ref) function across [groups](/docs/build/groups). + +| Access | Referenceable by | +|-----------|-------------------------------| +| private | same group | +| protected | same project/package | +| public | any group, package or project | + +If you try to reference a model outside of its supported access, you will see an error: + +```shell +dbt run -s marketing_model +... +dbt.exceptions.DbtReferenceError: Parsing Error + Node model.jaffle_shop.marketing_model attempted to reference node model.jaffle_shop.finance_model, + which is not allowed because the referenced node is private to the finance group. +``` + +## Default + +By default, all models are "protected." This means that other models in the same project can reference them.
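Because `private` access is scoped to [groups](/docs/build/groups), a working setup also needs a group definition alongside the access levels. The following is a minimal sketch, assuming a hypothetical `finance` group and illustrative model names:

```yml
# models/_groups.yml — a sketch; the group name, owner details, and model names are placeholders
groups:
  - name: finance
    owner:
      name: Finance Team
      email: finance@example.com

models:
  - name: finance_model
    group: finance
    access: private # referenceable only by models in the "finance" group
  - name: finance_mart
    group: finance
    access: public # referenceable from any group, package, or project
```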
+ +## Related docs + +* [Model Access](/docs/collaborate/govern/model-access#groups) +* [Group configuration](/reference/resource-configs/group) diff --git a/website/docs/reference/resource-configs/contract.md b/website/docs/reference/resource-configs/contract.md index e8ea6d82287..59cc511890b 100644 --- a/website/docs/reference/resource-configs/contract.md +++ b/website/docs/reference/resource-configs/contract.md @@ -23,11 +23,34 @@ When the `contract` configuration is enforced, dbt will ensure that your model's This is to ensure that the people querying your model downstream—both inside and outside dbt—have a predictable and consistent set of columns to use in their analyses. Even a subtle change in data type, such as from `boolean` (`true`/`false`) to `integer` (`0`/`1`), could cause queries to fail in surprising ways. + + The `data_type` defined in your YAML file must match a data type your data platform recognizes. dbt does not do any type aliasing itself. If your data platform recognizes both `int` and `integer` as corresponding to the same type, then they will return a match. -When dbt is comparing data types, it will not compare granular details such as size, precision, or scale. We don't think you should sweat the difference between `varchar(256)` and `varchar(257)`, because it doesn't really affect the experience of downstream queriers. If you need a more-precise assertion, it's always possible to accomplish by [writing or using a custom test](/guides/best-practices/writing-custom-generic-tests). + + + + +dbt uses built-in type aliasing for the `data_type` defined in your YAML. For example, you can specify `string` in your contract, and on Postgres/Redshift, dbt will convert it to `text`. If dbt doesn't recognize the `data_type` name among its known aliases, it will pass it through as-is. This is enabled by default, but you can opt out by setting `alias_types` to `false`. + +For example, to disable type aliasing: + +```yml + +models: + - name: my_model + config: + contract: + enforced: true + alias_types: false # true by default + +``` + + + +When dbt compares data types, it will not compare granular details such as size, precision, or scale. We don't think you should sweat the difference between `varchar(256)` and `varchar(257)`, because it doesn't really affect the experience of downstream queriers. You can accomplish a more-precise assertion by [writing or using a custom test](/guides/best-practices/writing-custom-generic-tests). -That said, on certain data platforms, you will need to specify a varchar size or numeric scale if you do not want it to revert to the default. This is most relevant for the `numeric` type on Snowflake, which defaults to a precision of 38 and a scale of 0 (zero digits after the decimal, such as rounded to an integer). To avoid this implicit coercion, specify your `data_type` with a nonzero scale, like `numeric(38, 6)`. +Note that you need to specify a varchar size or numeric scale; otherwise, dbt relies on default values. For example, if a `numeric` type defaults to a precision of 38 and a scale of 0, then the numeric column stores 0 digits to the right of the decimal (it only stores whole numbers), which might cause it to fail contract enforcement. To avoid this implicit coercion, specify your `data_type` with a nonzero scale, like `numeric(38, 6)`. dbt Core 1.7 and higher provides a warning if you don't specify precision and scale when providing a numeric data type.
## Example @@ -47,6 +70,8 @@ models: - type: not_null - name: customer_name data_type: string + - name: non_integer + data_type: numeric(38,3) ``` diff --git a/website/docs/reference/resource-configs/delimiter.md b/website/docs/reference/resource-configs/delimiter.md new file mode 100644 index 00000000000..58d6ba8344a --- /dev/null +++ b/website/docs/reference/resource-configs/delimiter.md @@ -0,0 +1,126 @@ +--- +resource_types: [seeds] +datatype: +default_value: "," +--- + +## Definition + +You can use this optional seed configuration to customize how you separate values in a [seed](/docs/build/seeds) by providing a one-character string. + +* The delimiter defaults to a comma when not specified. +* Explicitly set the `delimiter` configuration value if you want seed files to use a different delimiter, such as "|" or ";". + +:::info New in 1.7! + +Delimiter is new functionality available beginning with dbt Core v1.7. + +::: + + +## Usage + +Specify a delimiter in your `dbt_project.yml` file to customize the global separator for all seed values: + + + +```yml +seeds: + : + +delimiter: "|" # default project delimiter for seeds will be "|" + : + +delimiter: "," # delimiter for seeds in seed_subdirectory will be "," +``` + + + + +Or use a custom delimiter to override the values for a specific seed: + + + +```yml +version: 2 + +seeds: + - name: + config: + delimiter: "|" +``` + + + +## Examples +For a project with: + +* `name: jaffle_shop` in the `dbt_project.yml` file +* `seed-paths: ["seeds"]` in the `dbt_project.yml` file + +### Use a custom delimiter to override global values + +You can set a default behavior for all seeds with an exception for one seed, `seed_a`, which uses a comma: + + + +```yml +seeds: + jaffle_shop: + +delimiter: "|" # default delimiter for seeds in jaffle_shop project will be "|" + seed_a: + +delimiter: "," # delimiter for seed_a will be "," +``` + + + +Your corresponding seed files would be formatted like this: + + + +```text +col_a|col_b|col_c +1|2|3 +4|5|6 +... +``` + + + + + +```text +name,id +luna,1 +doug,2 +... +``` + + + +Or you can configure custom behavior for one seed. The `country_codes` seed uses the ";" delimiter: + + + +```yml +version: 2 + +seeds: + - name: country_codes + config: + delimiter: ";" +``` + + + +The `country_codes` seed file would be formatted like this: + + + +```text +country_code;country_name +US;United States +CA;Canada +GB;United Kingdom +... +``` + + diff --git a/website/docs/reference/resource-configs/enabled.md b/website/docs/reference/resource-configs/enabled.md index b6d0961ee60..52045503088 100644 --- a/website/docs/reference/resource-configs/enabled.md +++ b/website/docs/reference/resource-configs/enabled.md @@ -15,6 +15,7 @@ default_value: true { label: 'Sources', value: 'sources', }, { label: 'Metrics', value: 'metrics', }, { label: 'Exposures', value: 'exposures', }, + { label: 'Semantic models', value: 'semantic models', }, ] }> @@ -250,10 +251,39 @@ exposures: + + + + +Support for disabling semantic models has been added in dbt Core v1.7. + + + + + + + +```yml +semantic_models: + - name: semantic_people + model: ref('people') + config: + enabled: false + +``` + + + +The `enabled` configuration can be nested under the `config` key. + + + + + ## Definition -An optional configuration for disabling models, seeds, snapshots, and tests. +An optional configuration for disabling models, seeds, snapshots, tests, and semantic models.
* Default: true diff --git a/website/docs/reference/resource-configs/group.md b/website/docs/reference/resource-configs/group.md index dd73d99edff..7515d8c5789 100644 --- a/website/docs/reference/resource-configs/group.md +++ b/website/docs/reference/resource-configs/group.md @@ -16,6 +16,7 @@ This functionality is new in v1.5. { label: 'Tests', value: 'tests', }, { label: 'Analyses', value: 'analyses', }, { label: 'Metrics', value: 'metrics', }, + { label: 'Semantic models', value: 'semantic models', }, ] }> @@ -265,6 +266,43 @@ metrics: + + + + +Support for grouping semantic models has been added in dbt Core v1.7. + + + + + + + +```yml +semantic_models: + - name: model_name + group: finance + +``` + + + + + +```yml +semantic_models: + [](resource-path): + +group: finance +``` + + + +The `group` configuration can be nested under the `config` key. + + + + + ## Definition diff --git a/website/docs/reference/resource-configs/meta.md b/website/docs/reference/resource-configs/meta.md index d24c5fbaee1..aeff9ee6226 100644 --- a/website/docs/reference/resource-configs/meta.md +++ b/website/docs/reference/resource-configs/meta.md @@ -14,6 +14,8 @@ default_value: {} { label: 'Tests', value: 'tests', }, { label: 'Analyses', value: 'analyses', }, { label: 'Macros', value: 'macros', }, + { label: 'Exposures', value: 'exposures', }, + { label: 'Semantic Models', value: 'semantic models', }, ] }> @@ -172,6 +174,34 @@ exposures: + + + + +Support for `meta` in semantic models was added in dbt Core v1.7. + + + + + + + +```yml +semantic_models: + - name: semantic_people + model: ref('people') + config: + meta: {} + +``` +The `meta` configuration can be nested under the `config` key. + + + + + + + ## Definition @@ -248,3 +278,4 @@ select 1 as id ``` + diff --git a/website/docs/reference/resource-configs/store_failures.md b/website/docs/reference/resource-configs/store_failures.md index 3c965179211..6c71cdb9296 100644 --- a/website/docs/reference/resource-configs/store_failures.md +++ b/website/docs/reference/resource-configs/store_failures.md @@ -3,7 +3,7 @@ resource_types: [tests] datatype: boolean --- -The configured test(s) will store their failures when `dbt test --store-failures` is invoked. +The configured test(s) will store their failures when `dbt test --store-failures` is invoked. If you set this configuration as `false` but [`store_failures_as`](/reference/resource-configs/store_failures_as) is configured, it will be overridden. ## Description Optionally set a test to always or never store its failures in the database. diff --git a/website/docs/reference/resource-configs/store_failures_as.md b/website/docs/reference/resource-configs/store_failures_as.md new file mode 100644 index 00000000000..a9149360089 --- /dev/null +++ b/website/docs/reference/resource-configs/store_failures_as.md @@ -0,0 +1,76 @@ +--- +resource_types: [tests] +id: "store_failures_as" +--- + +For the `test` resource type, `store_failures_as` is an optional config that specifies how test failures should be stored in the database. If [`store_failures`](/reference/resource-configs/store_failures) is also configured, `store_failures_as` takes precedence. + +The three supported values are: + +- `ephemeral` — nothing stored in the database (default) +- `table` — test failures stored as a database table +- `view` — test failures stored as a database view + +You can configure it in all the same places as `store_failures`, including singular tests (.sql files), generic tests (.yml files), and `dbt_project.yml`.
+ +### Examples + +#### Singular test + +[Singular test](https://docs.getdbt.com/docs/build/tests#singular-tests) in `tests/singular/check_something.sql` file + +```sql +{{ config(store_failures_as="table") }} + +-- custom singular test +select 1 as id +where 1=0 +``` + +#### Generic test + +[Generic tests](https://docs.getdbt.com/docs/build/tests#generic-tests) in `models/_models.yml` file + +```yaml +models: + - name: my_model + columns: + - name: id + tests: + - not_null: + config: + store_failures_as: view + - unique: + config: + store_failures_as: ephemeral +``` + +#### Project level + +Config in `dbt_project.yml` + +```yaml +name: "my_project" +version: "1.0.0" +config-version: 2 +profile: "sandcastle" + +tests: + my_project: + +store_failures_as: table + my_subfolder_1: + +store_failures_as: view + my_subfolder_2: + +store_failures_as: ephemeral +``` + +### "Clobbering" configs + +As with most other configurations, `store_failures_as` is "clobbered" when applied hierarchically. Whenever a more specific value is available, it will completely replace the less specific value. + +Additional resources: + +- [Test configurations](/reference/test-configs#related-documentation) +- [Test-specific configurations](/reference/test-configs#test-specific-configurations) +- [Configuring directories of models in dbt_project.yml](/reference/model-configs#configuring-directories-of-models-in-dbt_projectyml) +- [Config inheritance](/reference/configs-and-properties#config-inheritance) \ No newline at end of file diff --git a/website/docs/reference/resource-properties/access.md b/website/docs/reference/resource-properties/access.md deleted file mode 100644 index 42b9893ed7f..00000000000 --- a/website/docs/reference/resource-properties/access.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -resource_types: [models] -datatype: access -required: no ---- - -:::info New functionality -This functionality is new in v1.5. -::: - - - -```yml -version: 2 - -models: - - name: model_name - access: private | protected | public -``` - - - -Access modifiers may be applied to models one-by-one in YAML properties. It is not currently possible to configure `access` for multiple models at once. A group or subfolder contains models with a variety of access levels, and designating a model with `access: public` should always be a conscious and intentional choice. - -## Definition -The access level of the model you are declaring properties for. - -Some models (not all) are designed to be referenced through the [ref](/reference/dbt-jinja-functions/ref) function across [groups](/docs/build/groups). - -| Access | Referenceable by | -|-----------|-------------------------------| -| private | same group | -| protected | same project/package | -| public | any group, package or project | - -If you try to reference a model outside of its supported access, you will see an error: - -```shell -dbt run -s marketing_model -... -dbt.exceptions.DbtReferenceError: Parsing Error - Node model.jaffle_shop.marketing_model attempted to reference node model.jaffle_shop.finance_model, - which is not allowed because the referenced node is private to the finance group. -``` - -## Default - -By default, all models are "protected." This means that other models in the same project can reference them. 
- -## Related docs - -* [Model Access](/docs/collaborate/govern/model-access#groups) -* [Group configuration](/reference/resource-configs/group) diff --git a/website/docs/reference/seed-configs.md b/website/docs/reference/seed-configs.md index d74f414cbfe..429aa9444ae 100644 --- a/website/docs/reference/seed-configs.md +++ b/website/docs/reference/seed-configs.md @@ -23,6 +23,7 @@ seeds: [](/reference/resource-configs/resource-path): [+](/reference/resource-configs/plus-prefix)[quote_columns](/reference/resource-configs/quote_columns): true | false [+](/reference/resource-configs/plus-prefix)[column_types](/reference/resource-configs/column_types): {column_name: datatype} + [+](/reference/resource-configs/plus-prefix)[delimiter](/reference/resource-configs/delimiter): ``` @@ -43,6 +44,7 @@ seeds: config: [quote_columns](/reference/resource-configs/quote_columns): true | false [column_types](/reference/resource-configs/column_types): {column_name: datatype} + [delimiter](/reference/resource-configs/delimiter): ``` diff --git a/website/docs/terms/materialization.md b/website/docs/terms/materialization.md index fdeaaebfcc8..328076f1483 100644 --- a/website/docs/terms/materialization.md +++ b/website/docs/terms/materialization.md @@ -11,7 +11,7 @@ hoverSnippet: The exact Data Definition Language (DDL) that dbt will use when cr :::important This page could use some love -This term would benefit from additional depth and examples. Have knowledge to contribute? [Create a discussion in the docs.getdbt.com GitHub repository](https://github.com/dbt-labs/docs.getdbt.com/discussions) to begin the process of becoming a glossary contributor! +This term would benefit from additional depth and examples. Have knowledge to contribute? [Create an issue in the docs.getdbt.com repository](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose) to begin the process of becoming a glossary contributor! ::: The exact Data Definition Language (DDL) that dbt will use when creating the model’s equivalent in a . It's the manner in which the data is represented, and each of those options is defined either canonically (tables, views, incremental), or bespoke. diff --git a/website/docs/terms/table.md b/website/docs/terms/table.md index 69fc2b3e6b6..cbe36ec1315 100644 --- a/website/docs/terms/table.md +++ b/website/docs/terms/table.md @@ -6,7 +6,7 @@ displayText: table hoverSnippet: In simplest terms, a table is the direct storage of data in rows and columns. Think excel sheet with raw values in each of the cells. --- :::important This page could use some love -This term would benefit from additional depth and examples. Have knowledge to contribute? [Create a discussion in the docs.getdbt.com GitHub repository](https://github.com/dbt-labs/docs.getdbt.com/discussions) to begin the process of becoming a glossary contributor! +This term would benefit from additional depth and examples. Have knowledge to contribute? [Create an issue in the docs.getdbt.com repository](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose) to begin the process of becoming a glossary contributor! ::: In simplest terms, a table is the direct storage of data in rows and columns. Think excel sheet with raw values in each of the cells.
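To make the table/view distinction in these glossary terms concrete, here is a rough sketch of the DDL a warehouse runs for each; relation and column names are illustrative and the exact statements vary by adapter:

```sql
-- a sketch: a table stores the query results physically...
create table analytics.customers as (
    select 1 as id, 'luna' as name
);

-- ...while a view stores only the query logic, re-run on every read
create view analytics.customers_view as (
    select id, name from analytics.customers
);
```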
diff --git a/website/docs/terms/view.md b/website/docs/terms/view.md index 5d9238256e0..90cd5d1f36f 100644 --- a/website/docs/terms/view.md +++ b/website/docs/terms/view.md @@ -6,7 +6,7 @@ displayText: view hoverSnippet: A view (as opposed to a table) is a defined passthrough SQL query that can be run against a database (or data warehouse). --- :::important This page could use some love -This term would benefit from additional depth and examples. Have knowledge to contribute? [Create a discussion in the docs.getdbt.com GitHub repository](https://github.com/dbt-labs/docs.getdbt.com/discussions) to begin the process of becoming a glossary contributor! +This term would benefit from additional depth and examples. Have knowledge to contribute? [Create an issue in the docs.getdbt.com repository](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose) to begin the process of becoming a glossary contributor! ::: A view (as opposed to a ) is a defined passthrough SQL query that can be run against a database (or ). A view doesn’t store data, like a table does, but it defines the logic that you need to fetch the underlying data. diff --git a/website/sidebars.js b/website/sidebars.js index 220f32130da..2ab370b76e9 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -90,6 +90,7 @@ const sidebarSettings = { label: "OAuth with data platforms", items: [ "docs/cloud/manage-access/set-up-snowflake-oauth", + "docs/cloud/manage-access/set-up-databricks-oauth", "docs/cloud/manage-access/set-up-bigquery-oauth", ], }, // oauth @@ -352,6 +353,7 @@ const sidebarSettings = { link: { type: "doc", id: "docs/deploy/monitor-jobs" }, items: [ "docs/deploy/run-visibility", + "docs/deploy/retry-jobs", "docs/deploy/job-notifications", "docs/deploy/webhooks", "docs/deploy/artifacts", @@ -641,7 +643,6 @@ const sidebarSettings = { type: "category", label: "General properties", items: [ - "reference/resource-properties/access", "reference/resource-properties/columns", "reference/resource-properties/config", "reference/resource-properties/constraints", @@ -658,6 +659,7 @@ const sidebarSettings = { type: "category", label: "General configs", items: [ + "reference/resource-configs/access", "reference/resource-configs/alias", "reference/resource-configs/database", "reference/resource-configs/enabled", @@ -692,6 +694,7 @@ const sidebarSettings = { "reference/seed-properties", "reference/seed-configs", "reference/resource-configs/column_types", + "reference/resource-configs/delimiter", "reference/resource-configs/quote_columns", ], }, @@ -719,6 +722,7 @@ const sidebarSettings = { "reference/resource-configs/limit", "reference/resource-configs/severity", "reference/resource-configs/store_failures", + "reference/resource-configs/store_failures_as", "reference/resource-configs/where", ], }, diff --git a/website/snippets/_adapters-trusted.md b/website/snippets/_adapters-trusted.md index 10af0218e22..96f62a925eb 100644 --- a/website/snippets/_adapters-trusted.md +++ b/website/snippets/_adapters-trusted.md @@ -2,7 +2,20 @@ + + + + diff --git a/website/snippets/_manifest-versions.md b/website/snippets/_manifest-versions.md new file mode 100644 index 00000000000..c9b3e7af6ec --- /dev/null +++ b/website/snippets/_manifest-versions.md @@ -0,0 +1,11 @@ + +| dbt Core version | Manifest version | +|------------------|---------------------------------------------------------------| +| v1.7 | [v11](https://schemas.getdbt.com/dbt/manifest/v11/index.html) | +| v1.6 | [v10](https://schemas.getdbt.com/dbt/manifest/v10/index.html) | +| 
v1.5 | [v9](https://schemas.getdbt.com/dbt/manifest/v9/index.html) | +| v1.4 | [v8](https://schemas.getdbt.com/dbt/manifest/v8/index.html) | +| v1.3 | [v7](https://schemas.getdbt.com/dbt/manifest/v7/index.html) | +| v1.2 | [v6](https://schemas.getdbt.com/dbt/manifest/v6/index.html) | +| v1.1 | [v5](https://schemas.getdbt.com/dbt/manifest/v5/index.html) | +| v1.0 | [v4](https://schemas.getdbt.com/dbt/manifest/v4/index.html) | \ No newline at end of file diff --git a/website/snippets/_microsoft-adapters-soon.md b/website/snippets/_microsoft-adapters-soon.md new file mode 100644 index 00000000000..c3f30ef0939 --- /dev/null +++ b/website/snippets/_microsoft-adapters-soon.md @@ -0,0 +1,3 @@ +:::tip Coming soon +dbt Cloud support for the Microsoft Fabric and Azure Synapse Analytics adapters is coming soon! +::: \ No newline at end of file diff --git a/website/snippets/connect-starburst-trino/roles-starburst-enterprise.md b/website/snippets/connect-starburst-trino/roles-starburst-enterprise.md index ba11508f1b4..f832d52be20 100644 --- a/website/snippets/connect-starburst-trino/roles-starburst-enterprise.md +++ b/website/snippets/connect-starburst-trino/roles-starburst-enterprise.md @@ -1,3 +1,6 @@ -[comment: For context, the section title used for this snippet is "Roles in Starburst Enterprise" ]: # +[comment: For context, the section title used for this snippet is "Roles in Starburst Enterprise" ]: # -If connecting to a Starburst Enterprise cluster with built-in access controls enabled, you can't add the role as a suffix to the username, so the default role for the provided username is used instead. +If connecting to a Starburst Enterprise cluster with built-in access controls +enabled, you must specify a role using the format detailed in [Additional +parameters](#additional-parameters). If a role is not specified, the default +role for the provided username is used. \ No newline at end of file diff --git a/website/snippets/core-versions-table.md b/website/snippets/core-versions-table.md index 431e1f08b4c..b08c23c84c5 100644 --- a/website/snippets/core-versions-table.md +++ b/website/snippets/core-versions-table.md @@ -6,7 +6,7 @@ | [**v1.6**](/guides/migration/versions/upgrading-to-v1.6) | Jul 31, 2023 | Active | Jul 30, 2024 | | [**v1.5**](/guides/migration/versions/upgrading-to-v1.5) | Apr 27, 2023 | Critical | Apr 27, 2024 | | [**v1.4**](/guides/migration/versions/upgrading-to-v1.4) | Jan 25, 2023 | Critical | Jan 25, 2024 | -| [**v1.3**](/guides/migration/versions/upgrading-to-v1.3) | Oct 12, 2022 | Critical | Oct 12, 2023 | +| [**v1.3**](/guides/migration/versions/upgrading-to-v1.3) | Oct 12, 2022 | End of Life* ⚠️ | Oct 12, 2023 | | [**v1.2**](/guides/migration/versions/upgrading-to-v1.2) | Jul 26, 2022 | End of Life* ⚠️ | Jul 26, 2023 | | [**v1.1**](/guides/migration/versions/upgrading-to-v1.1) ⚠️ | Apr 28, 2022 | Deprecated ⛔️ | Deprecated ⛔️ | | [**v1.0**](/guides/migration/versions/upgrading-to-v1.0) ⚠️ | Dec 3, 2021 | Deprecated ⛔️ | Deprecated ⛔️ | diff --git a/website/snippets/quickstarts/schedule-a-job.md b/website/snippets/quickstarts/schedule-a-job.md index 59d428bdfaa..ab8f4350dbf 100644 --- a/website/snippets/quickstarts/schedule-a-job.md +++ b/website/snippets/quickstarts/schedule-a-job.md @@ -24,15 +24,16 @@ Jobs are a set of dbt commands that you want to run on a schedule. For example, As the `jaffle_shop` business gains more customers, and those customers create more orders, you will see more records added to your source data. 
Because you materialized the `customers` model as a table, you'll need to periodically rebuild your table to ensure that the data stays up-to-date. This update will happen when you run a job. -1. After creating your deployment environment, you should be directed to the page for new environment. If not, select **Deploy** in the upper left, then click **Jobs**. -2. Click **Create one** and provide a name, for example "Production run", and link to the Environment you just created. -3. Scroll down to "Execution Settings" and select **Generate docs on run**. -4. Under "Commands," add this command as part of your job if you don't see them: - * `dbt build` -5. For this exercise, do _not_ set a schedule for your project to run — while your organization's project should run regularly, there's no need to run this example project on a schedule. Scheduling a job is sometimes referred to as _deploying a project_. -6. Select **Save**, then click **Run now** to run your job. -7. Click the run and watch its progress under "Run history." -8. Once the run is complete, click **View Documentation** to see the docs for your project. +1. After creating your deployment environment, you should be directed to the page for a new environment. If not, select **Deploy** in the upper left, then click **Jobs**. +2. Click **Create one** and provide a name, for example, "Production run", and link to the Environment you just created. +3. Scroll down to the **Execution Settings** section. +4. Under **Commands**, add this command as part of your job if you don't see it: + * `dbt build` +5. Select the **Generate docs on run** checkbox to automatically [generate updated project docs](/docs/collaborate/build-and-view-your-docs) each time your job runs. +6. For this exercise, do _not_ set a schedule for your project to run — while your organization's project should run regularly, there's no need to run this example project on a schedule. Scheduling a job is sometimes referred to as _deploying a project_. +7. Select **Save**, then click **Run now** to run your job. +8. Click the run and watch its progress under "Run history." +9. Once the run is complete, click **View Documentation** to see the docs for your project. :::tip Congratulations 🎉! You've just deployed your first dbt project! 
diff --git a/website/src/css/custom.css b/website/src/css/custom.css index 3181738406d..972b86a48ca 100644 --- a/website/src/css/custom.css +++ b/website/src/css/custom.css @@ -58,7 +58,7 @@ --pagination-icon-prev: "\2190"; --filter-brightness-low: 1.1; --filter-brightness-high: 1.5; - + --darkmode-link-color: #1FA4A3; --light-dark-toggle: "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTYiIGhlaWdodD0iMTYiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PHBhdGggZD0iTTQuMzA4IDMuMzg1YzAtMS4xNzguMTczLTIuMzcuNjE1LTMuMzg1QzEuOTgzIDEuMjggMCA0LjI4MiAwIDcuNjkyQTguMzA4IDguMzA4IDAgMCAwIDguMzA4IDE2YzMuNDEgMCA2LjQxMi0xLjk4MyA3LjY5Mi00LjkyMy0xLjAxNS40NDItMi4yMDcuNjE1LTMuMzg1LjYxNWE4LjMwOCA4LjMwOCAwIDAgMS04LjMwNy04LjMwN1oiIGZpbGw9IiM5MkEwQjMiLz48L3N2Zz4="; /* search overrides */ @@ -104,10 +104,10 @@ html[data-theme="dark"] { /* Linked `code` tags visibility adjustment */ html[data-theme=dark] a code { - color: var(--ifm-link-color); + color: var(--darkmode-link-color); } html[data-theme=dark] a code:hover { - color: var(--ifm-link-hover-color);; + color: var(--darkmode-link-color); } /* For /dbt-cloud/api REDOC Page */ @@ -122,11 +122,11 @@ html[data-theme="dark"] .api-content h1 { html[data-theme="dark"] .api-content button, html[data-theme="dark"] .api-content a { - filter: brightness(1.25); + filter: brightness(var(--filter-brightness-low)); } html[data-theme="dark"] .api-content a:hover { - filter: brightness(1.25); + filter: brightness(var(--filter-brightness-low)); } .redoc-wrap .api-content a, @@ -165,8 +165,19 @@ table td { vertical-align: top; } +html[data-theme=dark] main .row .col:first-of-type a:not(.button) { + color: var(--darkmode-link-color); +} + +html[data-theme="dark"] main .row .col:first-of-type a:hover { + filter: brightness(var(--filter-brightness-low)); +} + +html[data-theme="dark"] main .row .col:first-of-type a article * { + color: white; +} + html[data-theme="dark"] table td { - filter: brightness(1.5); color: white; } @@ -668,6 +679,14 @@ i.theme-doc-sidebar-item-category.theme-doc-sidebar-item-category-level-2.menu__ color: var(--ifm-color-gray-900); } +.alert--secondary, +.alert--secondary a, +.alert--secondary svg { + --ifm-alert-background-color: #474748; + color: white !important; + fill: white !important; +} + html[data-theme="dark"] .alert * { --ifm-alert-foreground-color: var(--ifm-color-gray-900); } @@ -683,7 +702,7 @@ html[data-theme="dark"] .alert table { .alert--success a, .alert--danger a, .alert--warning a { - color: var(--ifm-color-gray-900); + color: var(--ifm-color-gray-900) !important; } .linkout { @@ -842,6 +861,14 @@ div .toggle_src-components-faqs-styles-module { gap: 1em; } +html[data-theme="dark"] .pagination-nav a { + color: var(--darkmode-link-color); +} + +html[data-theme="dark"] .pagination-nav a:hover { + filter: brightness(var(--filter-brightness-low)); +} + .pagination-nav__link { padding: 1rem 0; transition: 100ms all ease-in-out; @@ -948,8 +975,13 @@ html[data-theme="dark"] .blog-breadcrumbs a[href="#"] { filter: brightness(var(--filter-brightness-low)); } -html[data-theme="dark"] .blog-breadcrumbs a:not(:last-of-type):after { - color: var(--ifm-link-color); +html[data-theme="dark"] .blog-breadcrumbs a:hover { + filter: brightness(var(--filter-brightness-low)); +} + +html[data-theme="dark"] .blog-breadcrumbs a:not(:last-of-type):after, +html[data-theme="dark"] .blog-breadcrumbs a { + color: var(--darkmode-link-color); } html[data-theme="dark"] .breadcrumbs__item--active .breadcrumbs__link { @@ -993,6 +1025,21 @@ 
article[itemprop="blogPost"] h2 { font-size: 2rem; } +html[data-theme="dark"] article[itemprop="blogPost"] a { + color: var(--darkmode-link-color); +} + +html[data-theme="dark"] article[itemprop="blogPost"] a:hover { + filter: brightness(var(--filter-brightness-low)); +} + +/* Sidebar Nav */ +html[data-theme="dark"] .main-wrapper nav a:hover, +html[data-theme="dark"] .main-wrapper nav a:active { + color: var(--darkmode-link-color) !important; + filter: brightness(var(--filter-brightness-low)); +} + /* footer styles */ .footer { font-weight: var(--ifm-font-weight-narrow); @@ -1053,7 +1100,7 @@ article[itemprop="blogPost"] h2 { /* copyright */ .footer__bottom { text-align: left; - color: var(--color-footer-accent); + color: white; font-size: 0.875rem; } diff --git a/website/static/img/Filtering.png b/website/static/img/Filtering.png index 5a15a59f23e..b05394bd459 100644 Binary files a/website/static/img/Filtering.png and b/website/static/img/Filtering.png differ diff --git a/website/static/img/Paginate.png b/website/static/img/Paginate.png index 84a15732c12..21e2fd138b8 100644 Binary files a/website/static/img/Paginate.png and b/website/static/img/Paginate.png differ diff --git a/website/static/img/api-access-profile.jpg b/website/static/img/api-access-profile.jpg new file mode 100644 index 00000000000..36ffd4beda8 Binary files /dev/null and b/website/static/img/api-access-profile.jpg differ diff --git a/website/static/img/api-access-profile.png b/website/static/img/api-access-profile.png deleted file mode 100644 index deade9f2135..00000000000 Binary files a/website/static/img/api-access-profile.png and /dev/null differ diff --git a/website/static/img/dbt-cloud-project-setup-flow-next.png b/website/static/img/dbt-cloud-project-setup-flow-next.png index 660e8ae446a..92f46bccd0a 100644 Binary files a/website/static/img/dbt-cloud-project-setup-flow-next.png and b/website/static/img/dbt-cloud-project-setup-flow-next.png differ diff --git a/website/static/img/delete_projects_from_dbt_cloud_20221023.gif b/website/static/img/delete_projects_from_dbt_cloud_20221023.gif index 246c912c55b..b579556d457 100644 Binary files a/website/static/img/delete_projects_from_dbt_cloud_20221023.gif and b/website/static/img/delete_projects_from_dbt_cloud_20221023.gif differ diff --git a/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-child.png b/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-child.png new file mode 100644 index 00000000000..666db3384fa Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-child.png differ diff --git a/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-parent.png b/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-parent.png new file mode 100644 index 00000000000..ee5d19de369 Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-parent.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/dbt-cloud-enterprise/DBX-auth/dbt-databricks-oauth-user.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/dbt-cloud-enterprise/DBX-auth/dbt-databricks-oauth-user.png new file mode 100644 index 00000000000..aecf99d726a Binary files /dev/null and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/dbt-cloud-enterprise/DBX-auth/dbt-databricks-oauth-user.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/dbt-cloud-enterprise/DBX-auth/dbt-databricks-oauth.png 
b/website/static/img/docs/dbt-cloud/using-dbt-cloud/dbt-cloud-enterprise/DBX-auth/dbt-databricks-oauth.png new file mode 100644 index 00000000000..bb32fab2afb Binary files /dev/null and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/dbt-cloud-enterprise/DBX-auth/dbt-databricks-oauth.png differ diff --git a/website/static/img/docs/deploy/native-retry.gif b/website/static/img/docs/deploy/native-retry.gif new file mode 100644 index 00000000000..020a9958fc5 Binary files /dev/null and b/website/static/img/docs/deploy/native-retry.gif differ diff --git a/website/static/img/icons/materialize.svg b/website/static/img/icons/materialize.svg new file mode 100644 index 00000000000..92f693cd94f --- /dev/null +++ b/website/static/img/icons/materialize.svg @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + diff --git a/website/static/img/icons/oracle.svg b/website/static/img/icons/oracle.svg new file mode 100644 index 00000000000..6868dea2eb3 --- /dev/null +++ b/website/static/img/icons/oracle.svg @@ -0,0 +1,47 @@ + + + + + \ No newline at end of file diff --git a/website/static/img/icons/white/materialize.svg b/website/static/img/icons/white/materialize.svg new file mode 100644 index 00000000000..92f693cd94f --- /dev/null +++ b/website/static/img/icons/white/materialize.svg @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + diff --git a/website/static/img/icons/white/oracle.svg b/website/static/img/icons/white/oracle.svg new file mode 100644 index 00000000000..6868dea2eb3 --- /dev/null +++ b/website/static/img/icons/white/oracle.svg @@ -0,0 +1,47 @@ + + + + + \ No newline at end of file diff --git a/website/static/img/node_color_example.png b/website/static/img/node_color_example.png index 83b26f5735a..a1a62742ca0 100644 Binary files a/website/static/img/node_color_example.png and b/website/static/img/node_color_example.png differ diff --git a/website/static/img/sample_email_data.png b/website/static/img/sample_email_data.png deleted file mode 100644 index 7224d42e60b..00000000000 Binary files a/website/static/img/sample_email_data.png and /dev/null differ diff --git a/website/vercel.json b/website/vercel.json index c5fb0638fba..a9187933980 100644 --- a/website/vercel.json +++ b/website/vercel.json @@ -2,6 +2,16 @@ "cleanUrls": true, "trailingSlash": false, "redirects": [ + { + "source": "/faqs/models/reference-models-in-another-project", + "destination": "/docs/collaborate/govern/project-dependencies", + "permanent": true + }, + { + "source": "/faqs/Models/reference-models-in-another-project", + "destination": "/docs/collaborate/govern/project-dependencies", + "permanent": true + }, { "source": "/docs/deploy/job-triggers", "destination": "/docs/deploy/deploy-jobs", @@ -3991,6 +4001,11 @@ "source": "/docs/dbt-cloud/on-premises/upgrading-kots", "destination": "/docs/deploy/single-tenant", "permanent": true + }, + { + "source": "/reference/resource-properties/access", + "destination": "/reference/resource-configs/access", + "permanent": true } ] }
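Once a deploy is live, redirect rules like the new `access` one can be spot-checked from a terminal. A quick sketch, assuming the production site is served at `docs.getdbt.com`:

```shell
# Request only the headers and confirm the old URL points at the new config page
curl -sI https://docs.getdbt.com/reference/resource-properties/access | grep -iE "^(HTTP|location)"
```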