diff --git a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-9-conclusion.md b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-9-conclusion.md index fa7e2abfaf2..b5490332cd7 100644 --- a/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-9-conclusion.md +++ b/website/docs/best-practices/how-we-build-our-metrics/semantic-layer-9-conclusion.md @@ -28,6 +28,7 @@ pagination_next: null - 🗺️ Use these best practices to map out your team's plan to **incrementally adopt the Semantic Layer**. - 🤗 Get involved in the community and ask questions, **help craft best practices**, and share your progress in building a dbt Semantic Layer. +- [Validate semantic nodes in CI](/docs/deploy/ci-jobs#semantic-validations-in-ci) to ensure code changes made to dbt models don't break these metrics. The dbt Semantic Layer is the biggest paradigm shift thus far in the young practice of analytics engineering. It's ready to provide value right away, but is most impactful if you move your project towards increasing normalization, and allow MetricFlow to do the denormalization for you with maximum dimensionality. 
diff --git a/website/docs/docs/build/saved-queries.md b/website/docs/docs/build/saved-queries.md index 6261a56ed08..4ef63a637be 100644 --- a/website/docs/docs/build/saved-queries.md +++ b/website/docs/docs/build/saved-queries.md @@ -230,5 +230,5 @@ To include all saved queries in the dbt build run, use the [`--resource-type` fl ## Related docs - +- [Validate semantic nodes in a CI job](/docs/deploy/ci-jobs#semantic-validations-in-ci) - Configure [caching](/docs/use-dbt-semantic-layer/sl-cache) diff --git a/website/docs/docs/build/unit-tests.md b/website/docs/docs/build/unit-tests.md index dcd7e6d282d..3d86b8be0fc 100644 --- a/website/docs/docs/build/unit-tests.md +++ b/website/docs/docs/build/unit-tests.md @@ -51,7 +51,7 @@ You should unit test a model: dbt Labs strongly recommends only running unit tests in development or CI environments. Since the inputs of the unit tests are static, there's no need to use additional compute cycles running them in production. Use them in development for a test-driven approach and CI to ensure changes don't break them. -Use the [resource type](/reference/global-configs/resource-type) flag `--exclude-resource-type` or the `DBT_EXCLUDE_RESOURCE_TYPE` environment variable to exclude unit tests from your production builds and save compute. +Use the [resource type](/reference/global-configs/resource-type) flag `--exclude-resource-type` or the `DBT_EXCLUDE_RESOURCE_TYPES` environment variable to exclude unit tests from your production builds and save compute. 
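As a sketch of the exclusion described above, a production job step might look like the following (the flag and environment variable names come from the resource-type docs; the job step itself is illustrative):

```bash
# Skip unit tests in production builds to save compute
dbt build --exclude-resource-type unit_test

# Or set it once for the environment instead of per command
DBT_EXCLUDE_RESOURCE_TYPES=unit_test dbt build
```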
## Unit testing a model diff --git a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md index 82c1f277edb..23b6c8ca906 100644 --- a/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md +++ b/website/docs/docs/cloud/about-cloud/regions-ip-addresses.md @@ -14,6 +14,7 @@ dbt Cloud is [hosted](/docs/cloud/about-cloud/architecture) in multiple regions |--------|----------|------------|--------------|----------------|-----------|-----------------| | North America [^1] | AWS us-east-1 (N. Virginia) | **Multi-tenant:** cloud.getdbt.com
**Cell based:** ACCOUNT_PREFIX.us1.dbt.com | 52.45.144.63
54.81.134.249
52.22.161.231
52.3.77.232
3.214.191.130
34.233.79.135 | ✅ | ✅ | ✅ | | EMEA [^1] | AWS eu-central-1 (Frankfurt) | emea.dbt.com | 3.123.45.39
3.126.140.248
3.72.153.148 | ❌ | ❌ | ✅ | +| EMEA [^1] | Azure
North Europe (Ireland) | **Cell based:** ACCOUNT_PREFIX.eu2.dbt.com (beta invite only) | 20.13.190.192/26 | ❌ | ❌ | ✅ | | APAC [^1] | AWS ap-southeast-2 (Sydney)| au.dbt.com | 52.65.89.235
3.106.40.33
13.239.155.206
| ❌ | ❌ | ✅ | | Virtual Private dbt or Single tenant | Customized | Customized | Ask [Support](/community/resources/getting-help#dbt-cloud-support) for your IPs | ❌ | ❌ | ✅ | diff --git a/website/docs/docs/cloud/manage-access/audit-log.md b/website/docs/docs/cloud/manage-access/audit-log.md index d94e147b501..0abf54ff991 100644 --- a/website/docs/docs/cloud/manage-access/audit-log.md +++ b/website/docs/docs/cloud/manage-access/audit-log.md @@ -173,10 +173,3 @@ You can use the audit log to export all historical audit results for security, c - **For events beyond 90 days** — Select **Export All**. The Account Admin will receive an email link to download a CSV file of all the events that occurred in your organization. - -### Azure Single-tenant - -For users deployed in [Azure single tenant](/docs/cloud/about-cloud/tenancy), while the **Export All** button isn't available, you can conveniently use specific APIs to access all events: - -- [Get recent audit log events CSV](/dbt-cloud/api-v3#/operations/Get%20Recent%20Audit%20Log%20Events%20CSV) — This API returns all events in a single CSV without pagination. -- [List recent audit log events](/dbt-cloud/api-v3#/operations/List%20Recent%20Audit%20Log%20Events) — This API returns a limited number of events at a time, which means you will need to paginate the results. diff --git a/website/docs/docs/collaborate/explore-multiple-projects.md b/website/docs/docs/collaborate/explore-multiple-projects.md index a37d67d058b..125d284a9a5 100644 --- a/website/docs/docs/collaborate/explore-multiple-projects.md +++ b/website/docs/docs/collaborate/explore-multiple-projects.md @@ -4,31 +4,46 @@ sidebar_label: "Explore multiple projects" description: "Learn about project-level lineage in dbt Explorer and its uses." --- -You can also view all the different projects and public models in the account, where the public models are defined, and how they are used to gain a better understanding about your cross-project resources.
+View all the projects and public models in your account (where public models are defined) and gain a better understanding of your cross-project resources and how they're used. -The resource-level lineage graph for a given project displays the cross-project relationships in the DAG. The different icons indicate whether you’re looking at an upstream producer project (parent) or a downstream consumer project (child). +The resource-level lineage graph for a project displays the cross-project relationships in the DAG, with a **PRJ** icon indicating whether or not it's a project resource. That icon is located to the left side of the node name. -When you view an upstream (parent) project, its public models display a counter icon in the upper right corner indicating how many downstream (child) projects depend on them. Selecting a model reveals the lineage indicating the projects dependent on that model. These counts include all projects listing the upstream one as a dependency in its `dependencies.yml`, even without a direct `{{ ref() }}`. Selecting a project node from a public model opens its detailed lineage graph, which is subject to your [permission](/docs/cloud/manage-access/enterprise-permissions). +To view the project-level lineage graph, click the **View lineage** icon in the upper right corner from the main overview page: +- This view displays all the projects in your account and their relationships. +- Viewing an upstream (parent) project displays the downstream (child) projects that depend on it. +- Selecting a model reveals its dependent projects in the lineage. +- Click on an upstream (parent) project to view the other projects that reference it in the **Relationships** tab, showing the number of downstream (child) projects that depend on them. + - This includes all projects listing the upstream one as a dependency in its `dependencies.yml` file, even without a direct `{{ ref() }}`. 
+- Selecting a project node from a public model opens its detailed lineage graph if you have the [permissions](/docs/cloud/manage-access/enterprise-permissions) to do so. - + -When viewing a downstream (child) project that imports and refs public models from upstream (parent) projects, public models will show up in the lineage graph and display an icon on the graph edge that indicates what the relationship is to a model from another project. Hovering over this icon indicates the specific dbt Cloud project that produces that model. Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, which is subject to your permissions. +When viewing a downstream (child) project that imports and refs public models from upstream (parent) projects: +- Public models will show up in the lineage graph and you can click on them to view the model details. +- Clicking on a model opens a side panel containing general information about the model, such as the specific dbt Cloud project that produces that model, description, package, and more. +- Double-clicking on a model from another project opens the resource-level lineage graph of the parent project, if you have the permissions to do so. - - + ## Explore the project-level lineage graph -For cross-project collaboration, you can interact with the DAG in all the same ways as described in [Explore your project's lineage](/docs/collaborate/explore-projects#project-lineage) but you can also interact with it at the project level and view the details. +For cross-project collaboration, you can interact with the DAG in all the same ways as described in [Explore your project's lineage](/docs/collaborate/explore-projects#project-lineage) but you can also interact with it at the project level and view the details. + +If you have permissions for a project in the account, you can view all public models used across the entire account. 
However, you can only view full public model details and private models if you have permissions for the specific project where those models are defined. -To get a list view of all the projects, select the account name at the top of the **Explore** page near the navigation bar. This view includes a public model list, project list, and a search bar for project searches. You can also view the project-level lineage graph by clicking the Lineage view icon in the page's upper right corner. +To view all the projects in your account (displayed as a lineage graph or list view): +- Navigate to the top left section of the **Explore** page, near the navigation bar. +- Hover over the project name and select the account name. This takes you to an account-level lineage graph page, where you can view all the projects in the account, including dependencies and relationships between different projects. +- Click the **List view** icon in the page's upper right corner to see a list view of all the projects in the account. +- The list view page displays a public model list, project list, and a search bar for project searches. +- Click the **Lineage view** icon in the page's upper right corner to view the account-level lineage graph. -If you have permissions for a project in the account, you can view all public models used across the entire account. However, you can only view full public model details and private models if you have permissions for a project where the models are defined. - + -From the project-level lineage graph, you can: +From the account-level lineage graph, you can: -- Click the Lineage view icon (in the graph’s upper right corner) to view the cross-project lineage graph. -- Click the List view icon (in the graph’s upper right corner) to view the project list. +- Click the **Lineage view** icon (in the graph’s upper right corner) to view the cross-project lineage graph. +- Click the **List view** icon (in the graph’s upper right corner) to view the project list.
- Select a project from the **Projects** tab to switch to that project’s main **Explore** page. - Select a model from the **Public Models** tab to view the [model’s details page](/docs/collaborate/explore-projects#view-resource-details). - Perform searches on your projects with the search bar. @@ -40,6 +55,6 @@ When you select a project node in the graph, a project details panel opens on th - View a list of its public models, if any. - View a list of other projects that uses the project, if any. - Click **Open Project Lineage** to switch to the project’s lineage graph. -- Click the Share icon to copy the project panel link to your clipboard so you can share the graph with someone. +- Click the **Share** icon to copy the project panel link to your clipboard so you can share the graph with someone. - + diff --git a/website/docs/docs/dbt-cloud-apis/authentication.md b/website/docs/docs/dbt-cloud-apis/authentication.md index 4d7c4d4c06a..8729cc0641d 100644 --- a/website/docs/docs/dbt-cloud-apis/authentication.md +++ b/website/docs/docs/dbt-cloud-apis/authentication.md @@ -33,7 +33,7 @@ pagination_prev: null You should use service tokens broadly for any production workflow where you need a service account. You should use PATs only for developmental workflows _or_ dbt Cloud client workflows that require user context. The following examples show you when to use a personal access token (PAT) or a service token: -* **Connecting a partner integration to dbt Cloud** — Some examples include Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a user API key for these integrations today. 
+* **Connecting a partner integration to dbt Cloud** — Some examples include the [dbt Semantic Layer Google Sheets integration](/docs/cloud-integrations/avail-sl-integrations), Hightouch, Datafold, a custom app you’ve created, etc. These types of integrations should use a service token instead of a PAT because service tokens give you visibility, and you can scope them to only what the integration needs and ensure the least privilege. We highly recommend switching to a service token if you’re using a user API key for these integrations today. * **Production Terraform** — Use a service token since this is a production workflow and is acting as a service account and not a user account. -* **Cloud CLI and Semantic Layer Sheets Integration** — Use a PAT since both the dbt Cloud CLI and Semantic Layer Google Sheets integrations work within the context of a user (the user is making the requests and has to operate within the context of their user account). -* **Testing a custom script and staging Terraform or Postman** — We recommend using a PAT as this is a developmental workflow and is scoped to the user making the changes. When you push this script or Terraform into production, use a service token instead. \ No newline at end of file +* **Cloud CLI** — Use a PAT since the dbt Cloud CLI works within the context of a user (the user is making the requests and has to operate within the context of their user account). +* **Testing a custom script and staging Terraform or Postman** — We recommend using a PAT as this is a developmental workflow and is scoped to the user making the changes. When you push this script or Terraform into production, use a service token instead. 
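For instance, a production integration could authenticate to the dbt Cloud Administrative API with a service token roughly like this (a sketch only; the account ID, job ID, and run cause are placeholders, and the endpoint shown is the v2 job-trigger route):

```bash
# Trigger a job run as a service account (not tied to any one user)
curl -X POST \
  -H "Authorization: Token $DBT_CLOUD_SERVICE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"cause": "Triggered by integration"}' \
  "https://cloud.getdbt.com/api/v2/accounts/<account_id>/jobs/<job_id>/run/"
```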
diff --git a/website/docs/docs/dbt-cloud-apis/service-tokens.md b/website/docs/docs/dbt-cloud-apis/service-tokens.md index 615e8a73536..26b0e58caa2 100644 --- a/website/docs/docs/dbt-cloud-apis/service-tokens.md +++ b/website/docs/docs/dbt-cloud-apis/service-tokens.md @@ -17,7 +17,7 @@ Service account tokens enable you to securely authenticate with the dbt Cloud AP You can use service account tokens for system-level integrations that do not run on behalf of any one user. Assign any permission sets available in dbt Cloud to your service account token, which can vary slightly depending on your plan: * Enterprise plans can apply any permission sets available to service tokens. -* Team plans can apply Account Admin, Member, Job Admin, Read-Only, and Metadata permissions set to service tokens. +* Team plans can apply Account Admin, Member, Job Admin, Read-Only, Metadata, and Semantic Layer permissions set to service tokens. You can assign as many permission sets as needed to one token. For more on permissions sets, see "[Enterprise Permissions](/docs/cloud/manage-access/enterprise-permissions)." diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md index 2fc0f49673a..ea3f1646ca8 100644 --- a/website/docs/docs/dbt-versions/release-notes.md +++ b/website/docs/docs/dbt-versions/release-notes.md @@ -18,6 +18,9 @@ Release notes are grouped by month for both multi-tenant and virtual private clo [^*] The official release date for this new format of release notes is May 15th, 2024. Historical release notes for prior dates may not reflect all available features released earlier this year or their tenancy availability. +## July 2024 +- **New**: Introduced Semantic validations in CI pipelines. Automatically test your semantic nodes (metrics, semantic models, and saved queries) during code reviews by adding warehouse validation checks in your CI job using the `dbt sl validate` command. 
You can also validate modified semantic nodes to guarantee code changes made to dbt models don't break these metrics. Refer to [Semantic validations in CI](/docs/deploy/ci-jobs#semantic-validations-in-ci) to learn about the additional commands and use cases. + ## June 2024 - **New:** Introduced new granularity support for cumulative metrics in MetricFlow. Granularity options for cumulative metrics are slightly different than granularity for other metric types. For other metrics, we use the `date_trunc` function to implement granularity. However, because cumulative metrics are non-additive (values can't be added up), we can't use the `date_trunc` function to change their time grain granularity. diff --git a/website/docs/docs/dbt-versions/versionless-cloud.md b/website/docs/docs/dbt-versions/versionless-cloud.md new file mode 100644 index 00000000000..ae92e0d6a0a --- /dev/null +++ b/website/docs/docs/dbt-versions/versionless-cloud.md @@ -0,0 +1,67 @@ +--- +title: "Upgrade to \"Keep on latest version\" in dbt Cloud" +sidebar_label: "Upgrade to \"Keep on latest version\" " +description: "Learn how to go versionless in dbt Cloud. You never have to perform an upgrade again. Plus, you'll be able to access new features and enhancements as soon as they become available. " +--- + +dbt Cloud is going versionless. Soon, your environments and jobs will always run on the latest version of dbt. + +This will require you to make one final update to your current jobs and environments. When that's done, you'll never have to think about managing, coordinating, or upgrading dbt versions again. + +Move your environments and jobs to "Keep on latest version" to get all the functionality in the latest versions of dbt Core — and more! — along with access to the new features and fixes as soon as they’re released. 
+ +## Tips for upgrading {#upgrade-tips} + +If you regularly develop your dbt project in dbt Cloud and this is your first time trying “Keep on latest version,” dbt Labs recommends that you start in development because it will be the fastest for investigation and iteration. [Override your dbt version in development](/docs/dbt-versions/upgrade-dbt-version-in-cloud#override-dbt-version). Then, launch the IDE or Cloud CLI and do your development work as usual. Everything should work as you expect. + +If you do see something unexpected or surprising, revert back to the previous version and record the differences you observed. [Contact dbt Cloud support](/docs/dbt-support#dbt-cloud-support) with your findings for a more detailed investigation. + +Next, we recommend that you try upgrading your project’s [deployment environment](/docs/dbt-versions/upgrade-dbt-version-in-cloud#environments). If your project has a [staging deployment environment](/docs/deploy/deploy-environments#staging-environment), upgrade and try working with it for a few days before you proceed with upgrading the production environment. + +If your organization has multiple dbt projects, we recommend starting your upgrade with projects that are smaller, newer, or more familiar for your team. That way, if you do encounter any issues, it'll be easier and faster to troubleshoot those before proceeding to upgrade larger or more complex projects. + +## Considerations + +The following is our guidance on some important considerations regarding dbt projects as part of the upgrade. + +To learn more about how dbt Labs deploys stable dbt upgrades in a safe manner to dbt Cloud, we recommend that you read our blog post [How we're making sure you can confidently "Keep on latest version" in dbt Cloud](https://docs.getdbt.com/blog/latest-dbt-stability) for details. 
+ + + +If you're running dbt version 1.5 or older, please know that your version of dbt Core has reached [end-of-life (EOL)](/docs/dbt-versions/core#eol-version-support) and is no longer supported. We strongly recommend that you update to a newer version as soon as reasonably possible. In the coming months, we're planning to automatically migrate jobs and environments on these older, unsupported versions. + + + + + +The legacy dbt Semantic Layer was deprecated in the second half of 2023. We recommend that you refer to the [Legacy dbt Semantic Layer migration guide](/guides/sl-migration?step=1) for more information. + + + + + +When we talk about _latest version_, we’re referring to the underlying runtime for dbt, not the versions of packages you’re installing. Our continuous release for dbt includes testing against several popular dbt packages. This ensures that updates we make to dbt-core, adapters, or anywhere else are compatible with the code in those packages. + +If a new version of a dbt package includes a breaking change (for example, a change to one of the macros in `dbt_utils`), you don’t have to immediately use the new version. In your `packages` configuration (in `dependencies.yml` or `packages.yml`), you can still specify which versions or version ranges of packages you want dbt to install. If you're not already doing so, we strongly recommend [checking `package-lock.yml` into version control](/reference/commands/deps#predictable-package-installs) for predictable package installs in deployment environments and a clear change history whenever you install upgrades. + +If you upgrade to “Keep on latest version” and immediately see something that breaks, please [contact support](/docs/dbt-support#dbt-cloud-support) and, in the meantime, downgrade back to v1.7. 
+ +If you’re already on “Keep on latest version” and you observe a breaking change (like something worked yesterday, but today it isn't working, or works in a surprising/different way), please [contact support](/docs/dbt-support#dbt-cloud-support) immediately. Depending on your contracted support agreement, the dbt Labs team will respond within our SLA time and we would seek to roll back the change and/or roll out a fix (just as we would for any other part of dbt Cloud). This is the same whether or not the root cause of the breaking change is in the project code or in the code of a package. + +If the package you’ve installed relies on _undocumented_ functionality of dbt, it doesn't have the same guarantees as functionality that we’ve documented and tested. However, we will still do our best to avoid breaking them. + + + + + +No. Going forward, “Keep on latest version” will be how all customers are going to access new functionality and ongoing support in dbt Cloud. We believe this is the best way for us to offer a reliable, stable, and secure runtime for dbt with continuous and consistent updates. + +In 2023 (and earlier), customers were expected to manage their own upgrades by selecting dbt Core versions, up to and including dbt Core v1.7, which was released in October 2023. (Way back in 2021, dbt Cloud customers would pick specific _patch releases_ of dbt Core, such as upgrading from `v0.21.0` to `v0.21.1`. We’ve come a long way since then!) + +In 2024, we've changed the way that new dbt functionality is made available for dbt Cloud customers: continuously. Behavior or breaking changes are gated behind opt-in flags. Users don't need to spend valuable time managing their own upgrades. This is called "Keep on latest version" and it's required for accessing any new functionality that we've put out in 2024+. + +We will absolutely continue to release new minor versions of dbt Core (OSS), including v1.9 which will be available later this year. 
When we do, it will be a subset of the functionality that's already available to dbt Cloud customers and always after the functionality has been available in dbt Cloud. + + + +If you have comments or concerns, we’re happy to help. If you’re an existing dbt Cloud customer, you may reach out to your account team or [contact support](/docs/dbt-support#dbt-cloud-support). \ No newline at end of file diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index d1cda90119f..a96311a850f 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -10,7 +10,6 @@ You can set up [continuous integration](/docs/deploy/continuous-integration) (CI dbt Labs recommends that you create your CI job in a dedicated dbt Cloud [deployment environment](/docs/deploy/deploy-environments#create-a-deployment-environment) that's connected to a staging database. Having a separate environment dedicated for CI will provide better isolation between your temporary CI schema builds and your production data builds. Additionally, sometimes teams need their CI jobs to be triggered when a PR is made to a branch other than main. If your team maintains a staging branch as part of your release process, having a separate environment will allow you to set a [custom branch](/faqs/environments/custom-branch-settings) and, accordingly, the CI job in that dedicated environment will be triggered only when PRs are made to the specified custom branch. To learn more, refer to [Get started with CI tests](/guides/set-up-ci). - ### Prerequisites - You have a dbt Cloud account. - For the [Concurrent CI checks](/docs/deploy/continuous-integration#concurrent-ci-checks) and [Smart cancellation of stale builds](/docs/deploy/continuous-integration#smart-cancellation) features, your dbt Cloud account must be on the [Team or Enterprise plan](https://www.getdbt.com/pricing/). 
@@ -77,6 +76,92 @@ If you're not using dbt Cloud’s native Git integration with [GitHub](/docs/cl - `non_native_pull_request_id` (for example, BitBucket) - Provide the `git_sha` or `git_branch` to target the correct commit or branch to run the job against. +## Semantic validations in CI + +Automatically test your semantic nodes (metrics, semantic models, and saved queries) during code reviews by adding warehouse validation checks in your CI job, guaranteeing that any code changes made to dbt models don't break these metrics. + +To do this, add the command `dbt sl validate --select state:modified+` in the CI job. This ensures the validation of modified semantic nodes and their downstream dependencies. + +- Testing semantic nodes in a CI job supports deferral and selection of semantic nodes. +- It allows you to catch issues early in the development process and deliver high-quality data to your end users. +- Semantic validation executes an explain query in the data warehouse for semantic nodes to ensure the generated SQL will execute. +- For semantic nodes and models that aren't downstream of modified models, dbt Cloud defers to the production models + +To learn how to set this up, refer to the following steps: + +1. Navigate to the **Job setting** page and click **Edit**. +2. Add the `dbt sl validate --select state:modified+` command under **Commands** in the **Execution settings** section. The command uses state selection and deferral to run validation on any semantic nodes downstream of model changes. To reduce job times, we recommend only running CI on modified semantic models. +3. Click **Save** to save your changes. + +There are additional commands and use cases described in the [next section](#use-cases), such as validating all semantic nodes, validating specific semantic nodes, and so on. + + + +### Use cases + +Use or combine different selectors or commands to validate semantic nodes in your CI job. 
Semantic validations in CI support the following use cases: + + + +To validate semantic nodes that are downstream of a model change, add these two commands in your job's **Execution settings** section: + +```bash +dbt build --select state:modified+ +dbt sl validate --select state:modified+ +``` + +- The first command builds the modified models. +- The second command validates the semantic nodes downstream of the modified models. + +Before running semantic validations, dbt Cloud must build the modified models. This process ensures that downstream semantic nodes are validated using the CI schema through the dbt Semantic Layer API. + +For semantic nodes and models that aren't downstream of modified models, dbt Cloud defers to the production models. + + + + + + + +To only validate modified semantic nodes, use the following command (with [state selection](/reference/node-selection/syntax#stateful-selection)): + +```bash +dbt sl validate --select state:modified+ +``` + + + +This will only validate the modified semantic nodes. It will use the defer state set configured in your orchestration job, deferring to your production models. + + + + + +Use the selector syntax to select the _specific_ semantic node(s) you want to validate: + +```bash +dbt sl validate --select metric:revenue +``` + + + +In this example, the CI job will validate the selected `metric:revenue` semantic node. To select multiple semantic nodes, use the selector syntax: `dbt sl validate --select metric:revenue metric:customers`. + +If you don't specify a selector, dbt Cloud will validate all semantic nodes in your project.
+ + + + + +To validate _all_ semantic nodes in your project, add the following command to defer to your production schema when generating the warehouse validation queries: + + ```bash + dbt sl validate + ``` + + + + ## Troubleshooting diff --git a/website/docs/docs/deploy/continuous-integration.md b/website/docs/docs/deploy/continuous-integration.md index 9d31588c437..bf27f68a863 100644 --- a/website/docs/docs/deploy/continuous-integration.md +++ b/website/docs/docs/deploy/continuous-integration.md @@ -16,9 +16,9 @@ Using CI helps: ## How CI works -When you [set up CI jobs](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt Cloud listens for notification from your Git provider indicating that a new PR has been opened or updated with new commits. When dbt Cloud receives one of these notifications, it enqueues a new run of the CI job. +When you [set up CI jobs](/docs/deploy/ci-jobs#set-up-ci-jobs), dbt Cloud listens for notification from your Git provider indicating that a new PR has been opened or updated with new commits. When dbt Cloud receives one of these notifications, it enqueues a new run of the CI job. -dbt Cloud builds and tests the models affected by the code change in a temporary schema, unique to the PR. This process ensures that the code builds without error and that it matches the expectations as defined by the project's dbt tests. The unique schema name follows the naming convention `dbt_cloud_pr__` (for example, `dbt_cloud_pr_1862_1704`) and can be found in the run details for the given run, as shown in the following image: +dbt Cloud builds and tests models, semantic models, metrics, and saved queries affected by the code change in a temporary schema, unique to the PR. This process ensures that the code builds without error and that it matches the expectations as defined by the project's dbt tests. 
The unique schema name follows the naming convention `dbt_cloud_pr_<job_id>_<pr_id>` (for example, `dbt_cloud_pr_1862_1704`) and can be found in the run details for the given run, as shown in the following image: diff --git a/website/docs/docs/deploy/deploy-environments.md b/website/docs/docs/deploy/deploy-environments.md index 8e25803cced..50d1b7ac99e 100644 --- a/website/docs/docs/deploy/deploy-environments.md +++ b/website/docs/docs/deploy/deploy-environments.md @@ -39,7 +39,9 @@ In dbt Cloud, each project can have one designated deployment environment, which ### Semantic Layer -For Semantic Layer-eligible customers, the next section of environment settings is the Semantic Layer configurations. [The Semantic Layer setup guide](/docs/use-dbt-semantic-layer/setup-sl) has the most up-to-date setup instructions! +For customers using the dbt Semantic Layer, the next section of environment settings is the Semantic Layer configurations. [The Semantic Layer setup guide](/docs/use-dbt-semantic-layer/setup-sl) has the most up-to-date setup instructions. + +You can also leverage the dbt Job scheduler to [validate your semantic nodes in a CI job](/docs/deploy/ci-jobs#semantic-validations-in-ci) to ensure code changes made to dbt models don't break these metrics. ## Staging environment diff --git a/website/docs/docs/deploy/job-commands.md b/website/docs/docs/deploy/job-commands.md index 8117178b2d6..2ecdc8bcd05 100644 --- a/website/docs/docs/deploy/job-commands.md +++ b/website/docs/docs/deploy/job-commands.md @@ -28,7 +28,6 @@ Every job invocation automatically includes the [`dbt deps`](/reference/commands **Job outcome** — During a job run, the built-in commands are "chained" together. This means if one of the run steps in the chain fails, then the next commands aren't executed, and the entire job fails with an "Error" job status. - ### Checkbox commands @@ -49,9 +48,8 @@ You can add or remove as many dbt commands as necessary for every job.
 However, use [selectors](/reference/node-selection/syntax) as a powerful way to select and execute portions of your project in a job run. For example, to run tests for one_specific_model, use the selector: `dbt test --select one_specific_model`. The job will still run if a selector doesn't match any models.
 :::

-
-
-**Job outcome** — During a job run, the commands are "chained" together and executed as run steps. If one of the run steps in the chain fails, then the subsequent steps aren't executed, and the job will fail.
+
+**Job outcome** — During a job run, the commands are "chained" together and executed as run steps. If one of the run steps in the chain fails, then the subsequent steps aren't executed, and the job will fail. In the following example image, the first four run steps are successful. However, if the fifth run step (`dbt run --select state:modified+ --full-refresh --fail-fast`) fails, then the next run steps aren't executed, and the entire job fails. The failed job returns a non-zero [exit code](/reference/exit-codes) and "Error" job status:

diff --git a/website/docs/docs/deploy/job-scheduler.md b/website/docs/docs/deploy/job-scheduler.md
index 5bb2083d3da..7d45fddc3f6 100644
--- a/website/docs/docs/deploy/job-scheduler.md
+++ b/website/docs/docs/deploy/job-scheduler.md
@@ -102,6 +102,10 @@ Example of deactivation banner on job's page:

+## FAQs
+
+
+
 ## Related docs
 - [dbt Cloud architecture](/docs/cloud/about-cloud/architecture#dbt-cloud-features-architecture)
 - [Job commands](/docs/deploy/job-commands)

diff --git a/website/docs/docs/use-dbt-semantic-layer/exports.md b/website/docs/docs/use-dbt-semantic-layer/exports.md
index 9d2705a1b88..a563df40ef7 100644
--- a/website/docs/docs/use-dbt-semantic-layer/exports.md
+++ b/website/docs/docs/use-dbt-semantic-layer/exports.md
@@ -203,5 +203,6 @@ To include all saved queries in the dbt build run, use the [`--resource-type` fl

 ## Related docs

+- [Validate semantic nodes in a CI job](/docs/deploy/ci-jobs#semantic-validations-in-ci)
 - Configure [caching](/docs/use-dbt-semantic-layer/sl-cache)
 - [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)

diff --git a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
index 7a42c46f5c5..ae185a0343a 100644
--- a/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/setup-sl.md
@@ -43,10 +43,16 @@ import SlSetUp from '/snippets/_new-sl-setup.md';
 8. You’re done 🎉! The semantic layer is now enabled for your project. -->

+## Next steps
+
+- Now that you've set up the dbt Semantic Layer, start querying your metrics with the [available integrations](/docs/cloud-integrations/avail-sl-integrations).
+- [Optimize querying performance](/docs/use-dbt-semantic-layer/sl-cache) using declarative caching.
+- [Validate semantic nodes in CI](/docs/deploy/ci-jobs#semantic-validations-in-ci) to ensure code changes made to dbt models don't break these metrics.
+- If you haven't already, learn how to [build your metrics and semantic models](/docs/build/build-metrics-intro) in your development tool of choice.
+
 ## Related docs

 - [Build your metrics](/docs/build/build-metrics-intro)
-- [Available integrations](/docs/cloud-integrations/avail-sl-integrations)
 - [Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview)
 - [Get started with the dbt Semantic Layer](/guides/sl-snowflake-qs)
 - [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)

diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-cache.md b/website/docs/docs/use-dbt-semantic-layer/sl-cache.md
index 12f5c176e9e..5f1460b07f5 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-cache.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-cache.md
@@ -132,6 +132,8 @@ If an upstream model has data in it that was created after the cache was created

 You can manually invalidate the cache through the [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) using the `InvalidateCacheResult` field.

+
 ## Related docs
+- [Validate semantic nodes in CI](/docs/deploy/ci-jobs#semantic-validations-in-ci)
 - [Saved queries](/docs/build/saved-queries)
 - [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)

diff --git a/website/docs/docs/use-dbt-semantic-layer/sl-faqs.md b/website/docs/docs/use-dbt-semantic-layer/sl-faqs.md
index b1fa516cf61..fb7ed58ba0d 100644
--- a/website/docs/docs/use-dbt-semantic-layer/sl-faqs.md
+++ b/website/docs/docs/use-dbt-semantic-layer/sl-faqs.md
@@ -226,6 +226,15 @@ Yes, we approach this by specifying a [dimension](/docs/build/dimensions) that a

 Yes, while [entities](/docs/build/entities) must be defined under “entities,” they can be queried like dimensions in downstream tools. Additionally, if the entity isn't used to perform joins across your semantic models, you may optionally define it as a dimension.

+
+
+Yes! You can validate your semantic nodes (semantic models, metrics, saved queries) in a few ways:
+
+- [Query and validate your metrics](/docs/build/metricflow-commands) in your development tool before submitting your code changes.
+- [Validate semantic nodes in CI](/docs/deploy/ci-jobs#semantic-validations-in-ci) to ensure code changes made to dbt models don't break these metrics.
+
+
+
 ## Available integrations

diff --git a/website/docs/faqs/Troubleshooting/job-memory-limits.md b/website/docs/faqs/Troubleshooting/job-memory-limits.md
new file mode 100644
index 00000000000..cfbecf9e25d
--- /dev/null
+++ b/website/docs/faqs/Troubleshooting/job-memory-limits.md
@@ -0,0 +1,26 @@
+---
+title: "I'm receiving a 'This run exceeded your account's run memory limits' error in my failed job"
+description: "Use incremental models or optimize queries for job failures due to exceeded memory limits."
+sidebar_label: 'Job failures due to exceeded memory limits'
+---
+
+If you're receiving a `This run exceeded your account's run memory limits` error in your failed job, it means that the job exceeded the [memory limits](/docs/deploy/job-scheduler#job-memory) set for your account. All dbt Cloud accounts have a pod memory of 600MiB, and memory limits apply on a per-run basis. They're typically influenced by the amount of result data that dbt has to ingest and process, which is small but can become bloated unexpectedly by project design choices.
+
+## Common reasons
+
+Some common reasons for higher memory usage are:
+
+- `dbt run`/`dbt build`: Macros that capture large result sets from `run_query` may not all be necessary and may be memory inefficient.
+- `dbt docs generate`: Source or model schemas with large numbers of tables (even if those tables aren't all used by dbt) cause the ingestion of very large results for catalog queries.
+
+## Resolution
+
+Try the following to resolve this:
+
+1. **Use incremental models**: Try using [incremental models](/docs/build/incremental-models-overview) to reduce the amount of data being processed in each run. Incremental models only process new or updated data, which can help reduce the memory usage of your jobs.
+2. **Refactor your data model**: Review your data models to see if there are any opportunities to optimize or refactor them. For example, you can try to reduce the number of columns being selected, use `where` clauses to filter data early in the query, or use `limit` clauses to reduce the amount of data being processed.
+
+If you've tried the earlier suggestions and are still experiencing failed job runs with this error about hitting the memory limits of your account, please [reach out to support](mailto:support@getdbt.com) and we can try increasing your account's memory. We're happy to help!
+
+## Additional resources
+- [Blog post on how we shaved 90 mins off](https://docs.getdbt.com/blog/how-we-shaved-90-minutes-off-model)

diff --git a/website/docs/guides/set-up-ci.md b/website/docs/guides/set-up-ci.md
index 39f730f669d..3c1ece9451d 100644
--- a/website/docs/guides/set-up-ci.md
+++ b/website/docs/guides/set-up-ci.md
@@ -54,6 +54,10 @@ In the Execution Settings, your command will be preset to `dbt build --select st

 To be able to find modified nodes, dbt needs to have something to compare against. dbt Cloud uses the last successful run of any job in your Production environment as its [comparison state](/reference/node-selection/syntax#about-node-selection). As long as you identified your Production environment in Step 2, you won't need to touch this. If you didn't, pick the right environment from the dropdown.

+:::info Use CI to test your metrics
+If you've [built semantic nodes](/docs/build/build-metrics-intro) in your dbt project, you can [validate them in a CI job](/docs/deploy/ci-jobs#semantic-validations-in-ci) to ensure code changes made to dbt models don't break these metrics.
+:::
+
 ### 3. Test your process

 That's it! There are other steps you can take to be even more confident in your work, such as validating your structure follows best practices and linting your code. For more information, refer to [Get started with Continuous Integration tests](/guides/set-up-ci).
@@ -356,4 +360,4 @@ When the Release Manager is ready to cut a new release, they will manually open

 To test your new flow, create a new branch in the dbt Cloud IDE then add a new file or modify an existing one. Commit it, then create a new Pull Request (not a draft) against your `qa` branch. You'll see the integration tests begin to run. Once they complete, manually create a PR against `main`, and within a few seconds you’ll see the tests run again but this time incorporating all changes from all code that hasn't been merged to main yet.
-
\ No newline at end of file
+

diff --git a/website/sidebars.js b/website/sidebars.js
index 03e1ce7852c..07f552088d0 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -716,6 +716,7 @@ const sidebarSettings = {
       link: { type: "doc", id: "docs/dbt-versions/core" },
       items: [
         "docs/dbt-versions/core",
+        "docs/dbt-versions/versionless-cloud",
         "docs/dbt-versions/upgrade-dbt-version-in-cloud",
         "docs/dbt-versions/product-lifecycles",
         "docs/dbt-versions/experimental-features",

diff --git a/website/src/components/expandable/styles.module.css b/website/src/components/expandable/styles.module.css
index 8c37036ad86..fc6f258286b 100644
--- a/website/src/components/expandable/styles.module.css
+++ b/website/src/components/expandable/styles.module.css
@@ -141,3 +141,8 @@
 .expandableContainer {
   margin-bottom: 5px; /* Adjust this value as needed to create space */
 }
+
+.headerText {
+  display: flex;
+  align-items: center;
+}
\ No newline at end of file

diff --git a/website/static/img/docs/collaborate/dbt-explorer/account-level-lineage.gif b/website/static/img/docs/collaborate/dbt-explorer/account-level-lineage.gif
new file mode 100644
index 00000000000..af6937f6d9a
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/account-level-lineage.gif differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/cross-project-child.png b/website/static/img/docs/collaborate/dbt-explorer/cross-project-child.png
new file mode 100644
index 00000000000..13461148465
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/cross-project-child.png differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-child.png b/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-child.png
deleted file mode 100644
index aa2f0d06e00..00000000000
Binary files a/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-child.png and /dev/null differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-parent.png b/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-parent.png
index b667e9fa04f..55dfebf6bed 100644
Binary files a/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-parent.png and b/website/static/img/docs/collaborate/dbt-explorer/cross-project-lineage-parent.png differ
diff --git a/website/static/img/docs/collaborate/dbt-explorer/multi-project-overview.gif b/website/static/img/docs/collaborate/dbt-explorer/multi-project-overview.gif
new file mode 100644
index 00000000000..5283fb19414
Binary files /dev/null and b/website/static/img/docs/collaborate/dbt-explorer/multi-project-overview.gif differ
diff --git a/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-all.jpg b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-all.jpg
new file mode 100644
index 00000000000..072cb0fa133
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-all.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-downstream.jpg b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-downstream.jpg
new file mode 100644
index 00000000000..633df115971
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-downstream.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-modified.jpg b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-modified.jpg
new file mode 100644
index 00000000000..f9be76dbdbb
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-modified.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-select.jpg b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-select.jpg
new file mode 100644
index 00000000000..896829caf1d
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/deployment/ci-dbt-sl-validate-select.jpg differ
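Taken together, the docs changed in this patch describe a CI job that builds the nodes modified in a PR and then validates semantic nodes before merge. A minimal sketch of such a job's run steps, using only the commands quoted in the docs above (`dbt build --select state:modified+` and `dbt sl validate`) — note the `steps` layout is illustrative, not an actual dbt Cloud configuration format:

```yaml
# Hypothetical outline of a CI job's run steps.
# dbt Cloud invokes `dbt deps` automatically before any run steps.
steps:
  - "dbt build --select state:modified+"  # build and test nodes changed in the PR
  - "dbt sl validate"                     # validate all semantic nodes, deferring to production
```

Because run steps are chained, a failure in the build step would skip the semantic validation step and fail the whole job, as described in the job-commands changes above.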