Fix spacing to avoid horizontal scrolling (#5180)
## What are you changing in this pull request and why?

Reason: When guides contain large photos, videos, tables, or code blocks, the content sometimes runs off screen, forcing the user to scroll horizontally. This was more egregious in some guides than others, but it was an all-around negative for the customer experience.

The following screenshots are cropped, but they show the text running off screen on the right-hand side:

<img width="996" alt="Screenshot 2024-03-27 at 4 10 07 PM"
src="https://github.com/dbt-labs/docs.getdbt.com/assets/60105315/bca49f6a-0786-4d24-8dc3-e67773d3c91f">

<img width="1225" alt="Screenshot 2024-03-27 at 4 10 00 PM"
src="https://github.com/dbt-labs/docs.getdbt.com/assets/60105315/20463a07-1751-4c62-8422-b38252e1f73f">

<img width="987" alt="Screenshot 2024-03-27 at 4 09 38 PM"
src="https://github.com/dbt-labs/docs.getdbt.com/assets/60105315/7c410ac4-df89-4190-8f3b-cc19052a9ebc">

Solution: Add a max width to each of the guides, regardless of whether or not they were impacted (future-proofing them in case later iterations add those elements). The width is set to 900px, which brings the text close to the edge but leaves a little buffer. This also prevents code blocks and other elements from extending beyond the border: users will now scroll within the text block itself rather than the whole screen. Pictures and videos are auto-sized so they fit on the screen.
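
A minimal sketch of the pattern as applied in each guide's MDX file (the frontmatter and body text here are placeholders, not from a real guide):

```mdx
---
title: 'Example guide'
---

<div style={{maxWidth: '900px'}}>

## Introduction

Guide prose, images, and code blocks now live inside this wrapper, so wide
elements wrap or scroll within the 900px container instead of stretching
the whole page.

</div>
```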

Limitations:
- Doesn't have an impact on tables. For tables to resize, we're going to need some custom code for our site (which I believe is in the queue).
- The closing `</div>` tag can't be placed directly before snippets that have no headers of their own (the header lives inside the snippet); doing so causes a glitch where the text for those sections won't appear on the screen (despite them still appearing in the menu). This is resolved by moving the closing tag above these sections, as shown in the sketch after this list. It only impacted adapter docs that use the same snippets at the end.
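
Here's a sketch of that workaround (the section text is illustrative; the FAQ and Snippet paths are the ones used in the BigQuery quickstart diff below). The closing tag moves above the trailing FAQ/snippet sections instead of sitting at the end of the file:

```mdx
## Last section with its own header

This content still renders inside the max-width wrapper.

</div>

#### FAQs

<FAQ path="Runs/run-one-model" />

<Snippet path="quickstarts/schedule-a-job" />
```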


## Checklist

- [x] Review the [Content style
guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md)
so my content adheres to these guidelines.
- [x] For [docs
versioning](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#about-versioning),
review how to [version a whole
page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version)
and [version a block of
content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content).
- [x] Add a checklist item for anything that needs to happen before this
PR is merged, such as "needs technical review" or "change base branch."
mirnawong1 authored Mar 28, 2024
2 parents 3965bb7 + 0398bf5 commit 54c1f9b
Showing 35 changed files with 164 additions and 14 deletions.
8 changes: 6 additions & 2 deletions website/docs/guides/adapter-creation.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

Adapters are an essential component of dbt. At their most basic level, they are how dbt connects with the various supported data platforms. At a higher level, dbt Core adapters strive to give analytics engineers more transferable skills as well as standardize how analytics projects are structured. Gone are the days when you have to learn a new language or flavor of SQL when you move to a new job that has a different data platform. That is the power of adapters in dbt Core.
@@ -50,7 +52,6 @@ The outermost layers of a database map roughly to the areas in which the dbt ada
Even amongst ANSI-compliant databases, there are differences in the SQL grammar.
Here are some categories and examples of SQL statements that can be constructed differently:


| Category | Area of differences | Examples |
|----------------------------------------------|--------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Statement syntax | The use of `IF EXISTS` | <li>`IF <TABLE> EXISTS, DROP TABLE`</li><li>`DROP <TABLE> IF EXISTS`</li> |
@@ -190,7 +191,7 @@ One of the most important choices you will make during the cookiecutter generati

Regardless of whether you decide to use the cookiecutter template or manually create the plugin, this section will go over each method that's required to be implemented. The following table provides a high-level overview of the classes, methods, and macros you may have to define for your data platform.

| File | Component | <div style={{width:'350px'}}>Purpose</div> |
|File | Component | <div style={{width:'200px'}}>Purpose</div> |
| ---- | ------------- | --------------------------------------------- |
| `./setup.py` | `setup()` function | adapter meta-data (package name, version, author, homepage, etc) |
| `myadapter/dbt/adapters/myadapter/__init__.py` | `AdapterPlugin` | bundle all the information below into a dbt plugin |
@@ -201,6 +202,7 @@ Regardless if you decide to use the cookiecutter template or manually create the
| `myadapter/dbt/adapters/myadapter/impl.py` | `MyAdapterAdapter` | for changing _how_ dbt performs operations like macros and other needed Python functionality |
| `myadapter/dbt/adapters/myadapter/column.py` | `MyAdapterColumn` | for defining database-specific columns, such as datatype mappings |


### Editing `setup.py`

Edit the file at `myadapter/setup.py` and fill in the missing information.
@@ -1352,3 +1354,5 @@ The approval workflow is as follows:
### Getting help for my trusted adapter

Ask your question in #adapter-ecosystem channel of the dbt community Slack.

</div>
4 changes: 4 additions & 0 deletions website/docs/guides/airflow-and-dbt-cloud.md
@@ -9,6 +9,8 @@ level: 'Intermediate'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

Many organizations already use [Airflow](https://airflow.apache.org/) to orchestrate their data workflows. dbt Cloud works great with Airflow, letting you execute your dbt code in dbt Cloud while keeping orchestration duties with Airflow. This ensures your project's metadata (important for tools like dbt Explorer) is available and up-to-date, while still enabling you to use Airflow for general tasks such as:
@@ -244,3 +246,5 @@ Yes, either through [Airflow's email/slack](https://www.astronomer.io/guides/err
### How should I plan my dbt Cloud + Airflow implementation?

Check out [this recording](https://www.youtube.com/watch?v=n7IIThR8hGk) of a dbt meetup for some tips.

</div>
5 changes: 4 additions & 1 deletion website/docs/guides/bigquery-qs.md
@@ -9,6 +9,8 @@ tags: ['BigQuery', 'dbt Cloud','Quickstart']
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

In this quickstart guide, you'll learn how to use dbt Cloud with BigQuery. It will show you how to:
@@ -293,6 +295,8 @@ Later, you can connect your business intelligence (BI) tools to these views and

This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies.

</div>

#### FAQs {#faq-2}

<FAQ path="Runs/run-one-model" />
@@ -304,4 +308,3 @@ Later, you can connect your business intelligence (BI) tools to these views and

<Snippet path="quickstarts/schedule-a-job" />


4 changes: 4 additions & 0 deletions website/docs/guides/building-packages.md
@@ -12,6 +12,8 @@ level: 'Advanced'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

Creating packages is an **advanced use of dbt**. If you're new to the tool, we recommend that you first use the product for your own analytics before attempting to create a package for others.
@@ -169,3 +171,5 @@ The release notes should contain an overview of the changes introduced in the ne
## Add the package to hub.getdbt.com

Our package registry, [hub.getdbt.com](https://hub.getdbt.com/), gets updated by the [hubcap script](https://github.com/dbt-labs/hubcap). To add your package to hub.getdbt.com, create a PR on the [hubcap repository](https://github.com/dbt-labs/hubcap) to include it in the `hub.json` file.

</div>
3 changes: 3 additions & 0 deletions website/docs/guides/codespace-qs.md
@@ -8,6 +8,8 @@ hide_table_of_contents: true
tags: ['dbt Core','Quickstart']
---

<div style={{maxWidth: '900px'}}>

## Introduction

In this quickstart guide, you’ll learn how to create a codespace and be able to execute the `dbt build` command from it in _less than 5 minutes_.
@@ -72,3 +74,4 @@ If you'd like to work with a larger selection of Jaffle Shop data, you can gener

As you increase the number of years, it takes exponentially more time to generate the data because the Jaffle Shop stores grow in size and number. For a good balance of data size and time to build, dbt Labs suggests a maximum of 6 years.

</div>
5 changes: 5 additions & 0 deletions website/docs/guides/core-to-cloud-1.md
@@ -10,6 +10,9 @@ tags: ['Migration','dbt Core','dbt Cloud']
level: 'Intermediate'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

Moving from dbt Core to dbt Cloud streamlines analytics engineering workflows by allowing teams to develop, test, deploy, and explore data products using a single, fully managed software service.
@@ -253,3 +256,5 @@ For next steps, we'll soon share other guides on how to manage your move and tip
- Work with the [dbt Labs’ Professional Services](https://www.getdbt.com/dbt-labs/services) team to support your data organization and migration.

</ConfettiTrigger>

</div>
4 changes: 4 additions & 0 deletions website/docs/guides/create-new-materializations.md
@@ -11,6 +11,8 @@ level: 'Advanced'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

The model <Term id="materialization">materializations</Term> you're familiar with, `table`, `view`, and `incremental` are implemented as macros in a package that's distributed along with dbt. You can check out the [source code for these materializations](https://github.com/dbt-labs/dbt-adapters/tree/60005a0a2bd33b61cb65a591bc1604b1b3fd25d5/dbt/include/global_project/macros/materializations). If you need to create your own materializations, reading these files is a good place to start. Continue reading below for a deep-dive into dbt materializations.
@@ -184,3 +186,5 @@ In each of the stated search spaces, a materialization can only be defined once.
Specific materializations can be selected by using the dot-notation when selecting a materialization from the context.

We recommend _not_ overriding materialization names directly, and instead using a prefix or suffix to denote that the materialization changes the behavior of the default implementation (e.g., my_project_incremental).

</div>
27 changes: 17 additions & 10 deletions website/docs/guides/custom-cicd-pipelines.md
@@ -11,6 +11,7 @@ tags: ['dbt Cloud', 'Orchestration', 'CI']
level: 'Intermediate'
recently_updated: true
---
<div style={{maxWidth: '900px'}}>

## Introduction

@@ -71,12 +72,13 @@ When running a CI/CD pipeline you’ll want to use a service token instead of an
- Click the *+Add* button under *Access,* and grant this token the *Job Admin* permission
- Click *Save* and you’ll see a grey box appear with your token. Copy that and save it somewhere safe (this is a password, and should be treated as such).

![View of the dbt Cloud page where service tokens are created](/img/guides/orchestration/custom-cicd-pipelines/dbt-service-token-page.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-service-token-page.png" title="View of the dbt Cloud page where service tokens are created" width="85%" />

Here’s a video showing the steps as well:

<WistiaVideo id="iub17te9ir" />


### 2. Put your dbt Cloud API key into your repo

This next part will happen in your code hosting platform. We need to save your API key from above into a repository secret so the job we create can access it. It is **not** recommended to ever save passwords or API keys in your code, so this step ensures that your key stays secure, but is still usable for your pipelines.
@@ -107,6 +109,7 @@ This next part will happen in your code hosting platform. We need to save your AP
Here’s a video showing these steps:

<WistiaVideo id="u7mo30puql" />

</TabItem>

<TabItem value="gitlab">
@@ -120,12 +123,12 @@ Here’s a video showing these steps:
- Make sure the check box next to *Protect variable* is unchecked, and the box next to *Mask variable* is selected (see below)
- “Protected” means that the variable is only available in pipelines that run on protected branches or protected tags - that won’t work for us because we want to run this pipeline on multiple branches. “Masked” means that it will be available to your pipeline runner, but will be masked in the logs.

![View of the GitLab window for entering DBT_API_KEY](/img/guides/orchestration/custom-cicd-pipelines/dbt-api-key-gitlab.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-api-key-gitlab.png" title="[View of the GitLab window for entering DBT_API_KEY" width="80%" />

Here’s a video showing these steps:

<WistiaVideo id="rgqs14f816" />


</TabItem>
<TabItem value="ado">

@@ -165,6 +168,8 @@ In Bitbucket:
Here’s a video showing these steps:
<WistiaVideo id="1fddpsqpfv" />



</TabItem>
</Tabs>

@@ -472,30 +477,30 @@ Additionally, you’ll see the job in the run history of dbt Cloud. It should be
}>
<TabItem value="github">

![dbt run on merge job in GitHub](/img/guides/orchestration/custom-cicd-pipelines/dbt-run-on-merge-github.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-run-on-merge-github.png" title="dbt run on merge job in GitHub" width="80%" />

![dbt Cloud job showing it was triggered by GitHub](/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-github-triggered.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-github-triggered.png" title="dbt Cloud job showing it was triggered by GitHub" width="80%" />

</TabItem>
<TabItem value="gitlab">

![dbt run on merge job in GitLab](/img/guides/orchestration/custom-cicd-pipelines/dbt-run-on-merge-gitlab.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-run-on-merge-gitlab.png" title="dbt run on merge job in GitLab" width="80%" />

![dbt Cloud job showing it was triggered by GitLab](/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-gitlab-triggered.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-gitlab-triggered.png" title="dbt Cloud job showing it was triggered by GitLab" width="80%" />

</TabItem>
<TabItem value="ado">

<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-run-on-merge-azure.png" width="85%" title="dbt run on merge job in ADO"/>

<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-azure-triggered.png" width="100%" title="ADO-triggered job in dbt Cloud"/>
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-azure-triggered.png" width="80" title="ADO-triggered job in dbt Cloud"/>

</TabItem>
<TabItem value="bitbucket">

![dbt run on merge job in Bitbucket](/img/guides/orchestration/custom-cicd-pipelines/dbt-run-on-merge-bitbucket.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-run-on-merge-bitbucket.png)" title="dbt run on merge job in Bitbucket" width="80%" />

![dbt Cloud job showing it was triggered by Bitbucket](/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-bitbucket-triggered.png)
<Lightbox src="/img/guides/orchestration/custom-cicd-pipelines/dbt-cloud-job-bitbucket-triggered.png" title="dbt Cloud job showing it was triggered by Bitbucket" width="80%" />

</TabItem>
</Tabs>
@@ -636,3 +641,5 @@ This macro goes into a dbt Cloud job that is run on a schedule. The command will
Running dbt Cloud jobs through a CI/CD pipeline is a form of job orchestration. If you also run jobs using dbt Cloud’s built-in scheduler, you now have two orchestration tools running jobs. The risk with this is that you could run into conflicts: for example, if you trigger a pipeline on certain actions while also running scheduled jobs in dbt Cloud, you would probably run into job clashes. The more tools you have, the more you have to make sure everything talks to each other.

That being said, if **the only reason you want to use pipelines is for adding a lint check or run on merge**, you might decide the pros outweigh the cons, and as such you want to go with a hybrid approach. Just keep in mind that if two processes try and run the same job at the same time, dbt Cloud will queue the jobs and run one after the other. It’s a balancing act but can be accomplished with diligence to ensure you’re orchestrating jobs in a manner that does not conflict.

</div>
5 changes: 5 additions & 0 deletions website/docs/guides/databricks-qs.md
@@ -7,6 +7,9 @@ hide_table_of_contents: true
recently_updated: true
tags: ['dbt Cloud', 'Quickstart','Databricks']
---

<div style={{maxWidth: '900px'}}>

## Introduction

In this quickstart guide, you'll learn how to use dbt Cloud with Databricks. It will show you how to:
@@ -371,6 +374,8 @@ Later, you can connect your business intelligence (BI) tools to these views and

This time, when you performed a `dbt run`, separate views/tables were created for `stg_customers`, `stg_orders` and `customers`. dbt inferred the order to run these models. Because `customers` depends on `stg_customers` and `stg_orders`, dbt builds `customers` last. You do not need to explicitly define these dependencies.

</div>

#### FAQs {#faq-2}

<FAQ path="Runs/run-one-model" />
4 changes: 4 additions & 0 deletions website/docs/guides/dbt-models-on-databricks.md
@@ -12,6 +12,8 @@ level: 'Intermediate'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

Building on the [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project) guide, we'd like to discuss performance optimization. In this follow-up post, we outline simple strategies to optimize for cost, performance, and simplicity when you architect data pipelines. We’ve encapsulated these strategies in this acronym-framework:
@@ -180,3 +182,5 @@ With the [dbt Cloud Admin API](/docs/dbt-cloud-apis/admin-cloud-api), you can
This builds on the content in [Set up your dbt project with Databricks](/guides/set-up-your-databricks-dbt-project).

We welcome you to try these strategies on our example open source TPC-H implementation and to provide us with thoughts/feedback as you start to incorporate these features into production. Looking forward to your feedback on [#db-databricks-and-spark](https://getdbt.slack.com/archives/CNGCW8HKL) Slack channel!

</div>
4 changes: 4 additions & 0 deletions website/docs/guides/dbt-python-snowpark.md
@@ -11,6 +11,8 @@ level: 'Intermediate'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

The focus of this workshop will be to demonstrate how we can use both *SQL and Python together* in the same workflow to run *both analytics and machine learning models* on dbt Cloud.
@@ -1923,3 +1925,5 @@ Now that we've completed testing and documenting our work, we're ready to deploy
Fantastic! You’ve finished the workshop! We hope you feel empowered in using both SQL and Python in your dbt Cloud workflows with Snowflake. Having a reliable pipeline to surface both analytics and machine learning is crucial to creating tangible business value from your data.
For more help and information, join our [dbt community Slack](https://www.getdbt.com/community/), which includes more than 50,000 data practitioners today. We have a dedicated Slack channel, #db-snowflake, for Snowflake-related content. Happy dbt'ing!

</div>
4 changes: 4 additions & 0 deletions website/docs/guides/debug-errors.md
@@ -11,6 +11,8 @@ level: 'Beginner'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## General process of debugging

Learning how to debug is a skill, and one that will make you great at your role!
@@ -387,3 +389,5 @@ We’ve all been there. dbt uses the last-saved version of a file when you execu
_(More likely for dbt Core users)_
If you just opened a SQL file in the `target/` directory to help debug an issue, it's not uncommon to accidentally edit that file! To avoid this, try changing your code editor settings to grey out any files in the `target/` directory — the visual cue will help avoid the issue.
</div>
4 changes: 4 additions & 0 deletions website/docs/guides/debug-schema-names.md
@@ -12,6 +12,8 @@ level: 'Advanced'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

If a model uses the [`schema` config](/reference/resource-properties/schema) but builds under an unexpected schema, here are some steps for debugging the issue. The full explanation of custom schemas can be found [here](/docs/build/custom-schemas).
@@ -100,3 +102,5 @@ Now that you understand how a model's schema is being generated, you can adjust
- You can also adjust your `target` details (for example, changing the name of a target)

If you change the logic in `generate_schema_name`, it's important that you consider whether two users will end up writing to the same schema when developing dbt models. This consideration is the reason why the default implementation of the macro concatenates your target schema and custom schema together — we promise we were trying to be helpful by implementing this behavior, but acknowledge that the resulting schema name is unintuitive.

</div>
5 changes: 5 additions & 0 deletions website/docs/guides/dremio-lakehouse.md
@@ -12,6 +12,8 @@ tags: ['Dremio', 'dbt Core']
level: 'Intermediate'
recently_updated: true
---

<div style={{maxWidth: '900px'}}>

## Introduction

This guide will demonstrate how to build a data lakehouse with dbt Core 1.5 or newer and Dremio Cloud. You can simplify and optimize your data infrastructure with dbt's robust transformation framework and Dremio’s open and easy data lakehouse. The integrated solution empowers companies to establish a strong data and analytics foundation, fostering self-service analytics and enhancing business insights while simplifying operations by eliminating the necessity to write complex Extract, Transform, and Load (ETL) pipelines.
@@ -194,3 +197,5 @@ GROUP BY vendor_id
<Lightbox src="/img/guides/dremio/dremio-test-results.png" width="70%" title="Sample output from SQL query"/>

This completes the integration setup, and the data is ready for business consumption.

</div>
