Merge branch 'current' into mirnawong1-patch-26
mirnawong1 authored Aug 28, 2024
2 parents 68418da + 5e2b1ac commit 186ea58
Showing 7 changed files with 28 additions and 4 deletions.
6 changes: 5 additions & 1 deletion website/docs/docs/build/dimensions.md
@@ -170,6 +170,10 @@ Our supported granularities are:
* second
* minute
* hour
* day
* week
* quarter
* year

Aggregation between metrics with different granularities is possible, with the Semantic Layer returning results at the coarsest granularity by default. For example, when querying two metrics with daily and monthly granularity, the resulting aggregation will be at the monthly level.
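To make that behavior concrete, here is an abridged, hypothetical sketch (illustrative model and dimension names; required keys such as `model` and `measures` omitted) of two semantic models whose time dimensions use different granularities. Metrics queried across both would return results at the coarser, monthly grain:

```yaml
semantic_models:
  - name: orders            # illustrative daily-grain model
    dimensions:
      - name: order_date
        type: time
        type_params:
          time_granularity: day    # finest grain: day
  - name: invoices          # illustrative monthly-grain model
    dimensions:
      - name: billing_month
        type: time
        type_params:
          time_granularity: month  # finest grain: month
```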

@@ -240,7 +244,7 @@ Here’s an example configuration:
  - name: tier_start # The name of the dimension.
    type: time # The type of dimension (such as time)
    label: "Start date of tier" # A readable label for the dimension
-   expr: start_date # Expression or column name the the dimension represents
+   expr: start_date # Expression or column name the dimension represents
    type_params: # Additional parameters for the dimension type
      time_granularity: day # Specifies the granularity of the time dimension (such as day)
      validity_params: # Defines the validity window
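Since the diff truncates the example at `validity_params`, a fuller sketch of a validity window — using the documented `is_start`/`is_end` parameters; the dimension names remain illustrative — might look like:

```yaml
dimensions:
  - name: tier_start
    type: time
    label: "Start date of tier"
    expr: start_date
    type_params:
      time_granularity: day
      validity_params:
        is_start: true   # this dimension opens the validity window
  - name: tier_end
    type: time
    label: "End date of tier"
    expr: end_date
    type_params:
      time_granularity: day
      validity_params:
        is_end: true     # this dimension closes the validity window
```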
6 changes: 4 additions & 2 deletions website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -208,6 +208,8 @@ Spark can be customized using [Application Properties](https://spark.apache.org/

## Caveats

When facing difficulties, run `poetry run dbt debug --log-level=debug`. The logs are saved at `logs/dbt.log`.

### Usage with EMR
To connect to Apache Spark running on an Amazon EMR cluster, you will need to run `sudo /usr/lib/spark/sbin/start-thriftserver.sh` on the master node of the cluster to start the Thrift server (see [the docs](https://aws.amazon.com/premiumsupport/knowledge-center/jdbc-connection-emr/) for more information). You will also need to connect to port 10001, which will connect to the Spark backend Thrift server; port 10000 will instead connect to a Hive backend, which will not work correctly with dbt.
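A hedged sketch of a `profiles.yml` target for this setup (the profile name, host, and schema values are placeholders):

```yaml
spark_emr:
  target: dev
  outputs:
    dev:
      type: spark
      method: thrift
      host: ec2-xx-xx-xx-xx.compute-1.amazonaws.com  # EMR master node DNS
      port: 10001        # Spark Thrift server; 10000 would hit the Hive backend
      schema: analytics
      connect_retries: 3
      connect_timeout: 30
```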

@@ -223,6 +225,6 @@ Delta-only features:

### Default namespace with Thrift connection method

- If your Spark cluster doesn't have a default namespace, metadata queries that run before any dbt workflow will fail, causing the entire workflow to fail, even if your configurations are correct. The metadata queries fail there's no default namespace in which to run it.
+ To run metadata queries in dbt, you need to have a namespace named `default` in Spark when connecting with Thrift. You can check available namespaces by using Spark's `pyspark` and running `spark.sql("SHOW NAMESPACES").show()`. If the default namespace doesn't exist, create it by running `spark.sql("CREATE NAMESPACE default").show()`.

- To debug, review the debug-level logs to confirm the query dbt is running when it encounters the error: `dbt run --debug` or `logs/dbt.log`.
+ If there's a network connection issue, your logs will display an error like `Could not connect to any of [('127.0.0.1', 10000)]` (or something similar).
3 changes: 3 additions & 0 deletions website/docs/reference/model-configs.md
@@ -35,6 +35,7 @@ models:
  [<resource-path>](/reference/resource-configs/resource-path):
    [+](/reference/resource-configs/plus-prefix)[materialized](/reference/resource-configs/materialized): <materialization_name>
    [+](/reference/resource-configs/plus-prefix)[sql_header](/reference/resource-configs/sql_header): <string>
    [+](/reference/resource-configs/plus-prefix)[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail # only for materialized views on supported adapters

```

@@ -55,6 +56,7 @@ models:
    config:
      [materialized](/reference/resource-configs/materialized): <materialization_name>
      [sql_header](/reference/resource-configs/sql_header): <string>
      [on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail # only for materialized views on supported adapters

```

@@ -72,6 +74,7 @@ models:
{{ config(
    [materialized](/reference/resource-configs/materialized)="<materialization_name>",
    [sql_header](/reference/resource-configs/sql_header)="<string>",
    [on_configuration_change](/reference/resource-configs/on_configuration_change)="apply" | "continue" | "fail" # only for materialized views on supported adapters
) }}
```
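Putting the new setting together in a concrete (hypothetical) project, a `dbt_project.yml` entry for materialized-view models might look like:

```yaml
models:
  my_project:              # hypothetical project name
    marts:
      +materialized: materialized_view
      +on_configuration_change: apply   # apply detected config changes in place
```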
1 change: 1 addition & 0 deletions website/docs/reference/node-selection/defer.md
@@ -223,4 +223,5 @@ dbt will check to see if `dev_alice.model_a` exists. If it doesn't exist, dbt wi
## Related docs

- [Using defer in dbt Cloud](/docs/cloud/about-cloud-develop-defer)
- [on_configuration_change](/reference/resource-configs/on_configuration_change)

3 changes: 3 additions & 0 deletions website/docs/reference/resource-configs/full_refresh.md
@@ -85,3 +85,6 @@ This logic is encoded in the [`should_full_refresh()`](https://github.com/dbt-la

## Recommendation
Set `full_refresh: false` for models of especially large datasets, which you would _never_ want dbt to fully drop and recreate.
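For example (hypothetical project and folder names), in `dbt_project.yml`:

```yaml
models:
  my_project:
    events:                  # a very large incremental dataset
      +full_refresh: false   # ignore the --full-refresh flag for these models
```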

## Reference docs
* [on_configuration_change](/reference/resource-configs/on_configuration_change)
11 changes: 11 additions & 0 deletions website/docs/reference/resource-configs/materialized.md
@@ -81,3 +81,14 @@ select ...

You can also configure [custom materializations](/guides/create-new-materializations?step=1) in dbt. Custom materializations are a powerful way to extend dbt's functionality to meet your specific needs.

## Creation Precedence
<!-- This text is copied from /reference/resource-configs/on_configuration_change.md -->
Materializations are implemented following this "drop through" life cycle:

1. If a model does not exist with the provided path, create the new model.
2. If a model exists, but has a different type, drop the existing model and create the new model.
3. If [`--full-refresh`](/reference/resource-configs/full_refresh) is supplied, replace the existing model regardless of configuration changes and the [`on_configuration_change`](/reference/resource-configs/on_configuration_change) setting.
4. If there are no configuration changes, perform the default action for that type (e.g. apply refresh for a materialized view).
5. Determine whether to apply the configuration changes according to the `on_configuration_change` setting.
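As an illustration of step 5, a hedged properties-file sketch (hypothetical model name) that halts the run instead of rebuilding when a materialized view's configuration drifts:

```yaml
models:
  - name: orders_mv               # hypothetical materialized view model
    config:
      materialized: materialized_view
      on_configuration_change: fail   # error instead of applying detected changes
```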


2 changes: 1 addition & 1 deletion website/docs/reference/resource-configs/on_configuration_change.md
@@ -81,6 +81,6 @@ models:
Materializations are implemented following this "drop through" life cycle:
1. If a model does not exist with the provided path, create the new model.
2. If a model exists, but has a different type, drop the existing model and create the new model.
- 3. If `--full-refresh` is supplied, replace the existing model regardless of configuration changes and the `on_configuration_change` setting.
+ 3. If [`--full-refresh`](/reference/resource-configs/full_refresh) is supplied, replace the existing model regardless of configuration changes and the `on_configuration_change` setting.
4. If there are no configuration changes, perform the default action for that type (e.g. apply refresh for a materialized view).
5. Determine whether to apply the configuration changes according to the `on_configuration_change` setting.
