From 3101ab8eaa44f3a20901d3da1274a4dbe64464fe Mon Sep 17 00:00:00 2001
From: Jonas Thelemann
Date: Wed, 21 Aug 2024 17:18:05 +0200
Subject: [PATCH 1/4] docs(spark-setup): improve namespace caveat description

A more detailed description would've saved me quite a bit of time during dbt setup.
---
 website/docs/docs/core/connect-data-platform/spark-setup.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 3b1429c246b..9f24f7a5a7c 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -208,6 +208,8 @@ Spark can be customized using [Application Properties](https://spark.apache.org/
 
 ## Caveats
 
+When facing difficulties you can run `poetry run dbt debug --log-level=debug`. The logs are persisted at `logs/dbt.log`.
+
 ### Usage with EMR
 To connect to Apache Spark running on an Amazon EMR cluster, you will need to run `sudo /usr/lib/spark/sbin/start-thriftserver.sh` on the master node of the cluster to start the Thrift server (see [the docs](https://aws.amazon.com/premiumsupport/knowledge-center/jdbc-connection-emr/) for more information). You will also need to connect to port 10001, which will connect to the Spark backend Thrift server; port 10000 will instead connect to a Hive backend, which will not work correctly with dbt.
@@ -223,6 +225,6 @@ Delta-only features:
 
 ### Default namespace with Thrift connection method
 
-If your Spark cluster doesn't have a default namespace, metadata queries that run before any dbt workflow will fail, causing the entire workflow to fail, even if your configurations are correct. The metadata queries fail there's no default namespace in which to run it.
+A namespace named `default` is required to exist in Spark when connecting via Thrift for dbt to run metadata queries in. You can use Spark's `pyspark` and run `spark.sql("SHOW NAMESPACES").show()` to see the available namespaces and create the required namespace by running `spark.sql("CREATE NAMESPACE default").show()`.
 
-To debug, review the debug-level logs to confirm the query dbt is running when it encounters the error: `dbt run --debug` or `logs/dbt.log`.
+If there's a network connection issue instead, your logs will contain `Could not connect to any of [('127.0.0.1', 10000)]` (or similar).

From fee20779789ccecf0a1f488a302a1173a0efd8fe Mon Sep 17 00:00:00 2001
From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com>
Date: Tue, 27 Aug 2024 15:43:51 +0100
Subject: [PATCH 2/4] Update website/docs/docs/core/connect-data-platform/spark-setup.md
---
 website/docs/docs/core/connect-data-platform/spark-setup.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 9f24f7a5a7c..72f7946d72e 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -225,6 +225,6 @@ Delta-only features:
 
 ### Default namespace with Thrift connection method
 
-A namespace named `default` is required to exist in Spark when connecting via Thrift for dbt to run metadata queries in. You can use Spark's `pyspark` and run `spark.sql("SHOW NAMESPACES").show()` to see the available namespaces and create the required namespace by running `spark.sql("CREATE NAMESPACE default").show()`.
+To run metadata queries in dbt, you need to have a namespace named `default` in Spark when connecting with Thrift. You can check available namespaces by using Spark's `pyspark` and running `spark.sql("SHOW NAMESPACES").show()`. If the default namespace doesn't exist, create it by running `spark.sql("CREATE NAMESPACE default").show()`.
 
 If there's a network connection issue instead, your logs will contain `Could not connect to any of [('127.0.0.1', 10000)]` (or similar).

From 589d9531ac9bf2d4f20d1452dfac162d41138d2d Mon Sep 17 00:00:00 2001
From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com>
Date: Tue, 27 Aug 2024 15:44:24 +0100
Subject: [PATCH 3/4] Update website/docs/docs/core/connect-data-platform/spark-setup.md
---
 website/docs/docs/core/connect-data-platform/spark-setup.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 72f7946d72e..03ce81e89af 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -227,4 +227,4 @@ Delta-only features:
 
 To run metadata queries in dbt, you need to have a namespace named `default` in Spark when connecting with Thrift. You can check available namespaces by using Spark's `pyspark` and running `spark.sql("SHOW NAMESPACES").show()`. If the default namespace doesn't exist, create it by running `spark.sql("CREATE NAMESPACE default").show()`.
 
-If there's a network connection issue instead, your logs will contain `Could not connect to any of [('127.0.0.1', 10000)]` (or similar).
+If there's a network connection issue, your logs will display an error like `Could not connect to any of [('127.0.0.1', 10000)]`.

From a421a2433339ea250711e7140b762e7034479ba6 Mon Sep 17 00:00:00 2001
From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com>
Date: Tue, 27 Aug 2024 15:45:02 +0100
Subject: [PATCH 4/4] Update website/docs/docs/core/connect-data-platform/spark-setup.md
---
 website/docs/docs/core/connect-data-platform/spark-setup.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/docs/core/connect-data-platform/spark-setup.md b/website/docs/docs/core/connect-data-platform/spark-setup.md
index 03ce81e89af..f02c6eeaf1b 100644
--- a/website/docs/docs/core/connect-data-platform/spark-setup.md
+++ b/website/docs/docs/core/connect-data-platform/spark-setup.md
@@ -208,7 +208,7 @@ Spark can be customized using [Application Properties](https://spark.apache.org/
 
 ## Caveats
 
-When facing difficulties you can run `poetry run dbt debug --log-level=debug`. The logs are persisted at `logs/dbt.log`.
+When facing difficulties, run `poetry run dbt debug --log-level=debug`. The logs are saved at `logs/dbt.log`.
 
 ### Usage with EMR
 To connect to Apache Spark running on an Amazon EMR cluster, you will need to run `sudo /usr/lib/spark/sbin/start-thriftserver.sh` on the master node of the cluster to start the Thrift server (see [the docs](https://aws.amazon.com/premiumsupport/knowledge-center/jdbc-connection-emr/) for more information). You will also need to connect to port 10001, which will connect to the Spark backend Thrift server; port 10000 will instead connect to a Hive backend, which will not work correctly with dbt.
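
Note on the `default` namespace caveat in [PATCH 2/4]: below is a minimal `pyspark` sketch of the check-and-create flow the patch describes. The standalone session setup and the app name `dbt-namespace-check` are assumptions for illustration; inside the `pyspark` shell, a `spark` session already exists and the first two lines are unnecessary.

```python
# Minimal sketch of the namespace check described in the patch above.
# Assumes you run this as a standalone script; inside the `pyspark` shell,
# `spark` is already provided. "dbt-namespace-check" is a placeholder name.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dbt-namespace-check").getOrCreate()

# List the namespaces the current catalog exposes.
spark.sql("SHOW NAMESPACES").show()

# Create the `default` namespace that dbt's metadata queries expect.
# IF NOT EXISTS makes the statement safe to re-run.
spark.sql("CREATE NAMESPACE IF NOT EXISTS default")

spark.stop()
```

Using `IF NOT EXISTS` keeps the script idempotent, so it can run unconditionally before a dbt invocation.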
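Similarly, for the EMR caveat touched by [PATCH 1/4] and [PATCH 4/4]: a sketch of what a `profiles.yml` target for this setup might look like. The profile name, host, and thread count are placeholders, not values from the patches; only the `method: thrift` connection and port 10001 come from the documented caveat.

```yaml
# Hypothetical profiles.yml target for the EMR caveat above. Port 10001 is
# the Spark Thrift server; port 10000 would hit the Hive backend, which does
# not work correctly with dbt.
my_spark_project:
  target: dev
  outputs:
    dev:
      type: spark
      method: thrift
      host: emr-master.example.com  # placeholder: your EMR master node
      port: 10001
      schema: default               # the namespace the patches require to exist
      threads: 4
```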