
Releases: databricks/dbt-databricks

Version 1.2.4

01 Nov 20:06

Under the hood

  • Show and log a warning when the schema contains '.' (#221)

Version 1.1.6

01 Nov 20:02

Under the hood

  • Show and log a warning when the schema contains '.' (#221)

Version 1.3.0

14 Oct 18:54

Features

  • Support Python models through the run command API; currently supported materializations are table and incremental. (dbt-labs/dbt-spark#377, #126)
  • Enable Pandas and Pandas-on-Spark DataFrames for dbt python models (dbt-labs/dbt-spark#469, #181)
  • Support job cluster in notebook submission method (dbt-labs/dbt-spark#467, #194); see also the sketch after this list.
    • With the all_purpose_cluster submission method, an http_path config can be specified in the Python model config to switch the cluster where the Python model runs:
      def model(dbt, _):
          dbt.config(
              materialized='table',
              http_path='...'  # HTTP path of the all-purpose cluster this model should run on
          )
          ...
  • Use built-in timestampadd and timestampdiff functions for the dateadd/datediff macros when available (#185)
  • Add tests for various Python models (#189)
  • Add a test for type_boolean in Databricks (dbt-labs/dbt-spark#471, #188)
  • Add a macro to support COPY INTO (#190)
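
As a rough illustration of the job cluster support above, the sketch below configures a Python model to run on an ephemeral job cluster instead of the all-purpose cluster from the profile. The submission_method, create_notebook, and job_cluster_config keys, and the cluster settings inside job_cluster_config, are assumptions drawn from later dbt-databricks documentation and the Databricks clusters API; the exact config surface in 1.3.0 may differ.

      def model(dbt, session):
          dbt.config(
              materialized='incremental',
              submission_method='job_cluster',   # assumed key: run on an ephemeral job cluster
              create_notebook=True,              # assumed key: submit the model as a notebook
              job_cluster_config={               # assumed key: cluster spec passed to the Databricks clusters API
                  'spark_version': '11.3.x-scala2.12',
                  'node_type_id': 'i3.xlarge',
                  'num_workers': 2,
              },
          )
          return dbt.ref('upstream_model')       # hypothetical upstream model name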

Under the hood

  • Apply "Initial refactoring of incremental materialization" (#148)
    • dbt-databricks now uses adapter.get_incremental_strategy_macro instead of the dbt_spark_get_incremental_sql macro to dispatch the incremental strategy macro. An overridden dbt_spark_get_incremental_sql macro will no longer work.
  • Better interface for Python submission (dbt-labs/dbt-spark#452, #178)

Version 1.2.3

26 Sep 18:43

Fixes

  • Fix cancellation (#173)
  • http_headers should be a dict in the profile (#174)

Version 1.1.5

26 Sep 18:41

Fixes

  • Fix cancellation (#173)
  • http_headers should be a dict in the profile (#174)

Version 1.2.2

08 Sep 18:38

Fixes

  • Fix data duplication when reloading seeds that use an external table (#114, #149)

Under the hood

  • Explicitly close cursors (#163)
  • Upgrade databricks-sql-connector to 2.0.5 (#166)
  • Embed dbt-databricks and databricks-sql-connector versions to SQL comments (#167)

Version 1.1.4

08 Sep 18:40

Fixes

  • Fix data duplication when reloading seeds that use an external table (#114, #149)

Under the hood

  • Explicitly close cursors (#163)
  • Upgrade databricks-sql-connector to 2.0.5 (#166)
  • Embed dbt-databricks and databricks-sql-connector versions to SQL comments (#167)

Version 1.2.1

24 Aug 23:28

Features

  • Support Python 3.10 (#158)

Version 1.1.3

24 Aug 23:19

Features

  • Support Python 3.10 (#158)
  • Add connection_parameters for databricks-sql-connector connection parameters (#135)
    • This can be used to customize the connection by setting additional parameters; see the sketch after this list.
    • The full list of parameters is documented in the Databricks SQL Connector for Python.
    • Currently, the following parameters are reserved for dbt-databricks; use the normal credential settings instead:
      • server_hostname
      • http_path
      • access_token
      • session_configuration
      • catalog
      • schema
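
To make the reserved list above concrete, here is a minimal sketch of how dbt-databricks maps the normal credential settings onto the connector's own connect() arguments and passes anything in connection_parameters through as extra keyword arguments; the pass-through shape and the _user_agent_entry example parameter are assumptions for illustration, not the adapter's actual code.

      from databricks import sql

      # Hypothetical values for illustration; dbt-databricks fills the reserved
      # parameters from the profile's credential fields.
      connection_parameters = {'_user_agent_entry': 'my-team'}  # example extra parameter (assumed)

      connection = sql.connect(
          server_hostname='dbc-a1b2c3d4-e5f6.cloud.databricks.com',  # reserved: set from host
          http_path='/sql/1.0/endpoints/1234567890abcdef',           # reserved: set from http_path
          access_token='dapiXXXXXXXXXXXXXXXX',                       # reserved: set from token
          **connection_parameters,                                   # everything else is passed through
      )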

Version 1.1.2

18 Aug 18:53

Under the hood

  • Set an upper bound for databricks-sql-connector when using Python 3.10 (#154)
    • Note that databricks-sql-connector does not officially support Python 3.10 yet.