Releases: dbt-labs/dbt-spark
dbt-spark 1.3.0b1
dbt-spark 1.2.0
dbt-spark 1.2.0 (July 26, 2022)
Fixes
- Pin `pyodbc` to version 4.0.32 to prevent overwriting `libodbc.so` and `libltdl.so` on Linux (#397, #398)
- Incremental materialization no longer drops the table first on full refresh for the Delta Lake format, since it already runs `create or replace table` (#286, #287)
- Apache Spark version upgraded to 3.1.1 (#348, #349)
- `adapter.get_columns_in_relation` (method) and `get_columns_in_relation` (macro) now return identical responses. The previous behavior of `get_columns_in_relation` (macro) is now represented by a new macro, `get_columns_in_relation_raw` (#354, #355)
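The Delta full-refresh fix above can be sketched in Python. This is a minimal illustration, not the actual dbt-spark materialization logic; the helper name and SQL strings are hypothetical:

```python
def full_refresh_sql(relation: str, select_sql: str, file_format: str) -> list:
    """Illustrative sketch: statements a full refresh would run."""
    if file_format == "delta":
        # Delta supports an atomic "create or replace table",
        # so a preceding "drop table" is redundant.
        return [f"create or replace table {relation} as {select_sql}"]
    # Other formats drop first, then recreate.
    return [
        f"drop table if exists {relation}",
        f"create table {relation} as {select_sql}",
    ]

for stmt in full_refresh_sql("db.events", "select * from db.events_staging", "delta"):
    print(stmt)  # create or replace table db.events as select * from db.events_staging
```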
Under the hood
- Update `SparkColumn.numeric_type` to return `decimal` instead of `numeric`, since SparkSQL exclusively supports the former (#380)
- Initialize lift + shift for cross-db macros (#359)
- Add invocation env to user agent string (#367)
- Use dispatch pattern for get_columns_in_relation_raw macro (#365)
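The `numeric_type` change can be sketched as follows. The class and fields below are a simplified stand-in for dbt-spark's actual `SparkColumn`, shown only to illustrate rendering `decimal(p,s)` rather than `numeric(p,s)`:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SparkColumn:
    column: str
    dtype: str
    numeric_precision: Optional[int] = None
    numeric_scale: Optional[int] = None

    @property
    def numeric_type(self) -> str:
        # Render as decimal(p,s); "numeric" is not a valid SparkSQL type.
        if self.numeric_precision is not None and self.numeric_scale is not None:
            return f"decimal({self.numeric_precision},{self.numeric_scale})"
        return self.dtype

col = SparkColumn("amount", "decimal", numeric_precision=38, numeric_scale=2)
print(col.numeric_type)  # decimal(38,2)
```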
Contributors
- @barberscott (#398)
- @grindheim (#287)
- @nssalian (#349)
- @ueshin (#365)
- @dbeatty10 (#359)
dbt-spark 1.2.0rc1
dbt-spark 1.2.0rc1 (July 12, 2022)
Fixes
- Incremental materialization no longer drops the table first on full refresh for the Delta Lake format, since it already runs `create or replace table` (#286, #287)
- Apache Spark version upgraded to 3.1.1 (#348, #349)
Under the hood
- Update `SparkColumn.numeric_type` to return `decimal` instead of `numeric`, since SparkSQL exclusively supports the former (#380)
Contributors
- @grindheim (#287)
- @nssalian (#349)
dbt-spark 1.2.0b1
dbt-spark 1.2.0b1 (June 24, 2022)
Fixes
- `adapter.get_columns_in_relation` (method) and `get_columns_in_relation` (macro) now return identical responses. The previous behavior of `get_columns_in_relation` (macro) is now represented by a new macro, `get_columns_in_relation_raw` (#354, #355)
Under the hood
- Add `DBT_INVOCATION_ENV` environment variable to ODBC user agent string (#366)
- Initialize lift + shift for cross-db macros (#359)
- Add invocation env to user agent string (#367)
- Use dispatch pattern for get_columns_in_relation_raw macro (#365)
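The invocation-env change can be sketched as below. The function and agent-string format are illustrative assumptions, not dbt-spark's actual connection code; only the `DBT_INVOCATION_ENV` variable name comes from the release notes:

```python
import os

def build_user_agent(version: str = "1.2.0b1") -> str:
    # Base agent string; the "dbt-spark/<version>" prefix is illustrative.
    agent = f"dbt-spark/{version}"
    # Append the invocation environment (e.g. "ci", "cloud") when set,
    # so the ODBC user agent identifies where dbt was invoked from.
    invocation_env = os.environ.get("DBT_INVOCATION_ENV")
    if invocation_env:
        agent += f" ({invocation_env})"
    return agent

os.environ["DBT_INVOCATION_ENV"] = "ci"
print(build_user_agent())  # dbt-spark/1.2.0b1 (ci)
```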
Contributors
- @ueshin (#365)
- @dbeatty10 (#359)
dbt-spark 1.1.0
dbt-spark 1.1.0 (April 28, 2022)
Features
- Add session connection method (#272, #279)
- Add a new integration test to check the new ability for `unique_key` to be a list (#282, #291)
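What "`unique_key` as a list" implies for an incremental merge can be sketched as a predicate builder. This helper is hypothetical and only illustrates the idea, not dbt's actual macro logic:

```python
def unique_key_predicate(unique_key) -> str:
    """Build a merge ON-clause from one key column or a list of them."""
    # Accept either a single column name or a list of column names.
    keys = [unique_key] if isinstance(unique_key, str) else list(unique_key)
    return " and ".join(f"target.{k} = source.{k}" for k in keys)

print(unique_key_predicate("order_id"))
# target.order_id = source.order_id
print(unique_key_predicate(["order_id", "order_date"]))
# target.order_id = source.order_id and target.order_date = source.order_date
```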
Under the hood
- get_response -> AdapterResponse (#265)
- Adding stale Actions workflow (#275)
- Update plugin author name (`fishtown-analytics` → `dbt-labs`) in ODBC user agent (#288)
- Configure insert_overwrite models to use parquet (#301)
- Use dbt.tests.adapter.basic in test suite (#298, #299)
- Make internal macros use macro dispatch to be overridable in child adapters (#319, #320)
- Override adapter method `run_sql_for_tests` (#323, #324)
- When a table or view doesn't exist, `adapter.get_columns_in_relation` will return an empty list instead of failing (#328)
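The missing-relation behavior change can be sketched as a catch-and-return-empty pattern. The exception type, catalog, and function names below are illustrative stand-ins, not the real adapter API:

```python
class RelationNotFoundError(Exception):
    """Stand-in for the 'table or view not found' error from the warehouse."""

# Toy catalog standing in for the warehouse metadata.
_CATALOG = {"analytics.orders": ["order_id", "amount"]}

def describe_table(name: str) -> list:
    if name not in _CATALOG:
        raise RelationNotFoundError(name)
    return _CATALOG[name]

def get_columns_in_relation(name: str) -> list:
    try:
        return describe_table(name)
    except RelationNotFoundError:
        # Missing table/view: return an empty list instead of failing.
        return []

print(get_columns_in_relation("analytics.orders"))   # ['order_id', 'amount']
print(get_columns_in_relation("analytics.missing"))  # []
```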
Contributors
- @amychen1776 (#288)
- @ueshin (#285, #320)
- @JCZuurmond (#279)
dbt-spark 1.0.1
dbt-spark 1.1.0rc1
dbt-spark 1.1.0rc1 (April 13 2022)
Under the hood
- Use dbt.tests.adapter.basic in test suite (#298, #299)
- Make internal macros use macro dispatch to be overridable in child adapters (#319, #320)
- Override adapter method `run_sql_for_tests` (#323, #324)
- When a table or view doesn't exist, `adapter.get_columns_in_relation` will return an empty list instead of failing (#328)
Contributors
- @JCZuurmond (#279)
- @ueshin (#320)
dbt-spark 1.0.1rc1
dbt-spark 1.1.0b1
dbt-spark 1.1.0b1 (March 23, 2022)
Features
- Add a new integration test to check the new ability for `unique_key` to be a list (#282, #291)
Under the hood
- get_response -> AdapterResponse (#265)
- Adding stale Actions workflow (#275)
- Update plugin author name (`fishtown-analytics` → `dbt-labs`) in ODBC user agent (#288)
- Configure insert_overwrite models to use parquet (#301)
Contributors
- @amychen1776 (#288)
- @ueshin (#285)
dbt-spark v1.0.0
Tracking dbt-core v1.0.0.
```shell
$ pip install dbt-spark==1.0.0
# or
$ pip install "dbt-spark[ODBC]==1.0.0"
# or
$ pip install "dbt-spark[PyHive]==1.0.0"
```