Release 1.1.1
What's Changed
This release mainly includes some features and improvements for loading data to StarRocks.
NOTICE
Take note of some changes when you upgrade the Spark connector to this version. For details, see Upgrade Spark connector.
Features
- The sink supports retrying. #61
- Support loading data into BITMAP and HLL columns. #67
- Support loading ARRAY-type data. #74
- Support flushing according to the number of buffered rows. #78
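The write-side features above are controlled through connector options. The sketch below shows what such a configuration might look like; `starrocks.fe.http.url` appears elsewhere in these notes, but the retry and buffered-row option names are assumptions based on the feature descriptions, not confirmed names — verify them against the connector documentation.

```
# Illustrative Spark connector write options (a sketch; option names
# other than starrocks.fe.http.url are assumptions)
starrocks.fe.http.url=fe_host:8030
starrocks.table.identifier=test_db.test_table
starrocks.user=root
starrocks.password=
# Retry failed load attempts (feature #61); option name assumed
starrocks.write.retry.count=3
# Flush once this many rows are buffered (feature #78); option name assumed
starrocks.write.buffer.rows=500000
```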
Improvements
- Remove unused dependencies to make the Spark connector JAR file lightweight. #55 #57
- Replace fastjson with jackson. #58
- Add the missing Apache license header. #60
- Do not package the MySQL JDBC driver in the Spark connector JAR file. #63
- Support configuring the timezone parameter and be compatible with the Spark Java 8 datetime API. #64
- Optimize row-string converter to reduce CPU costs. #68
- The starrocks.fe.http.url parameter supports URLs with an HTTP scheme. #71
- The interface BatchWrite#useCommitCoordinator is implemented to run on Databricks 13.1. #79
- Add the hint of checking the privileges and parameters in the error log. #81
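For example, after #71 the FE HTTP endpoint should be accepted both with and without a scheme; the host and port below are placeholders:

```
# Both forms should now work (per #71); host/port are placeholders
starrocks.fe.http.url=127.0.0.1:8030
starrocks.fe.http.url=http://127.0.0.1:8030
```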
Bug fixes
- Parse escape characters in the CSV-related parameters column_seperator and row_delimiter. #85
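To illustrate what parsing escape characters in a delimiter means, the minimal Python sketch below interprets backslash escapes such as "\t" or "\x01" (a common Hive-style column separator). This is an illustrative sketch of the concept, not the connector's actual implementation.

```python
def parse_delimiter(raw: str) -> str:
    """Interpret backslash escapes (e.g. "\\t", "\\x01") in a delimiter
    string. Illustrative sketch only -- not the connector's actual code."""
    return raw.encode("utf-8").decode("unicode_escape")

# The four characters \ x 0 1 become the single control character 0x01.
print(len(parse_delimiter("\\x01")))  # 1
```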
Doc
- Refactor the docs. #66
- Add examples of loading data into BITMAP and HLL columns. #70
- Add examples of Spark applications written in Python. #72
- Add examples of loading ARRAY-type data. #75
- Add examples for performing partial updates and conditional updates on Primary Key tables. #80
Contributors
scutzou, hellolilyliuyi, banmoy