Releases: lakesoul-io/LakeSoul

LakeSoul Release v2.3.0

13 Jul 09:44

v2.3.0 Release Notes

This is the first release after LakeSoul was donated to Linux Foundation AI & Data. It contains the following major new features:

  1. Flink Connector for the Flink SQL/Table API to read and write LakeSoul in both batch and streaming mode, with support for Flink changelog stream semantics and row-level upsert and delete. See the Flink Connector docs; a minimal sketch follows this list.
  2. Flink CDC ingestion refactored to infer new tables and schema changes automatically from the messages. This makes it simpler to develop CDC stream ingestion jobs for any kind of database or message queue.
  3. Global automatic compaction service. See the Auto Compaction Service docs.
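
As a minimal Scala sketch of the new connector via the Flink Table API: the catalog type name 'lakesoul' and the table names below are assumptions for illustration, and the Flink Connector docs have the authoritative DDL and options.

```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

object LakeSoulFlinkSqlSketch {
  def main(args: Array[String]): Unit = {
    // Streaming mode; switch to .inBatchMode() for a batch read/write job.
    val settings = EnvironmentSettings.newInstance().inStreamingMode().build()
    val tEnv = TableEnvironment.create(settings)

    // Register LakeSoul as a Flink catalog. The catalog type name
    // 'lakesoul' is an assumption; check the Flink Connector docs.
    tEnv.executeSql("CREATE CATALOG lakesoul WITH ('type'='lakesoul')")
    tEnv.executeSql("USE CATALOG lakesoul")

    // Changelog-semantics write: rows sharing a primary key are upserted,
    // and retractions from the source are applied as row-level deletes.
    tEnv.executeSql(
      """INSERT INTO user_behavior
        |SELECT user_id, item_id, action_time
        |FROM source_table""".stripMargin)
  }
}
```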


What's Changed

LakeSoul Release v2.2.0

31 Mar 08:33

v2.2.0 Release Notes

  1. Native IO is enabled by default for the Flink CDC sink and Spark SQL. Native IO is built on arrow-rs and DataFusion (https://github.com/apache/arrow-datafusion), with dedicated IO optimizations on top of the arrow-rs object store. Benchmarks show a 3x IO throughput improvement over parquet-mr and the Hadoop filesystem. Native IO supports both HDFS and S3 object storage (including S3-protocol-compatible storages), covers all data types in Spark and Flink, and has passed both TPC-H and CHBenchmark correctness tests.
  2. Snapshot read and incremental read support on Spark. LakeSoul's incremental read on Spark supports both batch mode and micro-batch streaming mode; a minimal sketch follows this list.
  3. The default supported Spark version has been upgraded to Spark 3.3.
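
A minimal Scala sketch of the two read modes. The option names ("readtype", "readstarttime", "readendtime") and the table path are assumptions for illustration; consult the LakeSoul docs for the exact options.

```scala
import org.apache.spark.sql.SparkSession

object LakeSoulReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("lakesoul-read").getOrCreate()
    val tablePath = "s3://bucket/lakesoul/user_behavior" // hypothetical path

    // Snapshot read: the table contents as of a point in time.
    val snapshot = spark.read.format("lakesoul")
      .option("readtype", "snapshot")               // assumed option names
      .option("readendtime", "2023-01-01 00:00:00")
      .load(tablePath)

    // Incremental read in batch mode: rows changed within a time range.
    // For micro-batch streaming mode, spark.readStream with the same
    // format would be the analogous entry point.
    val incremental = spark.read.format("lakesoul")
      .option("readtype", "incremental")
      .option("readstarttime", "2023-01-01 00:00:00")
      .option("readendtime", "2023-01-02 00:00:00")
      .load(tablePath)

    snapshot.show()
    incremental.show()
  }
}
```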


What's Changed

Full Changelog: https://github.com/meta-soul/LakeSoul/commits/v2.2.0

v2.1.1

18 Oct 05:45

What's Changed

This is a bug fix release for v2.1.0.

Fixed bugs:

Full Changelog: 2.1.0...v2.1.1

v2.1.0

12 Oct 09:45

v2.1.0 Release Notes

LakeSoul 2.1.0 brings a new Flink CDC sink implementation that supports syncing all tables (with different schemas) of an entire MySQL database in a single Flink job, with automatic schema sync and evolution, automatic new table creation, and an exactly-once guarantee. The currently supported Flink version is 1.14.

In 2.1.0 we also reimplemented the Spark catalog so that it can be used as a standalone catalog rather than a session catalog extension. This change avoids some inconsistencies in Spark's v2 table commands, e.g. SHOW TABLES does not support v2 tables until Spark 3.3. A configuration sketch follows.
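
For illustration, a standalone catalog is registered purely through Spark configuration, as in the sketch below; the implementation class name is an assumption and should be checked against the LakeSoul Spark docs.

```scala
import org.apache.spark.sql.SparkSession

object LakeSoulCatalogSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("lakesoul-catalog")
      // Register LakeSoul as a standalone v2 catalog. The class name is
      // an assumption; verify it against the LakeSoul docs.
      .config("spark.sql.catalog.lakesoul",
        "org.apache.spark.sql.lakesoul.catalog.LakeSoulCatalog")
      // Optionally make it the default catalog so unqualified table names
      // resolve to LakeSoul instead of the session catalog.
      .config("spark.sql.defaultCatalog", "lakesoul")
      .getOrCreate()

    spark.sql("SHOW TABLES").show() // now served by the standalone catalog
  }
}
```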

Packages for Spark and Flink are separated into two Maven submodules. The Maven coordinates are com.dmetasoul:lakesoul-spark:2.1.0-spark-3.1.2 and com.dmetasoul:lakesoul-flink:2.1.0-flink-1.14. All required transitive dependencies have been shaded into the released jars; the snippet below shows how to reference them.
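
For example, in an sbt build the two shaded artifacts can be referenced directly with the coordinates above:

```scala
// build.sbt: the shaded jars already bundle their transitive dependencies.
libraryDependencies ++= Seq(
  "com.dmetasoul" % "lakesoul-spark" % "2.1.0-spark-3.1.2",
  "com.dmetasoul" % "lakesoul-flink" % "2.1.0-flink-1.14"
)
```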


Merged Pull Requests

New Contributors

Full Changelog: https://github.com/meta-soul/LakeSoul/commits/2.1.0

v2.0.1-spark-3.1.2

08 Jul 07:56

What's Changed

v2.0.0-spark-3.1.2

01 Jul 08:38

1. Catalog refactoring

  1. Replaced the Cassandra protocol with the PostgreSQL protocol for the metadata store.
  2. Rewrote the metadata functions for table, partition, and data operations on the PG protocol, using PG's transaction mechanism for commit conflict detection to guarantee ACID properties.
  3. Bridged Spark and the metadata layer: Spark-related metadata operations are translated into the underlying interface, decoupling the upper compute platform from the underlying storage layer.

2. DDL

  1. Reworked the Spark SQL DDL statements (CREATE, ALTER, etc.); a minimal sketch follows this list.
  2. Reworked the DDL-related Spark DataFrame/Dataset operations (save, etc.).
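
A minimal sketch of the reworked DDL path. The provider name "lakesoul" follows LakeSoul's Spark SQL usage; the schema, path, and partitioning here are illustrative.

```scala
import org.apache.spark.sql.SparkSession

object LakeSoulDdlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("lakesoul-ddl").getOrCreate()

    // CREATE TABLE routed through the refactored catalog.
    spark.sql(
      """CREATE TABLE user_behavior (
        |  user_id BIGINT,
        |  item_id BIGINT,
        |  action_time TIMESTAMP,
        |  dt STRING
        |) USING lakesoul
        |PARTITIONED BY (dt)
        |LOCATION 's3://bucket/lakesoul/user_behavior'""".stripMargin)
  }
}
```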

3. Data Writing

  1. Reworked the Spark SQL DML statements (INSERT INTO, UPDATE, etc.).
  2. Reworked the DML-related Spark DataFrame/Dataset operations (the write function, etc.).
  3. Reworked the LakeSoulTable upsert function; a sketch follows this list.
  4. Reworked the LakeSoulTable compaction function, with support for mounting compacted data to Hive.
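
A short sketch of the upsert and compaction entry points. The LakeSoulTable class path and method signatures are assumptions based on the description above; check the docs for the exact API.

```scala
import com.dmetasoul.lakesoul.tables.LakeSoulTable
import org.apache.spark.sql.SparkSession

object LakeSoulUpsertSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("lakesoul-upsert").getOrCreate()

    // Hypothetical batch of updated rows carrying the table's primary keys.
    val updatesDf = spark.read.parquet("s3://bucket/staging/updates")

    val table = LakeSoulTable.forPath("s3://bucket/lakesoul/user_behavior")

    // Row-level upsert: matching primary keys are overwritten, new keys inserted.
    table.upsert(updatesDf)

    // Compact small files; per item 4 above, compacted data can also be
    // mounted to Hive.
    table.compaction()
  }
}
```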

4. Data Reading

  1. Reworked the ParquetScan variants: removed the write-version sorting mechanism and adapted to the new UUID file list format in the metadata.
  2. LakeSoulTable adds a snapshot read function to read historical content at a specified partition version.
  3. LakeSoulTable adds a history rollback function to roll a specified partition back to a given historical version; a sketch follows this list.
  4. Added and modified the default MergeOperator functions to make it easier for users to work with merge results.
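
A sketch of the partition rollback feature. The method name and arguments below are hypothetical stand-ins for the API described in item 3; the real entry point lives on LakeSoulTable.

```scala
import com.dmetasoul.lakesoul.tables.LakeSoulTable
import org.apache.spark.sql.SparkSession

object LakeSoulRollbackSketch {
  def main(args: Array[String]): Unit = {
    SparkSession.builder().appName("lakesoul-rollback").getOrCreate()

    val table = LakeSoulTable.forPath("s3://bucket/lakesoul/user_behavior")

    // Roll one partition back to an earlier version (hypothetical
    // method name and arguments).
    table.rollbackPartition("dt=2022-06-01", 3)
  }
}
```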