Releases: apache/linkis
Linkis 0.11.0 release (Not Apache Release)
0.11.0 is an important release led by MobTech with the help of WeBank.
This is the next release on the Linkis 0.10.0 base line. It contains 20 bug fixes, improvements, and enhancements since 0.10.0.
Enhancement
- [Linkis-363] Elasticsearch engine support for Linkis.
- [Linkis-380] Presto engine support for Linkis.
- [Linkis-PR-381] Support Presto resource management.
- [Linkis-294] Support the Spark cluster deploy mode.
- [Linkis-322] Support custom Spark configuration.
- [Linkis-353] Obtain only the YARN queues a user has permission on.
- [Linkis-PR-448] The gateway adds routing rules based on the datasource.
- [Linkis-323] The Spark engine log prints the ApplicationID.
- [Linkis-PR-348] Support loop replacement during parameter variable substitution.
- [Linkis-PR-512] Adapt PySpark tasks to Spark 3.0.
- [Linkis-PR-533] The Shell engine supports custom variables.
- [Linkis-PR-546] The Shell engine supports outputting result sets.
Bug Fix
- [Linkis-187] Fix the resource moreThan judgment logic.
- [Linkis-320] Fix an exception during LDAP authentication when the name is too long.
- [Linkis-454] Fix CDH Spark version detection.
- [Linkis-344] Fix JDBC tasks of different users interfering with each other.
- [Linkis-361] Fix result set writing in the JDBC engine.
- [Linkis-PR-348] Fix the java.library.path configuration in HiveQLProcessBuilder.
- [Linkis-PR-348] Fix sharing functions to the specified directory.
- [Linkis-PR-537] Enlarge the name field of the linkis_udf_tree table.
- [Linkis-489] Fix user cookie loss after integrating single sign-on.
- [Linkis-PR-538] Fix a token-verification bug when a third-party system integrates with Linkis using token authentication (see the sketch below).
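For context on [Linkis-PR-538]: in token mode, a third-party system calls Linkis through the gateway with a pre-configured token carried in HTTP headers. Below is a minimal sketch using only the JDK; the header names Token-Code/Token-User and the endpoint path are assumptions about the token mode and may differ in your deployment.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of calling a Linkis REST endpoint with token authentication.
// Header names and the endpoint path are assumptions; verify them against
// your gateway's token configuration.
public class LinkisTokenAuthExample {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://gateway-host:9001/api/rest_j/v1/user/userInfo");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Token-Code", "your-token"); // token registered with the gateway
        conn.setRequestProperty("Token-User", "hadoop");     // user the token acts on behalf of
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```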
Credits
The release of Linkis 0.11.0 would not have been possible without the contributors of the WeDataSphere community, who selflessly contributed their code and actively exchanged ideas with community partners. Thank you, all contributors! In no particular order:
wForget: Implemented the Elasticsearch engine and the gateway routing plugin, along with other enhancements and several bug fixes.
Yogaflre: Implemented the Presto engine and Presto resource management.
77954309: Added custom variable support to the Shell engine.
xfei6868: Adapted PySpark tasks to Spark 3.0.
hcl3039359: Enlarged the name field of the linkis_udf_tree table.
linweijiang: Fixed user cookie loss after integrating single sign-on.
cumtcwf: Fixed the token-verification bug.
zhangxhmuye: Added result set output support to the Shell engine.
0.11.0 is an important release led by MobTech, completed with the dedicated help of WeBank.
This is the next release based on Linkis 0.10.0. Since 0.10.0, this release contains 20 bug fixes, improvements, and enhancements.
Enhancement
- [Linkis-363] Added ElasticSearch engine support.
- [Linkis-380] Added Presto engine support.
- [Linkis-PR-381] Support Presto resource control.
- [Linkis-294] Support the Spark cluster mode.
- [Linkis-322] Support custom Spark configuration.
- [Linkis-353] Support fetching only the YARN queues a user has permission on.
- [Linkis-PR-448] The gateway adds routing rules based on the DataSource.
- [Linkis-323] The Spark engine log prints the ApplicationID.
- [Linkis-PR-348] Support loop replacement during parameter variable substitution.
- [Linkis-PR-512] Adapt PySpark tasks to Spark 3.0.
- [Linkis-PR-533] The Shell engine supports custom variables.
- [Linkis-PR-546] The Shell engine supports outputting result sets.
Bug Fix
- [Linkis-187] Fix the resource moreThan judgment logic.
- [Linkis-320] Fix an exception during LDAP authentication caused by an overly long name.
- [Linkis-454] Fix CDH Spark version detection.
- [Linkis-344] Fix JDBC tasks of different users interfering with each other.
- [Linkis-361] Fix result set writing in the JDBC engine.
- [Linkis-PR-348] Fix the java.library.path configuration in HiveQLProcessBuilder.
- [Linkis-PR-348] Fix sharing functions to the specified directory.
- [Linkis-PR-537] Enlarge the name field of the linkis_udf_tree table.
- [Linkis-489] Fix user cookie loss after integrating single sign-on.
- [Linkis-PR-538] Fix the token-verification bug when a third-party system integrates with Linkis using token authentication.
Credits
The release of Linkis 0.11.0 would not have been possible without the contributors of the WeDataSphere community, who selflessly contributed their code and actively exchanged ideas with community partners. Thank you, all contributors! In no particular order:
wForget: Implemented the ElasticSearch engine and the gateway routing plugin, along with several other features and bug fixes.
Yogaflre: Implemented the Presto engine and Presto resource control.
77954309: Added custom variable support to the Shell engine.
xfei6868: Adapted PySpark tasks to Spark 3.0.
hcl3039359: Enlarged the name field of the linkis_udf_tree table.
linweijiang: Fixed user cookie loss after integrating single sign-on.
cumtcwf: Fixed the token-verification bug.
zhangxhmuye: Added result set output support to the Shell engine.
Cloud Resources
Linkis installation package:
https://osp-1257653870.cos.ap-guangzhou.myqcloud.com/WeDatasphere/Linkis/0.11.0/wedatasphere-linkis-0.11.0-dist.tar.gz
Linkis 0.10.0 release (Not Apache Release)
0.10.0 is an important release led by Ihome with the help of WeBank.
This is the next release on the Linkis 0.9.4 base line. This release expands Linkis's containerization capabilities by introducing Linkis on Kubernetes, saving a great deal of time on compiling, excluding JARs, and adapting to new environments, and allowing Linkis to achieve a high SLA and satisfy 7*24 business requirements. It provides financial-grade reliability for everyone's big data platform.
It contains 1 bug fix plus improvements and enhancements since 0.9.4.
Enhancement
- [Linkis-440] Linkis on Kubernetes.
- [Linkis-440] Use a ConfigMap as the configuration server.
- [Linkis-440] Use ingress-nginx to guard Nginx HA.
- [Linkis-440] Use fluentd-es to collect microservice logs.
- [Linkis-440] Put front-end static files in Kubernetes pods.
- [Linkis-440] Adapt to the Kubernetes network model; register pod IPs with the registry center for service discovery.
- [Linkis-440] Use environment variables to pass the eurekaURL and JVM params.
- [Linkis-440] Remove implicit dependencies.
- [Linkis-440] Spark-engine pods use static routes and IP masquerade to communicate with Spark executors.
Bug Fix
- [Linkis-460] Fix the BML stored filename bug.
Credits
The release of Linkis 0.10.0 would not have been possible without the contributors of the WeDataSphere community, who selflessly contributed their code and actively exchanged ideas with community partners. Thank you, all contributors! In no particular order:
brianzhangrong: As the host, contributed the entire Linkis on Kubernetes plan.
irusist: Located the BML stored filename bug.
chongchuanbing: Removed implicit dependencies.
lidongzhangg: Used a ConfigMap as the configuration server.
hds1999: Used environment variables to pass the eurekaURL and JVM params.
Upgrade Wizard
This release mainly provides the Linkis on Kubernetes features. If you want to use the containerization capabilities of this release, please visit: Linkis Containerization Deployment Guide
Cloud Resource
- Linkis Docker image packages
URL: https://hub.docker.com/repository/docker/wedatasphere/linkis
Password: personal account and password
0.10.0 is an important release led by Ihome, completed with the dedicated help of WeBank.
This is the next release based on Linkis 0.9.4. This release expands Linkis's containerization capabilities by introducing Linkis on Kubernetes, saving a great deal of time on compiling, excluding JARs, and adapting to new environments, allowing Linkis to achieve a high SLA, satisfy 7*24 business requirements, and provide financial-grade reliability for everyone's big data platform.
Since 0.9.4, this release contains 1 bug fix plus improvements and enhancements.
Enhancement
- [Linkis-440] Linkis on Kubernetes.
- [Linkis-440] Use a Kubernetes ConfigMap as the configuration center.
- [Linkis-440] Use ingress-nginx to move Nginx into the Kubernetes cluster.
- [Linkis-440] Use fluentd-es to collect microservice logs.
- [Linkis-440] Put front-end static files inside the Kubernetes cluster.
- [Linkis-440] Adapt to the Kubernetes network model; register pod IPs with the registry center for service discovery.
- [Linkis-440] Use environment variables to pass the eurekaURL and JVM params.
- [Linkis-440] Remove each microservice's implicit dependency on the cloudmodule.
- [Linkis-440] Static routes plus IP masquerade to solve communication from Spark executors to the spark-engine.
Bug Fix
- [Linkis-460] Fix the duplicate BML filename defect.
Credits
The release of Linkis 0.10.0 would not have been possible without the contributors of the WeDataSphere community, who selflessly contributed their code and actively exchanged ideas with community partners. Thank you, all contributors! In no particular order:
brianzhangrong: As the host, contributed the entire Linkis on Kubernetes plan.
irusist: Located the BML stored filename bug.
chongchuanbing: Removed the implicit dependency on the cloud-module.
lidongzhangg: Moved linkis.properties into a ConfigMap, using the ConfigMap as the configuration center.
hds1999: Used environment variables to pass the eurekaUrl and JVM params to Docker.
Usage Guide
This release mainly provides the Linkis on Kubernetes features. If you want to use the containerization capabilities of this release, please visit: Linkis Containerization Deployment Guide
Cloud Resources
- Linkis Docker image packages
Link: https://hub.docker.com/repository/docker/wedatasphere/linkis
Password: personal account and password
Linkis 0.9.4 release (Not Apache Release)
Enhancement
- [Linkis-371] Add the client module of the unified context service.
- [Linkis-372] Support the search function of the unified context service.
- [Linkis-376] Add the server module of the unified context service.
- [Linkis-375] Support persistence of the unified context service.
- [Linkis-368] Add the caching mechanism and data structure module of the unified context service.
- [Linkis-428] Rework the Linkis execution module on the unified context service.
- [Linkis-432] Add the unified context service common module.
- [Linkis-434] Rework the Linkis-gateway module on the unified context service to support HA of the context service server.
- [Linkis-388] Optimize the http dependency module, remove the dispatch dependency, and optimize the dependencies of the underlying Linkis packages.
- [Linkis-395] Change the default ListenerBus queue to an ArrayBlockingQueue implementation.
- [Linkis-297] Add a common module for data source management.
- [Linkis-298] Add the MySQL database data source management module.
- [Linkis-299] Add the data source management server module.
- [Linkis-300] Add the Hive data source service module.
- [Linkis-301] Add the ElasticSearch data source service module.
- [Linkis-302] Add a common module for data source management.
- [Linkis-303] Add the data source management server module.
Bug Fix
- [Linkis-406] Adjust the directory structure of the data source module to optimize compilation.
- [Linkis-407] Fix the bug that the ds field is duplicated in wizard-built tables.
- [Linkis-383] Update the Linkis ujes client test classes to match Linkis updates.
- [Linkis-438] Fix Linkis 0.9.4 compilation problems.
Credits
The release of Linkis 0.9.4 would not have been possible without the contributors of the WeDataSphere community, who selflessly contributed their code and actively exchanged ideas with community partners. Thank you, all contributors! In no particular order:
liangqilang: Contributed the code replacing dispatch with httpclient.
Davidhua1996: Contributed the basic modules and feature implementations of data source management and metadata management.
zhengfan199525: Added Hive's data source management module.
alexzyWu: Contributed the server module of data source management.
jaminlu: Optimized the directory structure and compilation of the data source module.
bleachzk: Contributed the common module of data source management.
liaoyt: Added the REST interface for data source management.
SelfImpr001: Contributed part of data source management.
Enhancement
- [Linkis-371] Add the client module of the context service.
- [Linkis-372] Support the search function of the context service.
- [Linkis-376] Add the server module of the context service.
- [Linkis-375] Support persistence of the context service.
- [Linkis-368] Add the caching mechanism and data structure module of the context service.
- [Linkis-428] Rework the Linkis execution module on the context service.
- [Linkis-432] Add the context service common module.
- [Linkis-434] Rework the Linkis-gateway module on the context service to support HA of the context server.
- [Linkis-388] Optimize the http dependency module, remove the dispatch dependency, and optimize the dependencies of the underlying Linkis packages.
- [Linkis-395] Change the default ListenerBus queue to an ArrayBlockingQueue implementation.
- [Linkis-297] Add a common module for data source management.
- [Linkis-298] Add the MySQL database data source management module.
- [Linkis-299] Add the data source management server module.
- [Linkis-300] Add the Hive data source service module.
- [Linkis-301] Add the ElasticSearch data source service module.
- [Linkis-302] Add the data source management common module.
- [Linkis-303] Add the data source management server module.
Bug Fix
- [Linkis-406] Adjust the directory structure of the data source module to optimize compilation.
- [Linkis-407] Fix the bug that the ds field is duplicated in wizard-built tables.
- [Linkis-383] Update the Linkis ujes client test classes to match Linkis updates.
- [Linkis-438] Fix Linkis 0.9.4 compilation problems.
Credits
The release of Linkis 0.9.4 would not have been possible without the contributors of the WeDataSphere community, who selflessly contributed their code and actively exchanged ideas with community partners. Thank you, all contributors! In no particular order:
liangqilang: Contributed the code replacing dispatch with httpclient.
Davidhua1996: Contributed the basic modules and feature implementations of data source management and metadata management.
zhengfan199525: Added Hive's data source management module.
alexzyWu: Contributed the server module of data source management.
jaminlu: Optimized the directory structure and compilation of data source management.
bleachzk: Contributed the common module of data source management.
liaoyt: Added the REST interface for data source management.
SelfImpr001: Contributed part of data source management.
0.9.3 (Not Apache Release)
Enhancement
- [Linkis-260] One-click deployment enhancements.
- [Linkis-283] Added the Linkis shell engine type, with support for submitting shell scripts.
- [Linkis-277] Support multiple Hive versions, including hive 1.x, 2.x, 3.x, etc.
- [Linkis-254] Added support for the JDBC script type in the WorkSpace module, for creating and opening JDBC-type scripts.
- [Linkis-249] Added JDBC engine parameter configuration information to the database.
- [Linkis-243] Added a user control module supporting customized user login and user registration services.
- [Linkis-271] Removed unrelated hive and http dependencies from the metadata module.
- [Linkis-256] Removed legacy code from the metadata module.
Bug Fix
- [Linkis-258] Fix a null pointer exception when reading JDBC engine configuration parameters from the Linkis console.
- [Linkis-252] Fix the JDBC engine entry parser and engine manager not being registered.
- [Linkis-248] Fix a null pointer thrown by checkConsumerHealthy when RMEventConsumer is not registered in RMConsumerListener.
- [Linkis-246] Add schema information to the StorageResultSetWriter toString method.
- [Linkis-236] Fix configuration not being updated during publicservice installation.
- [Linkis-238] Fix an exception introduced by the jsr311-api dependency package.
- [Linkis-281] Resolve Jasper package conflicts that caused service interface 404 issues.
- [Linkis-273] Fix StackOverflow (SOF) issues caused by the SQL comment-removal regular expression.
- [Linkis-291] Fix the shell engine script type definition error and remove useless dependencies.
Credits
Last but not least, this release would not have been possible without the following contributors:
nimuyuhan: Fixed SOF issues caused by the SQL comment-removal regular expression.
wForget: Multi-version Hive support (hive 1.x, 2.x, 3.x, etc.) and multiple bug fixes.
Enhancement
- [Linkis-260] One-click deployment enhancements.
- [Linkis-283] Added the Linkis shell engine type, with support for submitting shell scripts.
- [Linkis-277] Multi-version support for the Hive engine, including hive 1.x, 2.x, 3.x, etc.
- [Linkis-254] Added support for the JDBC script type in the WorkSpace module, for creating and opening JDBC-type scripts.
- [Linkis-249] Added JDBC engine parameter configuration information to the database.
- [Linkis-243] Added a user control module supporting custom user login and user registration services.
- [Linkis-271] Removed unrelated hive and http dependencies from the metadata module.
- [Linkis-256] Removed legacy code unrelated to the logic from the metadata module.
Bug Fix
- [Linkis-258] Fix a null pointer exception when reading JDBC engine configuration parameters from the Linkis console.
- [Linkis-252] Fix the JDBC engine entry parser and engine manager not being registered.
- [Linkis-248] Fix a null pointer thrown by checkConsumerHealthy when RMEventConsumer is not registered in RMConsumerListener.
- [Linkis-246] Add schema information to the StorageResultSetWriter toString method.
- [Linkis-236] Fix configuration not being updated during publicservice installation.
- [Linkis-238] Fix an exception introduced by the jsr311-api dependency package.
- [Linkis-281] Resolve Jasper package conflicts that caused service interface 404 issues.
- [Linkis-273] Fix SOF issues caused by the SQL comment-removal regular expression.
- [Linkis-291] Fix the shell engine script type definition error and remove useless dependencies.
Credits
Last but not least, this release would not have been possible without the following contributors. Thank you, all contributors! Listed in no particular order (alphabetical):
nimuyuhan: Fixed the SOF issue in the SQL comment-removal regular expression.
wForget: Added multi-version Hive support and multiple bug fixes.
Cloud Resources
We provide a DSS + Linkis + Qualitis + Visualis + Azkaban one-click deployment package. Since the package is large (1.3 GB) and downloads from GitHub are slow, please obtain it via:
Baidu Cloud:
- Baidu Cloud link: https://pan.baidu.com/s/1hmxuJtyY72D5X_dZoQIE5g
- Password: p82h
Tencent Cloud:
- Tencent Cloud link: https://share.weiyun.com/5vpLr9t
- Password: upqgib
Linkis installation packages:
- Tencent Cloud link: https://share.weiyun.com/5Gjz0zU
- Password: 9vctqg
- Baidu Cloud link: https://pan.baidu.com/s/1uuogWgLE9r8EcGROkRNeKg
- Password: pwbz
0.9.2 (Not Apache Release)
Enhancement
- [Linkis-193] Deployment enhancements: modify the install.sh script to support both distributed and stand-alone installations.
- [Linkis-194] Dependency check: check the required Linux commands before installing, and exit directly if a command is missing.
- [Linkis-195] Service detection: the start-all.sh script now checks whether all services start normally.
- [Linkis-191] Move hadoop dependencies from core/linkis-common to the core/hadoop-common module.
- [Linkis-192] Move httpclient dependencies from core/linkis-common to the core/linkis-httpclient module.
- [Linkis-196] Change the default ports of microservices to start from 9100.
- [Linkis-197] Unify memory configuration and change the default memory to 512m.
- [Linkis-198] Add parameter configuration for ResourceManager requests to YARN's REST interface (see the sketch after this list).
- [Linkis-199] Unify the start scripts of microservices.
- [Linkis-200] Compile commonly depended-on modules separately into one directory to reduce the size of the installation package.
- [Linkis-201] Add the hadoop/spark/hive configuration directories to the Linkis configuration file by default.
- [Linkis-208] Add version checks to the install.sh script.
- [Linkis-221] linkis-RM supports more hadoop versions.
- [Linkis-174] Add Hive PostgreSQL metadata support.
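For reference on [Linkis-198]: the YARN ResourceManager exposes cluster information over a REST interface of the kind Linkis-RM queries. The minimal JDK-only sketch below hits YARN's standard /ws/v1/cluster/metrics endpoint; rm-host:8088 is a placeholder for your ResourceManager web address.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Queries the YARN ResourceManager REST API that Linkis-RM relies on.
// Replace rm-host:8088 with your ResourceManager web address.
public class YarnRestExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://rm-host:8088/ws/v1/cluster/metrics");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON with cluster metrics such as totalMB and allocatedMB
            }
        }
    }
}
```

Queue details live under the sibling /ws/v1/cluster/scheduler endpoint, which is what the queue-related enhancements in this release concern.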
Bug Fix
- [Linkis-175] Change YARN's default queue name from ide to default.
- [Linkis-202] Delete the HDFS dependency of errorCodeManager.
- [Linkis-205] Canonicalize SQL comments.
- [Linkis-210] Fix the BML HDFS permissions issue.
- [Linkis-218] Remove the yum command from install.sh and start-all.sh.
- [Linkis-226] Exclude Jackson packages in the metadata module.
- [Linkis-229] Exclude Jasper packages in public-module.
Credits
Last but not least, this release would not have been possible without the following contributors:
allwefantasy: Added the MLSQL engine.
chenxi0599: Fixed a filesystem error printed in logs.
wForget: Provided a release version of Linkis usable on CDH 5.7.6.
hj2016: Fixed a bug caused by the maxEffectiveCapacity parameter being absent when YARN fetches the maximum resources under Capacity scheduling.
houjunxiong: Enhanced metadata support for PostgreSQL as Hive's metastore database type.
Just-do-it-Fan: Enhanced ResourceManager module support for YARN's Capacity scheduling.
liangqilang: Fixed the NullPointerException thrown when downloading files via httpclient, added a streaming result set download feature, and more.
leisore: Fixed start and stop script naming issues.
nimuyuhan: Fixed the bug that configuration files did not support Chinese.
patinousward: Fixed multiple bugs and added a table creation function in the metadata module.
zhanghaicheng1: Added deployment and installation troubleshooting documentation.
Enhancement
- [Linkis-193] Deployment enhancement: the script automatically detects whether the deployment is stand-alone or distributed; stand-alone deployment and startup no longer use SSH and SCP.
- [Linkis-194] Environment check: before installing Linkis, check all required Linux commands and environments; if the check fails, exit the installation directly.
- [Linkis-195] Service detection: the start-all script checks whether services start normally and prints error logs if a service is abnormal.
- [Linkis-191] Dependency optimization: move hadoop-related utility classes from the common core/common module into a separate core/hadoop-common module to reduce package conflicts.
- [Linkis-192] Dependency optimization: the core/common module no longer depends on httpclient-related jars; they are moved into the core/linkis-httpclient module.
- [Linkis-196] Uniformly allocate the ports of all Linkis services, starting from 9100.
- [Linkis-197] Linkis supports the SERVER_HEAP_SIZE environment variable to uniformly specify the heap size of all microservices.
- [Linkis-198] Besides automatically obtaining YARN's JMX URL by reading yarn-site.xml, Linkis-RM adds a way to obtain it through parameter configuration.
- [Linkis-199] Unify the start scripts of microservices to ease future unified modification.
- [Linkis-200] Put commonly depended-on modules into public-module to reduce package conflicts and the size of the installation package.
- [Linkis-201] Write the hadoop/hive/spark environment variables into each microservice's configuration file by default.
- [Linkis-208] Add hadoop/hive/spark version compatibility checks to the install script; if versions are incompatible, the user is asked whether to continue the installation.
- [Linkis-221] Enhance the Linkis-RM module so RM can obtain YARN queue information for more hadoop versions.
- [Linkis-174] Enhance the metadata module to support accessing a PostgreSQL-based Hive metastore.
Bug Fix
- [Linkis-175] Change YARN's default queue from ide to default.
- [Linkis-202] Remove the Entrance error code feature's dependency on HDFS, so the lite version can run entirely without hadoop.
- [Linkis-205] SQL comment optimization: add a space after --.
- [Linkis-210] Fix the permissions issue when BML accesses HDFS.
- [Linkis-218] Remove the yum command from the start and install scripts for compatibility with more Unix systems.
- [Linkis-226] Optimize the metadata module so the imported hive packages exclude the Jackson dependency, reducing the risk of package conflicts.
- [Linkis-229] Exclude the Jasper packages from the public-module common dependencies, reducing the risk of package conflicts.
- [Linkis-143] Add support for Chinese in Linkis configuration files.
Baidu Cloud download:
Click to open Baidu Cloud. Extraction code: iyya
Credits
Last but not least, this release would not have been possible without the following contributors. Thank you, all contributors!
Listed in no particular order (alphabetical):
allwefantasy: Added the MLSQL engine.
chenxi0599: Fixed a filesystem error printed in logs.
hj2016: Fixed a bug caused by the maxEffectiveCapacity parameter being absent when YARN fetches the maximum resources under Capacity scheduling.
houjunxiong: Enhanced metadata support for PostgreSQL as Hive's metastore database type.
Just-do-it-Fan: Added ResourceManager module support for YARN's Capacity scheduling.
leisore: Fixed start and stop script naming issues.
liangqilang: Fixed the NullPointerException thrown when downloading files via httpclient, added a streaming result set download feature, and more.
nimuyuhan: Fixed the bug that configuration files did not support Chinese.
patinousward: Fixed multiple bugs and added a table creation function in the metadata module.
wForget: Provided a release version of Linkis usable on CDH 5.7.6.
zhanghaicheng1: Added deployment and installation troubleshooting documentation.
0.9.1 (Not Apache Release)
Baidu Cloud download. Extraction code: iyya
Guide for upgrading Linkis from 0.9.0 to 0.9.1
Enhancement
- [Linkis-123] Add the BML module, a big data material library that allows users to upload and download resources.
- [Linkis-62] External systems can access Linkis through a proxy user.
- [Linkis-69] Support dividing publicService into multiple microservices.
- [Linkis-112] Metadata module enhancements: support custom table creation functions, etc.
- [Linkis-120] Workspace module enhancements: support saving and parsing script files in BML.
- [Linkis-72] Enhance the upload capability of Linkis httpClient.
- [Linkis-109] Linkis-ujes-client module feature optimization.
- [Linkis-111] Modify Spring's basePackages to com.webank.wedatasphere.
- [Linkis-116] Enhanced installation and deployment scripts.
- [Linkis-138] Enhanced adaptation to more Spark versions.
- [Linkis-139] Enhanced Linkis support for Kerberos; users can log in with or without a host in the principal (see the sketch after this list).
- [Linkis-100] Compatible with YARN's Fair and Capacity scheduling.
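To illustrate [Linkis-139]: logging in "with host" means a principal like hadoop/host@REALM, while "without host" means hadoop@REALM. The sketch below uses Hadoop's standard UserGroupInformation keytab login, which is how such logins are typically performed; the principal and keytab path are placeholders, and the exact wiring inside Linkis may differ.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Minimal sketch of a Kerberos keytab login with Hadoop's standard API.
// The principal and keytab path are placeholders for your environment.
public class KerberosLoginExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // "with host":    hadoop/node1.example.com@EXAMPLE.COM
        // "without host": hadoop@EXAMPLE.COM
        UserGroupInformation.loginUserFromKeytab(
                "hadoop/node1.example.com@EXAMPLE.COM",
                "/etc/security/keytabs/hadoop.keytab");
        System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
    }
}
```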
Bug Fix
- [Linkis-71] Fix Reflections in DWSHttpMessageFactory not specifying a class loader, which caused classes to fail to load.
- [Linkis-74] Fix garbled DB scripts causing installation errors.
- [Linkis-87] Fix UJES-Client failing to fetch logs.
- [Linkis-108] Fix the engine result set default path containing an IP and port, which caused path creation on HDFS to fail.
- [Linkis-129] Fix Spark execution of scala code throwing: Class org.apache.hadoop.net.StandardSocketFactory not found.
- [Linkis-132] Fix JobHistory not returning the source field.
- [Linkis-134] Restrict access to the JobHistory interface.
Enhancement
- [Linkis-123] Add the material library (BML) module: system- and user-level material management that can be shared and circulated, with automatic management of the full material lifecycle.
- [Linkis-62] Support external systems accessing Linkis through proxy mode.
- [Linkis-69] Linkis supports splitting publicservice into multiple microservices.
- [Linkis-112] Metadata module enhancements: add a user wizard for table creation.
- [Linkis-120] Workspace module enhancements: support parsing and saving BML scripts.
- [Linkis-72] Enhance the upload capability of Linkis httpClient.
- [Linkis-109] Linkis-ujes-client module enhancements: support passing engineType and runType strings directly.
- [Linkis-111] Change the package scan in the startup classes to com.webank.wedatasphere.
- [Linkis-116] Optimize the installation and startup scripts.
- [Linkis-138] Support compilation against multiple Spark versions.
- [Linkis-139] Optimize Kerberos support: access with or without a host in the principal.
- [Linkis-100] Compatible with YARN's Fair and Capacity scheduling.
Bug Fix
- [Linkis-71] Fix Reflections in DWSHttpMessageFactory not specifying a class loader, which caused classes to fail to load.
- [Linkis-74] Fix the garbled DB scripts caused by compilation.
- [Linkis-87] Fix ujes-client failing to fetch logs.
- [Linkis-108] Fix the engine result set default path containing an IP and port, which caused path creation on HDFS to fail.
- [Linkis-129] Fix the Spark engine reporting Class org.apache.hadoop.net.StandardSocketFactory not found when executing scala.
- [Linkis-132] Fix JobHistory not returning the source field.
- [Linkis-134] Fix the JobHistory interface lacking permission isolation.
Baidu Cloud download:
Click to open Baidu Cloud. Extraction code: iyya
0.9.0 (Not Apache Release)
Enhancement
- [Linkis-41] Support connecting to Linkis via JDBC (see the sketch after this list).
- [Linkis-49] JDBC engine support for Linkis, to easily connect to TiDB, MySQL, PostgreSQL, etc.
- [Linkis-47] Actively load the user env in install.sh and start-all.sh.
- [Linkis-44] Create the HDFS root directory and local directory for the deployment user in the install.sh script.
- [Linkis-38] Add a danger prompt identifier to the import-table-data phase of install.sh.
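A minimal sketch of [Linkis-41], connecting to Linkis as if it were an ordinary JDBC database. The driver class name and the jdbc:linkis:// URL scheme shown here are assumptions based on the 0.x package layout (com.webank.wedatasphere.linkis); verify them against the driver jar of your version.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of connecting to Linkis over JDBC.
// Driver class and jdbc:linkis:// scheme are assumptions for the 0.x line.
public class LinkisJdbcExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:linkis://gateway-host:9001", "hadoop", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("show tables")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```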
Bug Fix
- [Linkis-55] Fix AbstractHttpClient not cleaning up its thread pool on close.
- [Linkis-48] Fix the wrong command in stop-all.sh: change stop-database.sh to stop-metadata.sh.
- [Linkis-45] Fix a NullPointerException when the linkis.properties file does not exist.
- [Linkis-43] Fix Windows line breaks that cannot be used in a Linux environment.
Enhancement
- [Linkis-41] Provide a JDBC driver package, supporting access to Linkis via JDBC.
- [Linkis-49] Provide a JDBC engine that can directly connect to relational databases such as TiDB, MySQL, and PostgreSQL.
- [Linkis-47] Automatically load the user's system environment variables when executing install.sh and start-all.sh.
- [Linkis-44] Automatically create the HDFS user root directory for the installation user during installation.
- [Linkis-38] Add a danger prompt identifier to the import-table-data phase of install.sh.
Bug Fix
- [Linkis-55] Fix AbstractHttpClient not cleaning up its thread pool on close.
- [Linkis-48] Fix the wrong command in stop-all.sh: change stop-database.sh to stop-metadata.sh.
- [Linkis-45] Fix a NullPointerException when the linkis.properties file does not exist.
- [Linkis-43] Fix Windows line breaks that cannot be used in a Linux environment.
0.8.0 (Not Apache Release)
Enhancement
- Support SSO.
- Job result sets are now all written on the engine side.
- Result set downloading is now controlled by parameters; unlimited downloads are allowed.
- Rename the query module to jobHistory.
- Rename the database module to metadata.
Bug Fix
- Fix a bug in identifying multi-level parent-child inheritance relationships between YARN queues.
- Add a default value for spark.driver.cores when it does not exist in spark-default.conf.
- Fix a bug in Spark job progress reporting.
- Fix a NullPointerException when printing the result set in logs, caused by the result set being a path.
Enhancement
- Support SSO single sign-on.
- Result sets are now all written to HDFS by the engine side.
- The result set download feature, previously limited to 5000 rows, is now controlled by a parameter and allows unlimited downloads.
- Rename the query module to jobHistory.
- Rename the database module to metadata.
Bug Fix
- Fix the bug that fetching multi-level YARN queues reported that the queue does not exist.
- Default spark.driver.cores to 1 when spark-default.conf does not specify it.
- Fix the Spark progress bug, making Spark progress more real-time and accurate.
- Fix the NullPointerException caused when printing result set information in logs if the result set is a file.
0.7.0 (Not Apache Release)
Enhancement
- Support for Spark 2.0 and above.
- install.sh optimization: divided into lite, simple, and standard versions; key installation steps verify success.
Bug Fix
- Fix compatibility issues with SQL table creation statements.
- Fix the problem that the Spark engine executes a use database statement after it starts.
Enhancement
- Support all Spark versions above 2.0.
- install.sh script optimization: divided into lite, simple, and standard versions; key installation steps verify success.
Bug Fix
- Fix compatibility issues with SQL table creation statements.
- Fix the problem that the Spark engine executes a use database statement after it starts.
0.6.0 (Not Apache Release)
Enhancement
- Enhanced the database URL connection configuration: HIVE_META_URL is now read from the config first, and hive-site.xml is only consulted if it is absent (see the sketch below).
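A hypothetical sketch of the lookup order this item describes: the configured HIVE_META_URL wins, and hive-site.xml is consulted only as a fallback. The class and method names below are illustrative rather than the actual Linkis code; javax.jdo.option.ConnectionURL, however, is the standard hive-site key for the metastore database URL.

```java
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Illustrative only: resolve the Hive metastore DB URL as described above,
// preferring the explicit config value and falling back to hive-site.xml.
public class HiveMetaUrlResolver {
    static String resolveHiveMetaUrl(Properties linkisConfig, String hiveSitePath) {
        String url = linkisConfig.getProperty("HIVE_META_URL");
        if (url != null && !url.isEmpty()) {
            return url; // explicit configuration takes precedence
        }
        Configuration hiveSite = new Configuration(false);
        hiveSite.addResource(new Path(hiveSitePath));
        return hiveSite.get("javax.jdo.option.ConnectionURL"); // standard hive-site key
    }
}
```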
Bug Fix
- Removed some incompatible code for printing the status of the Hive engine.
- Resolved the compatibility issue when fetching queue information from the YARN ResourceManager scheduler.
- Resolved the distributed deployment issue of install.sh, which failed to start up PublicService.
- Resolved the issue that some microservices failed to print logs in some environments.
- Optimized the import grammar of ddl.sql.
- Adapted to CDH versions of Hadoop and Hive; resolved the issue that Restful requests failed to be received.
Enhancement
- Enhanced MetaDataQuery's database URL connection configuration: it now reads HIVE_META_URL from the config first, and only reads hive-site.xml if it is absent.
Bug Fix
- Removed some incompatible code for printing progress in the Hive engine.
- Resolved the compatibility issue when ResourceManager accesses YARN Scheduler queue information.
- Resolved the distributed deployment issue of install.sh, which failed to start up PublicService.
- Resolved the issue that some microservices failed to print logs in some environments.
- Optimized the database import grammar of ddl.sql.
- Adapted to CDH versions of Hadoop and Hive, resolving the issue that Restful requests were not received.