Commit

fix
DongLiang-0 committed Jan 2, 2025
1 parent 21568f4 commit af7d808
Showing 2 changed files with 26 additions and 26 deletions.
@@ -149,7 +149,7 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
```

Operating the connector
```
```shell
# View connector status
curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/status -X GET
# Delete the current connector
@@ -169,7 +169,7 @@ curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/tasks/0/restart
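
Beyond checking its status and deleting it, the connector can also be paused and resumed through the same Kafka Connect REST interface. A short sketch follows; these are standard Kafka Connect REST endpoints and are not part of the diff above:

```shell
# Pause the connector: its tasks stop consuming, but committed offsets are kept
curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/pause -X PUT
# Resume a previously paused connector
curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/resume -X PUT
```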

### Access an SSL-certified Kafka cluster
Accessing an SSL-certified Kafka cluster through kafka-connect requires the user to provide a certificate file (client.truststore.jks) used to authenticate the Kafka Broker public key. You can add the following configuration in the `connect-distributed.properties` file:
```
```properties
# Connect worker
security.protocol=SSL
ssl.truststore.location=/var/ssl/private/client.truststore.jks
@@ -185,7 +185,7 @@ consumer.ssl.truststore.password=test1234

### Dead letter queue
By default, any error encountered during conversion or within transformations will cause the connector to fail. Each connector configuration can also tolerate such errors by skipping them, optionally writing the details of each error and failed operation, as well as the records in question (with varying levels of detail), to a dead-letter queue for logging.
```
```properties
errors.tolerance=all
errors.deadletterqueue.topic.name=test_error_topic
errors.deadletterqueue.context.headers.enable=true
@@ -268,7 +268,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the column data type

1. Import data sample<br />
In Kafka, there is the following sample data
```
```bash
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-data-topic --from-beginning
{"user_id":1,"name":"Emily","age":25}
{"user_id":2,"name":"Benjamin","age":35}
@@ -284,7 +284,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the column data type

2. Create the table that needs to be imported<br />
In Doris, create the table to be imported. The syntax is as follows:
```
```sql
CREATE TABLE test_db.test_kafka_connector_tbl(
user_id BIGINT NOT NULL COMMENT "user id",
name VARCHAR(20) COMMENT "name",
@@ -296,7 +296,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the column data type

3. Create an import task<br />
On the machine where Kafka-connect is deployed, submit the following import task through the curl command
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"test-doris-sink-cluster",
"config":{
@@ -321,7 +321,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the column data type

### Load data collected by Debezium components
1. The MySQL database has the following table
```
```sql
CREATE TABLE test.test_user (
user_id int NOT NULL ,
name varchar(20),
@@ -335,7 +335,7 @@ insert into test.test_user values(3,'wangwu',22);
```

2. Create the imported table in Doris
```
```sql
CREATE TABLE test_db.test_user(
user_id BIGINT NOT NULL COMMENT "user id",
name VARCHAR(20) COMMENT "name",
@@ -347,7 +347,7 @@ insert into test.test_user values(3,'wangwu',22);
3. Deploy the Debezium connector for MySQL component, refer to: [Debezium connector for MySQL](https://debezium.io/documentation/reference/stable/connectors/mysql.html)
4. Create the doris-kafka-connector import task<br />
Assume that the MySQL table data collected through Debezium is in the `mysql_debezium.test.test_user` Topic
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"test-debezium-doris-sink",
"config":{
@@ -373,7 +373,7 @@ insert into test.test_user values(3,'wangwu',22);
```

### Load Avro serialized data
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"doris-avro-test",
"config":{
@@ -400,7 +400,7 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
```

### Load Protobuf serialized data
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"doris-protobuf-test",
"config":{
@@ -428,7 +428,7 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X

## FAQ
**1. The following error occurs when reading JSON data:**
```
```shell
Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
@@ -446,7 +446,7 @@ Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with sch
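
The rest of this answer is collapsed in the diff, but the error message itself names the remedy: disable `schemas.enable` on the JSON converter. A minimal sketch, assuming the worker is configured through `connect-distributed.properties` (the path is an assumption; a per-connector `value.converter.schemas.enable` override works as well):

```shell
# Sketch only: append the converter overrides to the worker config, then restart the workers.
# $KAFKA_HOME and the config location are assumptions; adjust them to your deployment.
cat >> "$KAFKA_HOME/config/connect-distributed.properties" <<'EOF'
key.converter.schemas.enable=false
value.converter.schemas.enable=false
EOF
```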

**2. Consumption times out and the consumer is kicked out of the consumer group:**

```
```shell
org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1318)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.doCommitOffsetsAsync(ConsumerCoordinator.java:1127)
26 changes: 13 additions & 13 deletions versioned_docs/version-1.2/ecosystem/doris-kafka-connector.md
@@ -148,7 +148,7 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
```

Operating the connector
```
```shell
# View connector status
curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/status -X GET
# Delete connector
@@ -168,7 +168,7 @@ Note that when kafka-connect is started for the first time, three topics `config
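
The same Kafka Connect REST interface also supports pausing and resuming a connector. The sketch below uses the standard Kafka Connect REST endpoints, which are not shown in the diff above:

```shell
# Pause the connector: tasks stop consuming while committed offsets are retained
curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/pause -X PUT
# Resume the paused connector
curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/resume -X PUT
```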

### Access an SSL-certified Kafka cluster
Accessing an SSL-certified Kafka cluster through kafka-connect requires the user to provide a certificate file (client.truststore.jks) used to authenticate the Kafka Broker public key. You can add the following configuration in the `connect-distributed.properties` file:
```
```properties
# Connect worker
security.protocol=SSL
ssl.truststore.location=/var/ssl/private/client.truststore.jks
@@ -184,7 +184,7 @@ For instructions on configuring a Kafka cluster connected to SSL authentication

### Dead letter queue
By default, any error encountered during conversion or within transformations will cause the connector to fail. Each connector configuration can also tolerate such errors by skipping them, optionally writing the details of each error and failed operation, as well as the records in question (with varying levels of detail), to a dead-letter queue for logging.
```
```properties
errors.tolerance=all
errors.deadletterqueue.topic.name=test_error_topic
errors.deadletterqueue.context.headers.enable=true
@@ -268,7 +268,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the colu

1. Import data sample<br />
In Kafka, there is the following sample data
```
```shell
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-data-topic --from-beginning
{"user_id":1,"name":"Emily","age":25}
{"user_id":2,"name":"Benjamin","age":35}
@@ -284,7 +284,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the colu

2. Create the table that needs to be imported<br />
In Doris, create the table to be imported. The syntax is as follows:
```
```sql
CREATE TABLE test_db.test_kafka_connector_tbl(
user_id BIGINT NOT NULL COMMENT "user id",
name VARCHAR(20) COMMENT "name",
@@ -296,7 +296,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the colu

3. Create an import task<br />
On the machine where Kafka-connect is deployed, submit the following import task through the curl command
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"test-doris-sink-cluster",
"config":{
@@ -321,7 +321,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the colu

### Load data collected by Debezium components
1. The MySQL database has the following table
```
```sql
CREATE TABLE test.test_user (
user_id int NOT NULL ,
name varchar(20),
@@ -335,7 +335,7 @@ insert into test.test_user values(3,'wangwu',22);
```

2. Create the imported table in Doris
```
```sql
CREATE TABLE test_db.test_user(
user_id BIGINT NOT NULL COMMENT "user id",
name VARCHAR(20) COMMENT "name",
@@ -347,7 +347,7 @@ insert into test.test_user values(3,'wangwu',22);
3. Deploy the Debezium connector for MySQL component, refer to: [Debezium connector for MySQL](https://debezium.io/documentation/reference/stable/connectors/mysql.html)
4. Create doris-kafka-connector import task<br />
Assume that the MySQL table data collected through Debezium is in the `mysql_debezium.test.test_user` Topic
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"test-debezium-doris-sink",
"config":{
@@ -373,7 +373,7 @@ insert into test.test_user values(3,'wangwu',22);
```

### Load Avro serialized data
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"doris-avro-test",
"config":{
@@ -400,7 +400,7 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
```

### Load Protobuf serialized data
```
```shell
curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
"name":"doris-protobuf-test",
"config":{
@@ -428,7 +428,7 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X

## FAQ
**1. The following error occurs when reading JSON data:**
```
```shell
Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
@@ -446,7 +446,7 @@ This is because using the `org.apache.kafka.connect.json.JsonConverter` converte
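
The remainder of this answer is collapsed in the diff; the error message itself points to the fix, which is to set `schemas.enable=false` for the JSON converter. A minimal sketch, assuming a `connect-distributed.properties` worker config (the path is assumed; the per-connector `value.converter.schemas.enable` override is an alternative):

```shell
# Sketch only: disable schema expectation in the worker config and restart the workers.
# $KAFKA_HOME and the file location are assumptions; adjust them to your deployment.
cat >> "$KAFKA_HOME/config/connect-distributed.properties" <<'EOF'
key.converter.schemas.enable=false
value.converter.schemas.enable=false
EOF
```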

**2. Consumption times out and the consumer is kicked out of the consumer group:**

```
```shell
org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:1318)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.doCommitOffsetsAsync(ConsumerCoordinator.java:1127)
