20 commits
5d97f46
[Feature][Connector-redis] fix redis cluster bug and add cluster e2e
Sep 16, 2025
120a2d3
Merge branch 'apache:dev' into redis-fix-bug-add-cluster-e2e
JeremyXin Sep 16, 2025
466a78d
[Feature][Connector-redis] fix code style
Sep 16, 2025
5bd90da
Merge remote-tracking branch 'origin/redis-fix-bug-add-cluster-e2e' i…
Sep 16, 2025
21658da
[Feature][Connector-redis] fix code style
Sep 16, 2025
9dc1794
[Feature][Connector-redis] fix ci error
Sep 17, 2025
3532ccf
[Feature][Connector-redis] update `getCustomKey` and cluster scan method
Sep 18, 2025
8ebcb90
Merge branch 'apache:dev' into redis-fix-bug-add-cluster-e2e
JeremyXin Sep 22, 2025
11a9628
[Feature][Connector-redis] add Jedis cache in cluster mode
Sep 25, 2025
65b3536
[Feature][Connector-redis] use `${}` as a uniform placeholder
Sep 26, 2025
72873c7
[Feature][Connector-redis] use lazy mode to initialize JedisWrapper a…
Oct 4, 2025
86e2ecb
Merge branch 'dev' into redis-fix-bug-add-cluster-e2e
JeremyXin Oct 4, 2025
836872c
[Feature][Connector-redis] fix ci error
Oct 4, 2025
e3a0500
[Feature][Connector-redis] fix ci error
Oct 5, 2025
06924ae
Merge branch 'apache:dev' into dev
JeremyXin Oct 9, 2025
673d109
Merge branch 'apache:dev' into redis-fix-bug-add-cluster-e2e
JeremyXin Oct 9, 2025
76b5a70
Merge branch 'dev' into redis-fix-bug-add-cluster-e2e
Oct 9, 2025
7162406
Merge remote-tracking branch 'origin/redis-fix-bug-add-cluster-e2e' i…
Oct 9, 2025
cd2fb91
[Feature][Connector-redis] revise e2e max retry value
Oct 9, 2025
fb9d65a
Retrigger workflow
Oct 9, 2025
14 changes: 12 additions & 2 deletions docs/en/connector-v2/sink/Redis.md
@@ -32,6 +32,7 @@ Used to write data to Redis.
| value_field | string | no | - |
| hash_key_field | string | no | - |
| hash_value_field | string | no | - |
| field_delimiter | string | no | ',' |
| common-options | | no | - |

### host [string]
@@ -119,7 +120,7 @@ redis nodes information, used in cluster mode, must like as the following format

### format [string]

The format of upstream data, now only support `json`, `text` will be supported later, default `json`.
The format of upstream data. Currently the `json` and `text` formats are supported; the default is `json`.

When you assign format is `json`, for example:

@@ -134,9 +135,18 @@ Connector will generate data as the following and write it to redis:
```json

{"code": 200, "data": "get success", "success": "true"}
```

When you set the format to `text` and `field_delimiter` to `#`, the connector will generate data as follows and write it to Redis:
```text
200#get success#true
```

### field_delimiter [string]

Field delimiter, used to tell the connector how to split fields.

It only needs to be configured when the format is `text`. The default is `,`.
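
Putting these options together, a minimal sink sketch for the `text` format might look like the following (a hypothetical illustration: the connection values, the `key` name, and the `data_type` choice are placeholders, not taken from the original examples):

```hocon
Redis {
  host = localhost
  port = 6379
  key = "person"
  data_type = key
  format = "text"
  # each row is serialized as its field values joined by '#'
  field_delimiter = "#"
}
```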

### expire [long]

Set redis expiration time, the unit is second. The default value is -1, keys do not automatically expire by default.
@@ -219,7 +229,7 @@ custom key:
Redis {
host = localhost
port = 6379
key = "name:{name}"
key = "name:${name}"
support_custom_key = true
data_type = key
}
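
For instance (an illustrative assumption, not taken from the original document), if an upstream row has a `name` field with the value `John`, the connector would write the key `name:John` to Redis.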
34 changes: 29 additions & 5 deletions docs/en/connector-v2/source/Redis.md
@@ -37,6 +37,7 @@ Used to read data from Redis.
| schema | config | yes when format=json | - |
| format | string | no | json |
| single_field_name | string | yes when read_key_enabled=true | - |
| field_delimiter | string | no | ',' |
| common-options | | no | - |

### host [string]
@@ -252,21 +253,44 @@ connector will generate data as the following:
| ---- | ----------- | ------- |
| 200 | get success | true |

when you assign format is `text`, connector will do nothing for upstream data, for example:
When you set the format to `text`, you can choose whether or not to specify the schema information.

upstream data is the following:

```json
{"code": 200, "data": "get success", "success": true}
```

For example, upstream data is the following:

```text
200#get success#true
```

If you do not assign a data schema, the connector will treat the upstream data as follows:

| content |
| -------------------------------------------------------- |
| 200#get success#true |

If you assign a data schema, you should also set the `schema` and `field_delimiter` options as follows:

```hocon
field_delimiter = "#"
schema {
  fields {
    code = int
    data = string
    success = boolean
  }
}
```
@zhangshenghang (Member), Sep 17, 2025:

I have a question. Text format has been applied to Redis here. However, it is still not supported in HTTP. Do we need to modify the HTTP connector separately in the future? @Hisoka-X

https://github.com/apache/seatunnel/blob/02c7eb3177989bcd50ba6c1059862c1586d3fa39/docs/en/connector-v2/source/Http.md?plain=1#L176C1-L177C1

Reply (Member):

Yes, we can do the same thing in HTTP. LocalFile already did this.

connector will generate data as the following:

| content |
| -------------------------------------------------------- |
| {"code": 200, "data": "get success", "success": true} |

### field_delimiter [string]

Field delimiter, used to tell the connector how to split fields.

It only needs to be configured when the format is `text`. The default is `,`.
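
Putting these options together, a minimal source sketch for the `text` format might look like the following (a hypothetical illustration: the connection values are placeholders, and `keys` / `data_type` are assumed from the full option table):

```hocon
Redis {
  host = localhost
  port = 6379
  keys = "key_test*"
  data_type = key
  format = "text"
  # each value is split on '#' into the columns declared in schema
  field_delimiter = "#"
  schema {
    fields {
      code = int
      data = string
      success = boolean
    }
  }
}
```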

### schema [config]

#### fields [config]
49 changes: 31 additions & 18 deletions docs/zh/connector-v2/sink/Redis.md
@@ -17,22 +17,23 @@ import ChangeLog from '../changelog/connector-redis.md';
| name | type | required | default value |
|--------------------|---------|-----------------------|---------------|
| host | string | `mode=single`时必须 | - |
| port | int | no | 6379 |
| key | string | yes | - |
| data_type | string | yes | - |
| batch_size | int | no | 10 |
| user | string | no | - |
| auth | string | no | - |
| db_num | int | no | 0 |
| mode | string | no | single |
| nodes | list | yes when mode=cluster | - |
| format | string | no | json |
| expire | long | no | -1 |
| support_custom_key | boolean | no | false |
| value_field | string | no | - |
| hash_key_field | string | no | - |
| hash_value_field | string | no | - |
| common-options | | no | - |
| port | int | 否 | 6379 |
| key | string | 是 | - |
| data_type | string | 是 | - |
| batch_size | int | 否 | 10 |
| user | string | 否 | - |
| auth | string | 否 | - |
| db_num | int | 否 | 0 |
| mode | string | 否 | single |
| nodes | list | `mode=cluster`时必须 | - |
| format | string | 否 | json |
| expire | long | 否 | -1 |
| support_custom_key | boolean | 否 | false |
| value_field | string | 否 | - |
| hash_key_field | string | 否 | - |
| hash_value_field | string | 否 | - |
| field_delimiter | string | 否 | "," |
| common-options | | 否 | - |

### host [string]

@@ -114,7 +115,7 @@ Redis 节点信息,在集群模式下使用,必须按如下格式:

### format [string]

上游数据的格式,目前只支持 `json`,以后会支持 `text`,默认 `json`。
上游数据的格式,目前支持 `json` 和 `text`,默认 `json`。

当你指定格式为 `json` 时,例如:

@@ -130,6 +131,18 @@ Redis 节点信息,在集群模式下使用,必须按如下格式:
{"code": 200, "data": "获取成功", "success": "true"}
```

当你指定 format 为 `text`,并设置 field_delimiter 为 `#` 时,连接器将生成如下数据并将其写入 Redis:

```text
200#get success#true
```

### field_delimiter [string]

字段分隔符,用于告诉连接器如何分割字段。

仅当格式为 `text` 时需要配置。默认为 `,`。


### expire [long]

设置 Redis 的过期时间,单位为秒。默认值为 -1,表示键不会自动过期。
@@ -210,7 +223,7 @@ Redis {
Redis {
host = localhost
port = 6379
key = "name:{name}"
key = "name:${name}"
Member:

I think we should do some special case for redis to make sure legacy behavior work fine too.

Contributor Author:

Done. In the getCustomKey method, placeholders in the old version format are compatible.

support_custom_key = true
data_type = key
}
34 changes: 29 additions & 5 deletions docs/zh/connector-v2/source/Redis.md
@@ -34,6 +34,7 @@ import ChangeLog from '../changelog/connector-redis.md';
| nodes | list | `mode=cluster` 时必须 | - |
| schema | config | `format=json` 时必须 | - |
| format | string | 否 | json |
| field_delimiter | string | 否 | ',' |
| common-options | | 否 | - |

### host [string]
@@ -203,21 +204,44 @@ schema {
| ---- | ----------- | ------- |
| 200 | get success | true |

当指定格式为 `text` 时,连接器不会对上游数据做任何处理,例如:
当指定格式为 `text` 时,可以选择是否指定 schema 参数。

当上游数据如下时:

```json
{"code": 200, "data": "get success", "success": true}
```

例如,当上游数据如下时:

```text
200#get success#true
```

如果不指定 schema 参数,连接器将按照以下方式处理上游数据:

| content |
| -------------------------------------------------------- |
| 200#get success#true |

如果指定 schema 参数,此时需要同时配置 `schema` 和 `field_delimiter`,如下所示:

```hocon
field_delimiter = "#"
schema {
  fields {
    code = int
    data = string
    success = boolean
  }
}
```

连接器将会生成如下格式数据
连接器将生成如下数据

| content |
| -------------------------------------------------------- |
| {"code": 200, "data": "get success", "success": true} |

### field_delimiter [string]

字段分隔符,用于告诉连接器如何分割字段。

仅当格式为 `text` 时需要配置。默认为 `,`。

### schema [config]

#### fields [config]
6 changes: 6 additions & 0 deletions seatunnel-connectors-v2/connector-redis/pom.xml
@@ -47,6 +47,12 @@
<version>${project.version}</version>
</dependency>

<dependency>
<groupId>org.apache.seatunnel</groupId>
<artifactId>seatunnel-format-text</artifactId>
<version>${project.version}</version>
</dependency>

<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
@@ -30,7 +30,7 @@
import java.util.Map;
import java.util.Set;

public abstract class RedisClient extends Jedis {
public abstract class RedisClient {

protected final RedisParameters redisParameters;

@@ -62,14 +62,14 @@ private ScanResult<String> scanByRedisVersion(
if (redisVersion <= REDIS_5) {
return scanOnRedis5(cursor, scanParams, type);
} else {
return jedis.scan(cursor, scanParams, type.name());
return scanKeyResult(cursor, scanParams, type);
}
}

// When the version is earlier than redis5, scan command does not support type
private ScanResult<String> scanOnRedis5(
String cursor, ScanParams scanParams, RedisDataType type) {
ScanResult<String> scanResult = jedis.scan(cursor, scanParams);
ScanResult<String> scanResult = scanKeyResult(cursor, scanParams, null);
String resultCursor = scanResult.getCursor();
List<String> keys = scanResult.getResult();
List<String> typeKeys = new ArrayList<>(keys.size());
@@ -82,6 +82,15 @@ private ScanResult<String> scanOnRedis5(
return new ScanResult<>(resultCursor, typeKeys);
}

public void close() {
if (jedis != null) {
jedis.close();
}
}

public abstract ScanResult<String> scanKeyResult(
String cursor, ScanParams scanParams, RedisDataType type);

public abstract List<String> batchGetString(List<String> keys);

public abstract List<List<String>> batchGetList(List<String> keys);
Loading