Commit 7f8ad52

Hisoka-X (jia zhang) authored and jia zhang committed
[Improve][Doc] Unify the header format and fix some documents with abnormal formats (apache#9159)
1 parent 54780d0 commit 7f8ad52


77 files changed: +173, -169 lines

Diff for: docs/en/connector-v2/sink-common-options.md (+1, -1)

@@ -22,7 +22,7 @@ When the job configuration `plugin_input` you must set the `plugin_output` param
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This is the process of passing a data source through two transforms and returning two different pipiles to different sinks
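
For context, the `plugin_output`/`plugin_input` pair named in the hunk header is what wires a registered source table to downstream transforms and sinks. A minimal sketch of that wiring, with an illustrative table name (not taken from the file):

```hocon
source {
  FakeSource {
    plugin_output = "fake_table"  # register this source's output under a table name
  }
}

sink {
  Console {
    plugin_input = "fake_table"   # consume the table registered above
  }
}
```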

Diff for: docs/en/connector-v2/sink/Cloudberry.md (+2, -2)

@@ -64,7 +64,7 @@ Key options include:
 
 ## Task Example
 
-### Simple:
+### Simple
 
 ```hocon
 env {
@@ -114,7 +114,7 @@ sink {
 }
 ```
 
-### Exactly-once:
+### Exactly-once
 
 ```hocon
 sink {

Diff for: docs/en/connector-v2/sink/Console.md (+2, -2)

@@ -34,7 +34,7 @@ Used to send data to Console. Both support streaming and batch mode.
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This is a randomly generated data, written to the console, with a degree of parallelism of 1
 
@@ -63,7 +63,7 @@ sink {
 }
 ```
 
-### Multiple Sources Simple:
+### Multiple Sources Simple
 
 > This is a multiple source and you can specify a data source to write to the specified end
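
The "Simple" example retitled here is the canonical FakeSource-to-Console job the quoted line describes. A minimal sketch of that shape, assuming the usual FakeSource options (row count and schema are illustrative, not copied from the file):

```hocon
env {
  parallelism = 1    # degree of parallelism of 1, as the quoted text describes
  job.mode = "BATCH"
}

source {
  FakeSource {
    plugin_output = "fake"
    row.num = 16     # illustrative row count
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

sink {
  Console {
    plugin_input = "fake"
  }
}
```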

Diff for: docs/en/connector-v2/sink/DB2.md (+2, -2)

@@ -88,7 +88,7 @@ semantics (using XA transaction guarantee).
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table is test_table will also be 16 rows of data in the table. Before run this job, you need create database test and table test_table in your DB2. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
 
@@ -153,7 +153,7 @@ sink {
 }
 ```
 
-### Exactly-once :
+### Exactly-once
 
 > For accurate write scene we guarantee accurate once
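
The job the quoted text describes is a FakeSource feeding the generic JDBC sink. A minimal sketch of the sink side under stated assumptions (host, port, and credentials are placeholders; the DB2 URL and driver class are the standard ones, not copied from the file):

```hocon
sink {
  Jdbc {
    url = "jdbc:db2://localhost:50000/test"   # placeholder host/port/database
    driver = "com.ibm.db2.jcc.DB2Driver"      # standard DB2 JDBC driver class
    user = "root"
    password = "123456"
    query = "insert into test_table(name, age) values(?, ?)"
  }
}
```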

Diff for: docs/en/connector-v2/sink/Doris.md (+2, -2)

@@ -176,7 +176,7 @@ Otherwise, if you enable the 2pc by the property `sink.enable-2pc=true`.The `sin
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > The following example describes writing multiple data types to Doris, and users need to create corresponding tables downstream
 
@@ -234,7 +234,7 @@ sink {
 }
 ```
 
-### CDC(Change Data Capture) Event:
+### CDC(Change Data Capture) Event
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Doris Sink,FakeSource simulates CDC data with schema, score (int type),Doris needs to create a table sink named test.e2e_table_sink and a corresponding table for it.

Diff for: docs/en/connector-v2/sink/Feishu.md (+1, -1)

@@ -52,7 +52,7 @@ Used to launch Feishu web hooks using data.
 
 ## Task Example
 
-### Simple:
+### Simple
 
 ```hocon
 Feishu {

Diff for: docs/en/connector-v2/sink/HdfsFile.md (+1, -1)

@@ -86,7 +86,7 @@ Output data to hdfs file
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Hdfs.

Diff for: docs/en/connector-v2/sink/Iceberg.md (+3, -3)

@@ -83,7 +83,7 @@ libfb303-xxx.jar
 
 ## Task Example
 
-### Simple:
+### Simple
 
 ```hocon
 env {
@@ -128,7 +128,7 @@ sink {
 }
 ```
 
-### Hive Catalog:
+### Hive Catalog
 
 ```hocon
 sink {
@@ -154,7 +154,7 @@ sink {
 }
 ```
 
-### Hadoop catalog:
+### Hadoop catalog
 
 ```hocon
 sink {

Diff for: docs/en/connector-v2/sink/Kafka.md (+1, -1)

@@ -99,7 +99,7 @@ This function by `MessageContentPartitioner` class implements `org.apache.kafka.
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Kafka Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target topic is test_topic will also be 16 rows of data in the topic. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
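
The job the quoted text outlines writes the 16 FakeSource rows to the test_topic topic. A minimal sketch of the sink side, assuming a local broker (the broker address is a placeholder):

```hocon
sink {
  kafka {
    topic = "test_topic"
    bootstrap.servers = "localhost:9092"  # placeholder broker address
    format = json                         # serialize each row as JSON
  }
}
```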

Diff for: docs/en/connector-v2/sink/Kingbase.md (+1, -1)

@@ -85,7 +85,7 @@ import ChangeLog from '../changelog/connector-jdbc.md';
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends
 > it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having 12 fields. The final target table is test_table will also be 16 rows of data in the table.

Diff for: docs/en/connector-v2/sink/Kudu.md (+1, -1)

@@ -57,7 +57,7 @@ import ChangeLog from '../changelog/connector-kudu.md';
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > The following example refers to a FakeSource named "kudu" cdc write kudu table "kudu_sink_table"

Diff for: docs/en/connector-v2/sink/Mysql.md (+2, -2)

@@ -99,7 +99,7 @@ semantics (using XA transaction guarantee).
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table is test_table will also be 16 rows of data in the table. Before run this job, you need create database test and table test_table in your mysql. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
 
@@ -164,7 +164,7 @@ sink {
 }
 ```
 
-### Exactly-once :
+### Exactly-once
 
 > For accurate write scene we guarantee accurate once
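
The "Exactly-once" heading retitled above covers the XA-transaction mode the hunk header mentions ("using XA transaction guarantee"). A minimal sketch of what enabling it looks like on the MySQL JDBC sink, assuming the connector's exactly-once options (connection details are placeholders):

```hocon
sink {
  Jdbc {
    url = "jdbc:mysql://localhost:3306/test"  # placeholder connection details
    driver = "com.mysql.cj.jdbc.Driver"
    max_retries = 0                           # retries are left to the XA transaction
    user = "root"
    password = "123456"
    query = "insert into test_table(name, age) values(?, ?)"
    is_exactly_once = "true"                  # turn on XA-based exactly-once semantics
    xa_data_source_class_name = "com.mysql.cj.jdbc.MysqlXADataSource"
  }
}
```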

Diff for: docs/en/connector-v2/sink/Neo4j.md (+2)

@@ -118,6 +118,8 @@ sink {
 
 ## WriteBatchExample
 > The unwind keyword provided by cypher supports batch writing, and the default variable for a batch of data is batch. If you write a batch write statement, then you should declare cypher:unwind $batch as row to do someting
+
+
 ```
 sink {
 Neo4j {
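
The quoted text explains that Cypher's `unwind` keyword enables batch writes, with `batch` as the default variable name for a batch of rows. A sketch of a sink block built around such a statement (connection options, label, and fields are illustrative, not copied from the file):

```hocon
sink {
  Neo4j {
    uri = "neo4j://localhost:7687"  # placeholder connection details
    username = "neo4j"
    password = "neo4j"
    database = "neo4j"
    # unwind the batch variable and create one node per row
    query = "unwind $batch as row create(n:MyLabel) set n.name = row.name, n.age = row.age"
  }
}
```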

Diff for: docs/en/connector-v2/sink/OceanBase.md (+1, -1)

@@ -98,7 +98,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table is test_table will also be 16 rows of data in the table. Before run this job, you need create database test and table test_table in your mysql. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

Diff for: docs/en/connector-v2/sink/Oracle.md (+2, -2)

@@ -98,7 +98,7 @@ semantics (using XA transaction guarantee).
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table is test_table will also be 16 rows of data in the table. Before run this job, you need create database test and table test_table in your Oracle. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
 
@@ -162,7 +162,7 @@ sink {
 }
 ```
 
-### Exactly-once :
+### Exactly-once
 
 > For accurate write scene we guarantee accurate once

Diff for: docs/en/connector-v2/sink/PostgreSql.md (+2, -2)

@@ -142,7 +142,7 @@ When data_save_mode selects CUSTOM_PROCESSING, you should fill in the CUSTOM_SQL
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table is test_table will also be 16 rows of data in the table. Before run this job, you need create database test and table test_table in your PostgreSQL. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
 
@@ -208,7 +208,7 @@ sink {
 }
 ```
 
-### Exactly-once :
+### Exactly-once
 
 > For accurate write scene we guarantee accurate once

Diff for: docs/en/connector-v2/sink/Pulsar.md (+1, -1)

@@ -132,7 +132,7 @@ Source plugin common parameters, please refer to [Source Common Options](../sink
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Pulsar Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target topic is test_topic will also be 16 rows of data in the topic. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

Diff for: docs/en/connector-v2/sink/Redshift.md (+1, -1)

@@ -60,7 +60,7 @@ semantics (using XA transaction guarantee).
 
 ## Task Example
 
-### Simple:
+### Simple
 
 ```
 sink {

Diff for: docs/en/connector-v2/sink/S3File.md (+1, -1)

@@ -314,7 +314,7 @@ The encoding of the file to write. This param will be parsed by `Charset.forName
 
 ## Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to S3File Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target s3 dir will also create a file and all of the data in write in it.
 > Before run this job, you need create s3 path: /seatunnel/text. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
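
The quoted text asks the reader to create the s3 path /seatunnel/text before running. A minimal sketch of a matching sink block (bucket name, endpoint, and credentials provider are placeholders, not copied from the file):

```hocon
sink {
  S3File {
    bucket = "s3a://seatunnel-test"                      # placeholder bucket
    path = "/seatunnel/text"                             # the path the quoted text says to create
    fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"   # placeholder endpoint
    fs.s3a.aws.credentials.provider = "com.amazonaws.auth.InstanceProfileCredentialsProvider"
    file_format_type = "text"
  }
}
```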

Diff for: docs/en/connector-v2/sink/SelectDB-Cloud.md (+1, -1)

@@ -79,7 +79,7 @@ The supported formats include CSV and JSON
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > The following example describes writing multiple data types to SelectDBCloud, and users need to create corresponding tables downstream

Diff for: docs/en/connector-v2/sink/Slack.md (+1, -1)

@@ -36,7 +36,7 @@ All data types are mapped to string.
 
 ## Task Example
 
-### Simple:
+### Simple
 
 ```hocon
 sink {

Diff for: docs/en/connector-v2/sink/Snowflake.md (+1, -1)

@@ -77,7 +77,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
 >
 ## Task Example
 
-### simple:
+### simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table is test_table will also be 16 rows of data in the table. Before run this job, you need create database test and table test_table in your snowflake database. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

Diff for: docs/en/connector-v2/sink/SqlServer.md (+1, -1)

@@ -96,7 +96,7 @@ semantics (using XA transaction guarantee).
 
 ## Task Example
 
-### simple:
+### simple
 
 > This is one that reads Sqlserver data and inserts it directly into another table

Diff for: docs/en/connector-v2/sink/StarRocks.md (+1, -1)

@@ -165,7 +165,7 @@ The supported formats include CSV and JSON
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > The following example describes writing multiple data types to StarRocks, and users need to create corresponding tables downstream

Diff for: docs/en/connector-v2/sink/Vertica.md (+2, -2)

@@ -96,7 +96,7 @@ semantics (using XA transaction guarantee).
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table is test_table will also be 16 rows of data in the table. Before run this job, you need create database test and table test_table in your vertical. And if you have not yet installed and deployed SeaTunnel, you need to follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md) to install and deploy SeaTunnel. And then follow the instructions in [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
 
@@ -161,7 +161,7 @@ sink {
 }
 ```
 
-### Exactly-once :
+### Exactly-once
 
 > For accurate write scene we guarantee accurate once

Diff for: docs/en/connector-v2/source-common-options.md (+1, -1)

@@ -23,7 +23,7 @@ When the job configuration `plugin_output` you must set the `plugin_input` param
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This registers a stream or batch data source and returns the table name `fake_table` at registration

Diff for: docs/en/connector-v2/source/Cloudberry.md (+3, -3)

@@ -67,7 +67,7 @@ Cloudberry supports parallel reading following the same rules as PostgreSQL conn
 
 ## Task Example
 
-### Simple:
+### Simple
 
 ```hocon
 env {
@@ -90,7 +90,7 @@ sink {
 }
 ```
 
-### Parallel reading with table_path:
+### Parallel reading with table_path
 
 ```hocon
 env {
@@ -114,7 +114,7 @@ sink {
 }
 ```
 
-### Multiple table read:
+### Multiple table read
 
 ```hocon
 env {
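
The three retitled headings cover a plain read, a parallel read driven by `table_path`, and a multi-table read. A sketch of the middle case, assuming Cloudberry is reached through the PostgreSQL driver per the hunk header's note that it follows the PostgreSQL connector's rules (connection details and split size are illustrative):

```hocon
source {
  Jdbc {
    url = "jdbc:postgresql://localhost:5432/testdb"  # placeholder connection details
    driver = "org.postgresql.Driver"
    user = "dbadmin"
    password = "password"
    table_path = "testdb.public.my_table"  # let the connector discover and split the table
    split.size = 10000                     # rows per split, read in parallel
  }
}
```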

Diff for: docs/en/connector-v2/source/DB2.md (+3, -3)

@@ -88,7 +88,7 @@ Read external data source data through JDBC.
 
 ## Task Example
 
-### Simple:
+### Simple
 
 > This example queries type_bin 'table' 16 data in your test "database" in single parallel and queries all of its fields. You can also specify which fields to query for final output to the console.
 
@@ -119,7 +119,7 @@ sink {
 }
 ```
 
-### Parallel:
+### Parallel
 
 > Read your query table in parallel with the shard field you configured and the shard data You can do this if you want to read the whole table
 
@@ -141,7 +141,7 @@ source {
 }
 ```
 
-### Parallel Boundary:
+### Parallel Boundary
 
 > It is more efficient to specify the data within the upper and lower bounds of the query It is more efficient to read your data source according to the upper and lower boundaries you configured
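
The "Parallel Boundary" heading retitled above refers to bounding the shard column so only the configured range is scanned. A sketch of such a source block, assuming the usual JDBC partition options (bounds, shard column, and connection details are illustrative):

```hocon
source {
  Jdbc {
    url = "jdbc:db2://localhost:50000/test"  # placeholder connection details
    driver = "com.ibm.db2.jcc.DB2Driver"
    user = "root"
    password = "123456"
    query = "select * from type_bin"
    partition_column = "id"       # shard field used to split the query
    partition_lower_bound = 1     # only rows within these bounds are read
    partition_upper_bound = 500
    partition_num = 10            # number of parallel splits
  }
}
```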
