
[Improve][Doc] Unify the header format and fix some documents with abnormal formats #9159


Merged (1 commit) on Apr 14, 2025
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink-common-options.md
@@ -22,7 +22,7 @@ When the job configuration `plugin_input` you must set the `plugin_output` param

## Task Example

-### Simple:
+### Simple

> This is the process of passing a data source through two transforms and returning two different pipelines to different sinks

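For context, the elided example in this file is the kind of job sketched below: one FakeSource fanned out through two transforms into two Console sinks, wired together with `plugin_output`/`plugin_input`. This is a minimal sketch; the table names and SQL queries are illustrative, not taken from the diff.

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FakeSource {
    plugin_output = "fake"
    row.num = 16
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

transform {
  # First pipeline: keep only the name field
  Sql {
    plugin_input = "fake"
    plugin_output = "fake_name"
    query = "select name from fake"
  }
  # Second pipeline: keep only the age field
  Sql {
    plugin_input = "fake"
    plugin_output = "fake_age"
    query = "select age from fake"
  }
}

sink {
  Console {
    plugin_input = "fake_name"
  }
  Console {
    plugin_input = "fake_age"
  }
}
```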
4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/Cloudberry.md
@@ -64,7 +64,7 @@ Key options include:

## Task Example

-### Simple:
+### Simple

```hocon
env {
@@ -114,7 +114,7 @@ sink {
}
```

-### Exactly-once:
+### Exactly-once

```hocon
sink {
```
4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/Console.md
@@ -34,7 +34,7 @@ Used to send data to Console. Supports both streaming and batch mode.

## Task Example

-### Simple:
+### Simple

> This is randomly generated data, written to the console, with a parallelism of 1

@@ -63,7 +63,7 @@ sink {
}
```

-### Multiple Sources Simple:
+### Multiple Sources Simple

> This is a multiple-source example; you can route each data source to its specified sink

4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/DB2.md
@@ -88,7 +88,7 @@ semantics (using XA transaction guarantee).

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your DB2. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

@@ -153,7 +153,7 @@ sink {
}
```

-### Exactly-once :
+### Exactly-once

> For scenarios that require accurate writes, we guarantee exactly-once semantics

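For reference, a minimal sketch of the exactly-once JDBC sink configuration this section describes, assuming a local DB2 and illustrative credentials; `is_exactly_once` and `xa_data_source_class_name` are the JDBC sink options involved.

```hocon
sink {
  jdbc {
    url = "jdbc:db2://127.0.0.1:50000/testdb"
    driver = "com.ibm.db2.jcc.DB2Driver"
    user = "root"
    password = "123456"
    query = "insert into test_table(name, age) values(?, ?)"

    # Exactly-once is implemented on top of XA transactions,
    # so the XA data source class of the target database is required
    is_exactly_once = "true"
    xa_data_source_class_name = "com.ibm.db2.jcc.DB2XADataSource"
  }
}
```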
4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/Doris.md
@@ -176,7 +176,7 @@ Otherwise, if you enable the 2pc by the property `sink.enable-2pc=true`.The `sin

## Task Example

-### Simple:
+### Simple

> The following example describes writing multiple data types to Doris, and users need to create corresponding tables downstream

@@ -234,7 +234,7 @@ sink {
}
```

-### CDC(Change Data Capture) Event:
+### CDC(Change Data Capture) Event

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Doris Sink. FakeSource simulates CDC data with a schema field score (int type). Doris requires a sink table named test.e2e_table_sink to be created for it in advance.

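A minimal sketch of the CDC-style job that blurb describes, assuming FakeSource's row-kind syntax for simulating change events; host names, credentials, and field values are placeholders.

```hocon
source {
  FakeSource {
    schema = {
      fields {
        pk_id = bigint
        score = int
      }
    }
    # Simulated CDC stream: insert a row, update it, then delete it
    rows = [
      { kind = INSERT, fields = [1, 100] },
      { kind = UPDATE_AFTER, fields = [1, 90] },
      { kind = DELETE, fields = [1, 90] }
    ]
  }
}

sink {
  Doris {
    fenodes = "doris_fe:8030"
    username = "root"
    password = ""
    database = "test"
    table = "e2e_table_sink"
    sink.label-prefix = "test_cdc"
    sink.enable-2pc = "true"
    # Needed so DELETE events are propagated to Doris
    sink.enable-delete = "true"
    doris.config {
      format = "json"
      read_json_by_line = "true"
    }
  }
}
```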
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Feishu.md
@@ -52,7 +52,7 @@ Used to launch Feishu web hooks using data.

## Task Example

-### Simple:
+### Simple

```hocon
Feishu {
```
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/HdfsFile.md
@@ -86,7 +86,7 @@ Output data to hdfs file

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Hdfs.

6 changes: 3 additions & 3 deletions docs/en/connector-v2/sink/Iceberg.md
@@ -83,7 +83,7 @@ libfb303-xxx.jar

## Task Example

-### Simple:
+### Simple

```hocon
env {
@@ -128,7 +128,7 @@ sink {
}
```

-### Hive Catalog:
+### Hive Catalog

```hocon
sink {
@@ -154,7 +154,7 @@ sink {
}
```

-### Hadoop catalog:
+### Hadoop catalog

```hocon
sink {
```
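A sketch of the Hadoop-catalog variant, assuming the `iceberg.catalog.config` layout used elsewhere in this connector's docs; the warehouse path and names are placeholders.

```hocon
sink {
  Iceberg {
    catalog_name = "seatunnel_test"
    iceberg.catalog.config = {
      type = "hadoop"
      warehouse = "hdfs://your-cluster/tmp/seatunnel/iceberg/"
    }
    namespace = "seatunnel_namespace"
    table = "iceberg_sink_table"
    iceberg.table.write-props = {
      # Default data file format for this table
      write.format.default = "parquet"
    }
  }
}
```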
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Kafka.md
@@ -99,7 +99,7 @@ This function by `MessageContentPartitioner` class implements `org.apache.kafka.

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Kafka Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target topic, test_topic, will also contain 16 rows of data. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

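The elided job body is along these lines; a sketch with a placeholder broker address and topic.

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FakeSource {
    plugin_output = "fake"
    row.num = 16
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

sink {
  kafka {
    plugin_input = "fake"
    topic = "test_topic"
    bootstrap.servers = "localhost:9092"
    # Serialize each row as a JSON message
    format = json
  }
}
```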
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Kingbase.md
@@ -85,7 +85,7 @@ import ChangeLog from '../changelog/connector-jdbc.md';

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends
> it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having 12 fields. The final target table, test_table, will also contain 16 rows of data.
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Kudu.md
@@ -57,7 +57,7 @@ import ChangeLog from '../changelog/connector-kudu.md';

## Task Example

-### Simple:
+### Simple

> The following example uses a FakeSource named "kudu" to write CDC data to the Kudu table "kudu_sink_table"

4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/Mysql.md
@@ -99,7 +99,7 @@ semantics (using XA transaction guarantee).

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your MySQL. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

@@ -164,7 +164,7 @@ sink {
}
```

-### Exactly-once :
+### Exactly-once

> For scenarios that require accurate writes, we guarantee exactly-once semantics

2 changes: 2 additions & 0 deletions docs/en/connector-v2/sink/Neo4j.md
@@ -118,6 +118,8 @@

## WriteBatchExample
> The unwind keyword provided by Cypher supports batch writing, and the default variable for a batch of data is batch. If you write a batch-write statement, you should declare `unwind $batch as row` to do something
+
+
```
sink {
Neo4j {
```
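A sketch of a complete batch-write sink built on `unwind`; the URI, credentials, and Cypher statement are illustrative.

```hocon
sink {
  Neo4j {
    uri = "neo4j://localhost:7687"
    username = "neo4j"
    password = "neo4j"
    database = "neo4j"
    max_batch_size = 1000
    write_mode = "BATCH"
    # "batch" is the default variable holding one batch of rows
    query = "unwind $batch as row create (n:Person) set n.name = row.name, n.age = row.age"
  }
}
```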
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/OceanBase.md
@@ -98,7 +98,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your OceanBase. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/Oracle.md
@@ -98,7 +98,7 @@ semantics (using XA transaction guarantee).

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your Oracle. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

@@ -162,7 +162,7 @@ sink {
}
```

-### Exactly-once :
+### Exactly-once

> For scenarios that require accurate writes, we guarantee exactly-once semantics

4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/PostgreSql.md
@@ -142,7 +142,7 @@ When data_save_mode selects CUSTOM_PROCESSING, you should fill in the CUSTOM_SQL

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your PostgreSQL. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

@@ -208,7 +208,7 @@ sink {
}
```

-### Exactly-once :
+### Exactly-once

> For scenarios that require accurate writes, we guarantee exactly-once semantics

2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Pulsar.md
@@ -132,7 +132,7 @@ Source plugin common parameters, please refer to [Source Common Options](../sink

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to Pulsar Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target topic, test_topic, will also contain 16 rows of data. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Redshift.md
@@ -60,7 +60,7 @@ semantics (using XA transaction guarantee).

## Task Example

-### Simple:
+### Simple

```
sink {
```
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/S3File.md
@@ -314,7 +314,7 @@ The encoding of the file to write. This param will be parsed by `Charset.forName

## Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to S3File Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). A file will be created in the target S3 dir and all of the data will be written into it.
> Before running this job, you need to create the S3 path /seatunnel/text. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.
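For context, a sketch of the S3File sink those notes describe, with placeholder bucket, endpoint, and credentials.

```hocon
sink {
  S3File {
    bucket = "s3a://seatunnel-test"
    path = "/seatunnel/text"
    fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
    fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
    access_key = "xxxxxxxxxxxxxxxxx"
    secret_key = "xxxxxxxxxxxxxxxxx"
    # Write plain text files into the target dir
    file_format_type = "text"
  }
}
```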
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/SelectDB-Cloud.md
@@ -79,7 +79,7 @@ The supported formats include CSV and JSON

## Task Example

-### Simple:
+### Simple

> The following example describes writing multiple data types to SelectDBCloud, and users need to create corresponding tables downstream

2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Slack.md
@@ -36,7 +36,7 @@ All data types are mapped to string.

## Task Example

-### Simple:
+### Simple

```hocon
sink {
```
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Snowflake.md
@@ -77,7 +77,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
>
## Task Example

-### simple:
+### simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your Snowflake database. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/SqlServer.md
@@ -96,7 +96,7 @@ semantics (using XA transaction guarantee).

## Task Example

-### simple:
+### simple

> This is an example that reads SQL Server data and inserts it directly into another table

2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/StarRocks.md
@@ -165,7 +165,7 @@ The supported formats include CSV and JSON

## Task Example

-### Simple:
+### Simple

> The following example describes writing multiple data types to StarRocks, and users need to create corresponding tables downstream

4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/Vertica.md
@@ -96,7 +96,7 @@ semantics (using XA transaction guarantee).

## Task Example

-### Simple:
+### Simple

> This example defines a SeaTunnel synchronization task that automatically generates data through FakeSource and sends it to JDBC Sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your Vertica. If you have not yet installed and deployed SeaTunnel, follow the instructions in [Install SeaTunnel](../../start-v2/locally/deployment.md), then follow [Quick Start With SeaTunnel Engine](../../start-v2/locally/quick-start-seatunnel-engine.md) to run this job.

@@ -161,7 +161,7 @@ sink {
}
```

-### Exactly-once :
+### Exactly-once

> For scenarios that require accurate writes, we guarantee exactly-once semantics

2 changes: 1 addition & 1 deletion docs/en/connector-v2/source-common-options.md
@@ -23,7 +23,7 @@ When the job configuration `plugin_output` you must set the `plugin_input` param

## Task Example

-### Simple:
+### Simple

> This registers a stream or batch data source and returns the table name `fake_table` at registration

6 changes: 3 additions & 3 deletions docs/en/connector-v2/source/Cloudberry.md
@@ -67,7 +67,7 @@ Cloudberry supports parallel reading following the same rules as PostgreSQL conn

## Task Example

-### Simple:
+### Simple

```hocon
env {
@@ -90,7 +90,7 @@ sink {
}
```

-### Parallel reading with table_path:
+### Parallel reading with table_path

```hocon
env {
@@ -114,7 +114,7 @@ sink {
}
```

-### Multiple table read:
+### Multiple table read

```hocon
env {
```
6 changes: 3 additions & 3 deletions docs/en/connector-v2/source/DB2.md
@@ -88,7 +88,7 @@ Read external data source data through JDBC.

## Task Example

-### Simple:
+### Simple

> This example queries 16 rows of data from the type_bin table in your test database with single parallelism, and queries all of its fields. You can also specify which fields to query for final output to the console.

@@ -119,7 +119,7 @@ sink {
}
```

-### Parallel:
+### Parallel

> Read your query table in parallel with the shard field and shard data you configured. You can do this if you want to read the whole table.

@@ -141,7 +141,7 @@
}
```

-### Parallel Boundary:
+### Parallel Boundary

> It is more efficient to specify the data within the upper and lower bounds of the query, i.e., to read your data source according to the upper and lower boundaries you configured.

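A sketch of the bounded parallel read those notes describe, using the JDBC source's partition options; connection details and bounds are illustrative.

```hocon
source {
  Jdbc {
    url = "jdbc:db2://127.0.0.1:50000/testdb"
    driver = "com.ibm.db2.jcc.DB2Driver"
    user = "root"
    password = "123456"
    query = "select * from test.type_bin"
    # Shard the read on a numeric column, within explicit bounds
    partition_column = "id"
    partition_lower_bound = 1
    partition_upper_bound = 500
    partition_num = 10
  }
}
```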