import ChangeLog from '../changelog/connector-jdbc.md';
# JDBC

> JDBC Sink Connector

## Description

Write data through JDBC. Supports batch mode and streaming mode, supports concurrent writing, and supports exactly-once semantics (guaranteed by XA transactions).
## Using Dependency

- You need to ensure that the JDBC driver jar package has been placed in the directory `${SEATUNNEL_HOME}/plugins/` (for the Spark/Flink engine).
- You need to ensure that the JDBC driver jar package has been placed in the directory `${SEATUNNEL_HOME}/lib/` (for the SeaTunnel Zeta engine).
## Key Features

XA transactions are used to ensure exactly-once semantics, so exactly-once is only supported for databases that support XA transactions. You can set `is_exactly_once=true` to enable it.
## Options

| Name | Type | Required | Default |
|------|------|----------|---------|
| url | String | Yes | - |
| driver | String | Yes | - |
| user | String | No | - |
| password | String | No | - |
| query | String | No | - |
| compatible_mode | String | No | - |
| database | String | No | - |
| table | String | No | - |
| primary_keys | Array | No | - |
| support_upsert_by_query_primary_key_exist | Boolean | No | false |
| connection_check_timeout_sec | Int | No | 30 |
| max_retries | Int | No | 0 |
| batch_size | Int | No | 1000 |
| is_exactly_once | Boolean | No | false |
| generate_sink_sql | Boolean | No | false |
| xa_data_source_class_name | String | No | - |
| max_commit_attempts | Int | No | 3 |
| transaction_timeout_sec | Int | No | -1 |
| auto_commit | Boolean | No | true |
| field_ide | String | No | - |
| properties | Map | No | - |
| common-options | - | No | - |
| schema_save_mode | Enum | No | CREATE_SCHEMA_WHEN_NOT_EXIST |
| data_save_mode | Enum | No | APPEND_DATA |
| custom_sql | String | No | - |
| enable_upsert | Boolean | No | true |
| create_index | Boolean | No | true |
| use_copy_statement | Boolean | No | false |
| write_mode | Enum | No | sql |
| temp_table_name | String | No | - |
| temp_column_batch_code | String | No | - |
| temp_column_row_kind | String | No | - |
### driver [string]

The JDBC class name used to connect to the remote data source. For example, for MySQL the value is `com.mysql.cj.jdbc.Driver`.
### user [string]

The user name of the database connection.
### password [string]

The password of the database connection.
### url [string]

The URL of the JDBC connection. For example: `jdbc:postgresql://localhost/test`.
### query [string]

Use this SQL to write upstream input data to the database, e.g. `INSERT ...`.
### compatible_mode [string]

The compatible mode of the database, required when the database supports multiple compatible modes. For example, when using the OceanBase database, you need to set it to `mysql` or `oracle`; when using StarRocks, you need to set it to `starrocks`.

For PostgreSQL 9.5 or below, set it to `postgresLow` to support CDC.
### database [string]

Use this `database` and the `table-name` to auto-generate SQL and write the received upstream input data to the database.

This option is mutually exclusive with `query` and has a higher priority.
### table [string]

Use the `database` and this `table-name` to auto-generate SQL and write the received upstream input data to the database.

This option is mutually exclusive with `query` and has a higher priority.
The table parameter can be set to the name of a table that does not yet exist, which will eventually be used as the name of the created table, and it supports variables (`${table_name}`, `${schema_name}`). Replacement rules: `${schema_name}` will be replaced with the SCHEMA name passed to the target side, and `${table_name}` will be replaced with the name of the table passed to the target side.

MySQL sink, for example:

- test_${schema_name}_${table_name}_test
- sink_sinktable
- ss_${table_name}

PostgreSQL (Oracle, SQL Server, ...) sink, for example:

- ${schema_name}.${table_name}_test
- dbo.tt_${table_name}_sink
- public.sink_table

Tip: If the target database has the concept of SCHEMA, the table parameter must be written as `xxx.xxx`.
### primary_keys [array]

This option is used to support operations such as `insert`, `delete`, and `update` when SQL is automatically generated.
### support_upsert_by_query_primary_key_exist [boolean]

Choose between INSERT SQL and UPDATE SQL to process update events (INSERT, UPDATE_AFTER) based on whether the query primary key exists. This configuration is only used when the database does not support upsert syntax. Note: this method has low performance.
### connection_check_timeout_sec [int]

The time in seconds to wait for the database operation used to validate the connection to complete.
### max_retries [int]

The number of retries to submit a failed batch (executeBatch).
### batch_size [int]

For batch writing, when the number of buffered records reaches `batch_size` or the time reaches `checkpoint.interval`, the data will be flushed into the database.
### is_exactly_once [boolean]

Whether to enable exactly-once semantics, which will use XA transactions. If enabled, you need to set `xa_data_source_class_name`.
### generate_sink_sql [boolean]

Generate SQL statements based on the database table you want to write to.
### xa_data_source_class_name [string]

The XA data source class name of the database driver. For example, for MySQL it is `com.mysql.cj.jdbc.MysqlXADataSource`; please refer to the appendix for other data sources.
### max_commit_attempts [int]

The number of retries for transaction commit failures.
### transaction_timeout_sec [int]

The timeout after a transaction is opened; the default is -1 (never time out). Note that setting a timeout may affect exactly-once semantics.
### auto_commit [boolean]

Automatic transaction commit is enabled by default.
The field "field_ide" is used to identify whether the field needs to be converted to uppercase or lowercase when synchronizing from the source to the sink. "ORIGINAL" indicates no conversion is needed, "UPPERCASE" indicates conversion to uppercase, and "LOWERCASE" indicates conversion to lowercase.
### properties [map]

Additional connection configuration parameters. When `properties` and the URL contain the same parameters, the priority is determined by the specific implementation of the driver. For example, in MySQL, `properties` take precedence over the URL.
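For example, a minimal sketch passing extra driver parameters through `properties`; the `useSSL` and `rewriteBatchedStatements` keys are MySQL driver parameters chosen here for illustration:

```hocon
jdbc {
    url = "jdbc:mysql://localhost:3306/test"
    driver = "com.mysql.cj.jdbc.Driver"
    user = "root"
    password = "123456"
    query = "insert into test_table(name,age) values(?,?)"
    # Driver-level connection parameters; in MySQL these take
    # precedence over the same parameters in the URL.
    properties {
        useSSL = "false"
        rewriteBatchedStatements = "true"
    }
}
```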
### common options

Sink plugin common parameters, please refer to Sink Common Options for details.
### schema_save_mode [enum]

Before the synchronization task starts, choose how to handle an existing table structure on the target side.

Option introduction:

- `RECREATE_SCHEMA`: Create the table when it does not exist; drop and recreate it when it exists.
- `CREATE_SCHEMA_WHEN_NOT_EXIST`: Create the table when it does not exist; skip when it exists.
- `ERROR_WHEN_SCHEMA_NOT_EXIST`: Report an error when the table does not exist.
- `IGNORE`: Ignore the handling of the table.
### data_save_mode [enum]

Before the synchronization task starts, choose how to handle existing data on the target side.

Option introduction:

- `DROP_DATA`: Preserve the database structure and delete the data.
- `APPEND_DATA`: Preserve the database structure and preserve the data.
- `CUSTOM_PROCESSING`: User-defined processing.
- `ERROR_WHEN_DATA_EXISTS`: Report an error when data exists.
### custom_sql [string]

When `data_save_mode` is set to `CUSTOM_PROCESSING`, you should fill in the `custom_sql` parameter with an executable SQL statement. The SQL will be executed before the synchronization task starts, as shown in the sketch below.
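A minimal sketch of `CUSTOM_PROCESSING` with `custom_sql`; the cleanup SQL, table, and connection values are illustrative:

```hocon
jdbc {
    url = "jdbc:mysql://localhost:3306/test"
    driver = "com.mysql.cj.jdbc.Driver"
    user = "root"
    password = "123456"
    generate_sink_sql = true
    database = "sink_database"
    table = "sink_table"
    data_save_mode = "CUSTOM_PROCESSING"
    # Executed once before the synchronization task starts
    custom_sql = "delete from sink_table where create_time < '2024-01-01'"
}
```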
### enable_upsert [boolean]

Enable upsert based on whether the primary keys exist. If the task has no duplicate-key data, setting this parameter to `false` can speed up data import.
### create_index [boolean]

Whether to create indexes (including the primary key and any other indexes) when auto-creating the table. You can use this option to improve the performance of JDBC writes when migrating large tables.

Notice: this sacrifices read performance, so you will need to manually create indexes after the table migration to restore read performance. See the sketch after this note.
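A minimal sketch for a large-table migration that skips index creation; the connection and table values are illustrative:

```hocon
jdbc {
    url = "jdbc:mysql://localhost:3306/test"
    driver = "com.mysql.cj.jdbc.Driver"
    user = "root"
    password = "123456"
    generate_sink_sql = true
    database = "sink_database"
    table = "sink_table"
    schema_save_mode = "CREATE_SCHEMA_WHEN_NOT_EXIST"
    # Speed up the bulk load; create indexes manually afterwards
    create_index = false
}
```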
### write_mode [enum]

Five write modes are supported: SQL, COPY, COPY_SQL, MERGE, COPY_MERGE.

- SQL (default): The traditional SQL mode using JDBC, supporting both full and incremental writes.
- COPY: Import data using the COPY command (requires database support, such as PostgreSQL); only supports full writes.
- COPY_SQL: Import data using the COPY command (requires database support), and dynamically switch to SQL mode for writing if there is incremental data.
- MERGE: Import into a temporary table using the COPY command (requires database support), and then MERGE into the target table (requires database support), supporting both full and incremental writes.
- COPY_MERGE: Use the COPY command to import full data into the target table (requires database support); if there is incremental data, dynamically switch to using the COPY command to import into a temporary table, and then MERGE into the target table (requires database support), supporting both full and incremental writes.

NOTICE: when using the MERGE/COPY_MERGE write mode, a temporary table with the same structure as the target table is created automatically.
### temp_table_name [string]

The temporary table name used in the MERGE/COPY_MERGE write mode. If not specified, the system will generate one from the origin table name with the suffix `_tmp`.

### temp_column_batch_code [string]

The temporary column used to batch write data in the MERGE/COPY_MERGE write mode. If not specified, the system will default to the `__st_batch_code` column.

### temp_column_row_kind [string]

The temporary column used to identify the type of data in the MERGE/COPY_MERGE write mode. If not specified, the system will default to the `__st_row_kind` column. A combined sketch of these options is shown below.
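A minimal sketch of a PostgreSQL sink using `COPY_MERGE` together with the temporary-table options; all connection and table values are illustrative:

```hocon
jdbc {
    url = "jdbc:postgresql://localhost:5432/test"
    driver = "org.postgresql.Driver"
    user = "root"
    password = "123456"
    generate_sink_sql = true
    database = "sink_database"
    table = "public.sink_table"
    primary_keys = ["id"]
    # COPY full data into the target table; for incremental data,
    # COPY into the temporary table and then MERGE into the target
    write_mode = "COPY_MERGE"
    temp_table_name = "public.sink_table_tmp"
    temp_column_batch_code = "__st_batch_code"
    temp_column_row_kind = "__st_row_kind"
}
```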
## Tips

When `is_exactly_once = "true"`, XA transactions are used. This requires database support, and some databases require some setup:

1. PostgreSQL needs `max_prepared_transactions > 1`, e.g. `ALTER SYSTEM set max_prepared_transactions to 10`.
2. MySQL needs version >= 8.0.29, and non-root users need to be granted `XA_RECOVER_ADMIN` permissions, e.g. `grant XA_RECOVER_ADMIN on test_db.* to 'user1'@'%'`.
3. For MySQL, you can try adding the `rewriteBatchedStatements=true` parameter to the URL for better performance, as shown below.
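For example, a sketch of enabling batched statement rewriting through the URL; the connection values are illustrative:

```hocon
jdbc {
    # rewriteBatchedStatements lets the MySQL driver combine
    # batched inserts into multi-row statements
    url = "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true"
    driver = "com.mysql.cj.jdbc.Driver"
    user = "root"
    password = "123456"
    query = "insert into test_table(name,age) values(?,?)"
}
```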
## Appendix

Here are some reference values for the parameters above.

| Datasource | Driver | URL | xa_data_source_class_name | Maven |
|------------|--------|-----|---------------------------|-------|
| MySQL | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test | com.mysql.cj.jdbc.MysqlXADataSource | https://mvnrepository.com/artifact/mysql/mysql-connector-java |
| PostgreSQL | org.postgresql.Driver | jdbc:postgresql://localhost:5432/postgres | org.postgresql.xa.PGXADataSource | https://mvnrepository.com/artifact/org.postgresql/postgresql |
| DM | dm.jdbc.driver.DmDriver | jdbc:dm://localhost:5236 | dm.jdbc.driver.DmdbXADataSource | https://mvnrepository.com/artifact/com.dameng/DmJdbcDriver18 |
| Phoenix | org.apache.phoenix.queryserver.client.Driver | jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF | / | https://mvnrepository.com/artifact/com.aliyun.phoenix/ali-phoenix-shaded-thin-client |
| SQL Server | com.microsoft.sqlserver.jdbc.SQLServerDriver | jdbc:sqlserver://localhost:1433 | com.microsoft.sqlserver.jdbc.SQLServerXADataSource | https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc |
| Oracle | oracle.jdbc.OracleDriver | jdbc:oracle:thin:@localhost:1521/xepdb1 | oracle.jdbc.xa.OracleXADataSource | https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8 |
| SQLite | org.sqlite.JDBC | jdbc:sqlite:test.db | / | https://mvnrepository.com/artifact/org.xerial/sqlite-jdbc |
| GBase8a | com.gbase.jdbc.Driver | jdbc:gbase://e2e_gbase8aDb:5258/test | / | https://cdn.gbase.cn/products/30/p5CiVwXBKQYIUGN8ecHvk/gbase-connector-java-9.5.0.7-build1-bin.jar |
| StarRocks | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test | / | https://mvnrepository.com/artifact/mysql/mysql-connector-java |
| DB2 | com.ibm.db2.jcc.DB2Driver | jdbc:db2://localhost:50000/testdb | com.ibm.db2.jcc.DB2XADataSource | https://mvnrepository.com/artifact/com.ibm.db2.jcc/db2jcc/db2jcc4 |
| SAP HANA | com.sap.db.jdbc.Driver | jdbc:sap://localhost:39015 | / | https://mvnrepository.com/artifact/com.sap.cloud.db.jdbc/ngdbc |
| Doris | com.mysql.cj.jdbc.Driver | jdbc:mysql://localhost:3306/test | / | https://mvnrepository.com/artifact/mysql/mysql-connector-java |
| Teradata | com.teradata.jdbc.TeraDriver | jdbc:teradata://localhost/DBS_PORT=1025,DATABASE=test | / | https://mvnrepository.com/artifact/com.teradata.jdbc/terajdbc |
| Redshift | com.amazon.redshift.jdbc42.Driver | jdbc:redshift://localhost:5439/testdb | com.amazon.redshift.xa.RedshiftXADataSource | https://mvnrepository.com/artifact/com.amazon.redshift/redshift-jdbc42 |
| Snowflake | net.snowflake.client.jdbc.SnowflakeDriver | jdbc:snowflake://<account_name>.snowflakecomputing.com | / | https://mvnrepository.com/artifact/net.snowflake/snowflake-jdbc |
| Vertica | com.vertica.jdbc.Driver | jdbc:vertica://localhost:5433 | / | https://repo1.maven.org/maven2/com/vertica/jdbc/vertica-jdbc/12.0.3-0/vertica-jdbc-12.0.3-0.jar |
| Kingbase | com.kingbase8.Driver | jdbc:kingbase8://localhost:54321/db_test | / | https://repo1.maven.org/maven2/cn/com/kingbase/kingbase8/8.6.0/kingbase8-8.6.0.jar |
| OceanBase | com.oceanbase.jdbc.Driver | jdbc:oceanbase://localhost:2881 | / | https://repo1.maven.org/maven2/com/oceanbase/oceanbase-client/2.4.12/oceanbase-client-2.4.12.jar |
| Xugu | com.xugu.cloudjdbc.Driver | jdbc:xugu://localhost:5138 | / | https://repo1.maven.org/maven2/com/xugudb/xugu-jdbc/12.2.0/xugu-jdbc-12.2.0.jar |
| InterSystems IRIS | com.intersystems.jdbc.IRISDriver | jdbc:IRIS://localhost:1972/%SYS | / | https://raw.githubusercontent.com/intersystems-community/iris-driver-distribution/main/JDBC/JDK18/intersystems-jdbc-3.8.4.jar |
| openGauss | org.opengauss.Driver | jdbc:opengauss://localhost:5432/postgres | / | https://repo1.maven.org/maven2/org/opengauss/opengauss-jdbc/5.1.0-og/opengauss-jdbc-5.1.0-og.jar |
| Highgo | com.highgo.jdbc.Driver | jdbc:highgo://localhost:5866/highgo | / | https://repo1.maven.org/maven2/com/highgo/HgdbJdbc/6.2.3/HgdbJdbc-6.2.3.jar |
## Task Example

### Simple

```hocon
jdbc {
    url = "jdbc:mysql://localhost:3306/test"
    driver = "com.mysql.cj.jdbc.Driver"
    user = "root"
    password = "123456"
    query = "insert into test_table(name,age) values(?,?)"
}
```
### Exactly-once

Turn on exactly-once semantics by setting `is_exactly_once`.

```hocon
jdbc {
    url = "jdbc:mysql://localhost:3306/test"
    driver = "com.mysql.cj.jdbc.Driver"
    max_retries = 0
    user = "root"
    password = "123456"
    query = "insert into test_table(name,age) values(?,?)"
    is_exactly_once = "true"
    xa_data_source_class_name = "com.mysql.cj.jdbc.MysqlXADataSource"
}
```
### CDC (Change Data Capture) Event

An example of the JDBC sink receiving CDC events:

```hocon
sink {
    jdbc {
        url = "jdbc:mysql://localhost:3306"
        driver = "com.mysql.cj.jdbc.Driver"
        user = "root"
        password = "123456"
        database = "sink_database"
        table = "sink_table"
        primary_keys = ["key1", "key2", ...]
    }
}
```
### Add saveMode function

To create tables when they do not already exist, set `schema_save_mode` to `CREATE_SCHEMA_WHEN_NOT_EXIST`.

```hocon
sink {
    jdbc {
        url = "jdbc:mysql://localhost:3306"
        driver = "com.mysql.cj.jdbc.Driver"
        user = "root"
        password = "123456"
        generate_sink_sql = "true"
        database = "sink_database"
        table = "sink_table"
        primary_keys = ["key1", "key2", ...]
        schema_save_mode = "CREATE_SCHEMA_WHEN_NOT_EXIST"
        data_save_mode = "APPEND_DATA"
    }
}
```
### PostgreSQL 9.5 or below with CDC (Change Data Capture) events

For PostgreSQL versions 9.5 and below, set `compatible_mode` to `postgresLow` to enable support for PostgreSQL Change Data Capture (CDC) operations.

```hocon
sink {
    jdbc {
        url = "jdbc:postgresql://localhost:5432"
        driver = "org.postgresql.Driver"
        user = "root"
        password = "123456"
        compatible_mode = "postgresLow"
        database = "sink_database"
        table = "sink_table"
        support_upsert_by_query_primary_key_exist = true
        generate_sink_sql = true
        primary_keys = ["key1", "key2", ...]
    }
}
```
### Multiple Table

#### Example 1

```hocon
env {
    parallelism = 1
    job.mode = "STREAMING"
    checkpoint.interval = 5000
}

source {
    Mysql-CDC {
        base-url = "jdbc:mysql://127.0.0.1:3306/seatunnel"
        username = "root"
        password = "******"
        table-names = ["seatunnel.role", "seatunnel.user", "galileo.Bucket"]
    }
}

transform {
}

sink {
    jdbc {
        url = "jdbc:mysql://localhost:3306"
        driver = "com.mysql.cj.jdbc.Driver"
        user = "root"
        password = "123456"
        generate_sink_sql = true
        database = "${database_name}_test"
        table = "${table_name}_test"
        primary_keys = ["${primary_key}"]
    }
}
```
#### Example 2

```hocon
env {
    parallelism = 1
    job.mode = "BATCH"
}

source {
    Jdbc {
        driver = oracle.jdbc.driver.OracleDriver
        url = "jdbc:oracle:thin:@localhost:1521/XE"
        user = testUser
        password = testPassword
        table_list = [
            {
                table_path = "TESTSCHEMA.TABLE_1"
            },
            {
                table_path = "TESTSCHEMA.TABLE_2"
            }
        ]
    }
}

transform {
}

sink {
    jdbc {
        url = "jdbc:mysql://localhost:3306"
        driver = "com.mysql.cj.jdbc.Driver"
        user = "root"
        password = "123456"
        generate_sink_sql = true
        database = "${schema_name}_test"
        table = "${table_name}_test"
        primary_keys = ["${primary_key}"]
    }
}
```