website/docs/engine-flink/datastream.mdx (2 additions, 2 deletions)
@@ -7,7 +7,7 @@ sidebar_position: 6
 ## Overview
 The Fluss DataStream Connector for Apache Flink provides a Flink DataStream source implementation for reading data from Fluss tables and a Flink DataStream sink implementation for writing data to Fluss tables. It allows you to seamlessly integrate Fluss tables with Flink's DataStream API, enabling you to process data from Fluss in your Flink applications.

-Key features of the Fluss Datastream Connector include:
+Key features of the Fluss DataStream Connector include:
 * Reading from both primary key tables and log tables
 * Support for projection pushdown to select specific fields
@@ -43,7 +43,7 @@ For Flink's DataStream API, you can see [DataStream API](docs/engine-flink/datas
 ## Preparation when using Flink SQL Client
 -**Download Flink**

-Flink runs on all UNIX-like environments, i.e. Linux, Mac OS X, and Cygwin (for Windows).
+Flink runs on all UNIX-like environments, i.e., Linux, Mac OS X, and Cygwin (for Windows).
 If you haven’t downloaded Flink, you can download [the binary release](https://flink.apache.org/downloads.html) of Flink, then extract the archive with the following command.
 ```shell
 tar -xzf flink-1.20.1-bin-scala_2.12.tgz
@@ -70,7 +70,7 @@ You should be able to navigate to the web UI at [localhost:8081](http://localhost:8081)
 ```shell
 ps aux | grep flink
 ```
--**Start a sql client**
+-**Start a SQL Client**

 To quickly stop the cluster and all running components, you can use the provided script:
 ```shell
@@ -92,7 +92,7 @@ CREATE CATALOG fluss_catalog WITH (
 you should start the Fluss server first. See [Deploying Fluss](install-deploy/overview.md#how-to-deploy-fluss)
 for how to build a Fluss cluster.
 Here, it is assumed that there is a Fluss cluster running on your local machine and the CoordinatorServer port is 9123.
-2. The`bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set with one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
+2. The `bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set with one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
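For context, the statement this hunk's header points at is the Fluss catalog registration that the `bootstrap.servers` note refers to. A minimal sketch, assuming a local CoordinatorServer on port 9123 as in the surrounding text (the exact property set may vary by Fluss version):

```sql title="Flink SQL"
-- Sketch only: 'type' = 'fluss' is the usual Flink catalog factory identifier for
-- Fluss, and 'localhost:9123' mirrors the CoordinatorServer port assumed above.
-- 'bootstrap.servers' may instead list up to three comma-separated
-- CoordinatorServer/TabletServer addresses.
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'localhost:9123'
);
USE CATALOG fluss_catalog;
```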
website/docs/engine-flink/reads.md (2 additions, 2 deletions)
@@ -7,12 +7,12 @@ sidebar_position: 4
 # Flink Reads
 Fluss supports streaming and batch read with [Apache Flink](https://flink.apache.org/)'s SQL & Table API. Execute the following SQL command to switch execution mode from streaming to batch, and vice versa:
 ```sql title="Flink SQL"
--- Execute the flink job in streaming mode for current session context
+-- Execute the Flink job in streaming mode for current session context
 SET 'execution.runtime-mode' = 'streaming';
 ```

 ```sql title="Flink SQL"
--- Execute the flink job in batch mode for current session context
+-- Execute the Flink job in batch mode for current session context
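The two changed lines only touch the comments. For readers skimming the diff, a short usage sketch of how each mode pairs with a read, using a hypothetical `log_table`, looks like this:

```sql title="Flink SQL"
-- Sketch only: 'log_table' is a hypothetical table used for illustration.
-- A streaming read keeps running and emits new records as they arrive.
SET 'execution.runtime-mode' = 'streaming';
SELECT * FROM log_table;

-- A batch read scans the table once and then terminates.
SET 'execution.runtime-mode' = 'batch';
SELECT * FROM log_table;
```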
website/docs/engine-flink/writes.md (5 additions, 5 deletions)
@@ -15,7 +15,7 @@ Fluss primary key tables can accept all types of messages (`INSERT`, `UPDATE_BEF
 They support both streaming and batch modes and are compatible with primary-key tables (for upserting data) as well as log tables (for appending data).

 ### Appending Data to the Log Table
-#### Create a Log table.
+#### Create a Log Table.
 ```sql title="Flink SQL"
 CREATE TABLE log_table (
 order_id BIGINT,
@@ -25,7 +25,7 @@ CREATE TABLE log_table (
 );
 ```

-#### Insert data into the Log table.
+#### Insert Data into the Log Table.
 ```sql title="Flink SQL"
 CREATE TEMPORARY TABLE source (
 order_id BIGINT,
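The hunk is cut off inside the `source` definition. The pattern this subsection demonstrates is a temporary source table feeding an `INSERT INTO log_table`; the column list and connector below are illustrative guesses, not the file's actual content:

```sql title="Flink SQL"
-- Sketch only: columns beyond order_id and the datagen connector are assumed here
-- purely to make the example self-contained.
CREATE TEMPORARY TABLE source (
  order_id BIGINT,
  item_id BIGINT,
  amount INT
) WITH ('connector' = 'datagen');

INSERT INTO log_table SELECT * FROM source;
```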
@@ -91,15 +91,15 @@ SELECT shop_id, user_id, num_orders FROM source;

 Fluss supports deleting data for primary-key tables in batch mode via `DELETE FROM` statement. Currently, only single data deletions based on the primary key are supported.

-* the primary key table
+* the Primary Key Table
 ```sql title="Flink SQL"
 -- DELETE statement requires batch mode
 SET 'execution.runtime-mode' = 'batch';
 ```

 ```sql title="Flink SQL"
 -- The condition must include all primary key equality conditions.
-DELETE FROM pk_table WHERE shop_id = 10000 and user_id = 123456;
+DELETE FROM pk_table WHERE shop_id = 10000 AND user_id = 123456;
 ```

 ## UPDATE
@@ -112,5 +112,5 @@ SET execution.runtime-mode = batch;

 ```sql title="Flink SQL"
 -- The condition must include all primary key equality conditions.
-UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000 and user_id = 123456;
+UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000 AND user_id = 123456;
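Both changed statements illustrate the rule stated in the comments: `DELETE` and `UPDATE` on a primary-key table run in batch mode and must pin down the row through every primary key column. A small sketch of what is and is not accepted, assuming `pk_table` declares `PRIMARY KEY (shop_id, user_id)` as the conditions imply:

```sql title="Flink SQL"
-- Sketch only: assumes pk_table has PRIMARY KEY (shop_id, user_id) NOT ENFORCED,
-- as implied by the conditions used above.
SET 'execution.runtime-mode' = 'batch';

-- Accepted: every primary key column appears as an equality condition.
UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000 AND user_id = 123456;

-- Not accepted: user_id is missing, so the condition does not identify a single row.
-- UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000;
```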