* change PrimaryKey table to Primary Key Table across pages
* syntactic fixes
* make some more minor fixes
* fix broken link
* address yuxia's comments
-| type | required | (none) | Catalog type, must to be 'fluss' here. |
+| type | required | (none) | Catalog type, must be 'fluss' here. |
 | bootstrap.servers | required | (none) | Comma separated list of Fluss servers. |
 | default-database | optional | fluss | The default database to use when switching to this catalog. |
 | client.security.protocol | optional | PLAINTEXT | The security protocol used to communicate with brokers. Currently, only `PLAINTEXT` and `SASL` are supported, the configuration value is case insensitive. |
-| `client.security.{protocol}.*` | optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., client.security.sasl.jaas.config. More Details in [authentication](../security/authentication.md) | (none) |
+| `client.security.{protocol}.*` | optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., `client.security.sasl.jaas.config`. More details in [authentication](../security/authentication.md) |
-The following introduced statements assuming the current catalog is switched to the Fluss catalog using `USE CATALOG <catalog_name>` statement.
+The following statements assume that the current catalog has been switched to the Fluss catalog using the `USE CATALOG <catalog_name>` statement.
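(For context, a catalog defined with the options above might look like the following. This is a hedged sketch: the catalog name and server address are placeholders, not values taken from this page.)

```sql title="Flink SQL"
-- Placeholder catalog name and address; 'type' and 'bootstrap.servers'
-- are the required options from the table above.
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'fluss-server-1:9123'
);
USE CATALOG fluss_catalog;
```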
## Create Database
-By default, FlussCatalog will use the `fluss` database in Flink. Using the following example to create a separate database in order to avoid creating tables under the default `fluss` database:
+By default, FlussCatalog will use the `fluss` database in Flink. You can use the following example to create a separate database to avoid creating tables under the default `fluss` database:

 ```sql title="Flink SQL"
 CREATE DATABASE my_db;
@@ -77,9 +77,9 @@ DROP DATABASE my_db;
## Create Table
-### PrimaryKey Table
+### Primary Key Table

-The following SQL statement will create a [PrimaryKey Table](table-design/table-types/pk-table/index.md) with a primary key consisting of shop_id and user_id.
+The following SQL statement will create a [Primary Key Table](table-design/table-types/pk-table/index.md) with a primary key consisting of `shop_id` and `user_id`.

 ```sql title="Flink SQL"
 CREATE TABLE my_pk_table (
   shop_id BIGINT,
@@ -107,14 +107,14 @@ CREATE TABLE my_log_table (
 );
 ```

-### Partitioned (PrimaryKey/Log) Table
+### Partitioned (Primary Key/Log) Table

 :::note
 1. Currently, Fluss only supports partitioned field with `STRING` type
-2. For the Partitioned PrimaryKey Table, the partitioned field (`dt` in this case) must be a subset of the primary key (`dt, shop_id, user_id` in this case)
+2. For the Partitioned Primary Key Table, the partitioned field (`dt` in this case) must be a subset of the primary key (`dt, shop_id, user_id` in this case)
 :::

-The following SQL statement creates a Partitioned PrimaryKey Table in Fluss.
+The following SQL statement creates a Partitioned Primary Key Table in Fluss.

 ```sql title="Flink SQL"
 CREATE TABLE my_part_pk_table (
@@ -147,7 +147,7 @@ But you can still use the [Add Partition](engine-flink/ddl.md#add-partition) sta
 #### Multi-Fields Partitioned Table

-Fluss also support [Multi-Fields Partitioning](table-design/data-distribution/partitioning.md#multi-field-partitioned-tables), the following SQL statement creates a Multi-Fields Partitioned Log Table in Fluss:
+Fluss also supports [Multi-Fields Partitioning](table-design/data-distribution/partitioning.md#multi-field-partitioned-tables); the following SQL statement creates a Multi-Fields Partitioned Log Table in Fluss:
-Fluss also support creat Auto Partitioned (PrimaryKey/Log) Table. The following SQL statement creates an Auto Partitioned PrimaryKey Table in Fluss.
+Fluss also supports creating Auto Partitioned (Primary Key/Log) Tables. The following SQL statement creates an Auto Partitioned Primary Key Table in Fluss.
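(The statement itself is collapsed in this diff. A minimal sketch of what it might look like — the table name and the auto-partition option keys below are assumptions, not values taken from this page:)

```sql title="Flink SQL"
-- Sketch only: the 'table.auto-partition.*' option keys are assumed;
-- check the Auto Partitioning docs linked below for the exact names.
CREATE TABLE my_auto_part_pk_table (
  dt STRING,
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT,
  PRIMARY KEY (dt, shop_id, user_id) NOT ENFORCED
) PARTITIONED BY (dt) WITH (
  'table.auto-partition.enabled' = 'true',
  'table.auto-partition.time-unit' = 'day'
);
```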
-For more details about Auto Partitioned (PrimaryKey/Log) Table, refer to [Auto Partitioning](table-design/data-distribution/partitioning.md#auto-partitioning).
+For more details about Auto Partitioned (Primary Key/Log) Tables, refer to [Auto Partitioning](table-design/data-distribution/partitioning.md#auto-partitioning).
### Options
@@ -240,8 +240,8 @@ This will entirely remove all the data of the table in the Fluss cluster.
 ## Add Partition

-Fluss support manually add partitions to an exists partitioned table by Fluss Catalog. If the specified partition
-not exists, Fluss will create the partition. If the specified partition already exists, Fluss will ignore the request
+Fluss supports manually adding partitions to an existing partitioned table through the Fluss Catalog. If the specified partition
+does not exist, Fluss will create the partition. If the specified partition already exists, Fluss will ignore the request
 or throw an exception.

 To add partitions, run:
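(The example is collapsed in this diff. A hedged sketch, reusing the partitioned table created earlier on this page; the partition value is a placeholder:)

```sql title="Flink SQL"
-- Sketch only: adds one partition to the partitioned table above.
ALTER TABLE my_part_pk_table ADD PARTITION (dt = '2025-03-05');
```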
@@ -277,8 +277,8 @@ For more details, refer to the [Flink SHOW PARTITIONS](https://nightlies.apache.
 ## Drop Partition

-Fluss also support manually drop partitions from an exists partitioned table by Fluss Catalog. If the specified partition
-not exists, Fluss will ignore the request or throw an exception.
+Fluss also supports manually dropping partitions from an existing partitioned table through the Fluss Catalog. If the specified partition
+does not exist, Fluss will ignore the request or throw an exception.

 To drop partitions, run:
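(The full statement is collapsed below; the hunk header shows it targets `my_multi_fields_part_log_table` with `dt = '2025-03-05'`. A hedged reconstruction — the second partition column name and its value are assumptions:)

```sql title="Flink SQL"
-- Sketch only: 'name' is an assumed second partition column of the
-- multi-field partitioned table.
ALTER TABLE my_multi_fields_part_log_table DROP PARTITION (dt = '2025-03-05', name = 'a');
```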
@@ -291,5 +291,3 @@ ALTER TABLE my_multi_fields_part_log_table DROP PARTITION (dt = '2025-03-05', na
 ```

 For more details, refer to the [Flink ALTER TABLE(DROP)](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/table/sql/alter/#drop) documentation.
website/docs/intro.md (+2 −2)
@@ -28,7 +28,7 @@ Fluss is a streaming storage built for real-time analytics which can serve as th
-It bridges the gap between **streaming data** and the data **Lakehouse** by enabling low-latency, high-throughput data ingestion and processing while seamlessly integrating with popular compute engines like **Apache Flink**, while **Apache Spark**, and **StarRocks** are coming soon.
+It bridges the gap between **streaming data** and the data **Lakehouse** by enabling low-latency, high-throughput data ingestion and processing while seamlessly integrating with popular compute engines like **Apache Flink**, with **Apache Spark** and **StarRocks** coming soon.

 Fluss supports `streaming reads` and `writes` with sub-second latency and stores data in a columnar format, enhancing query performance and reducing storage costs.
 It offers flexible table types, including append-only **Log Tables** and updatable **PrimaryKey Tables**, to accommodate diverse real-time analytics and processing needs.
@@ -46,7 +46,7 @@ The following is a list of (but not limited to) use-cases that Fluss shines ✨:
website/docs/table-design/overview.md (+7 −9)
@@ -34,13 +34,13 @@ Tables are classified into two types based on the presence of a primary key:
 - **Log Tables:**
   - Designed for append-only scenarios.
   - Support only INSERT operations.
-- **PrimaryKey Tables:**
+- **Primary Key Tables:**
   - Used for updating and managing data in business databases.
   - Support INSERT, UPDATE, and DELETE operations based on the defined primary key.

-A Table becomes a [Partitioned Table](table-design/data-distribution/partitioning.md) when a partition column is defined. Data with the same partition value is stored in the same partition. Partition columns can be applied to both Log Tables and PrimaryKey Tables, but with specific considerations:
+A Table becomes a [Partitioned Table](data-distribution/partitioning.md) when a partition column is defined. Data with the same partition value is stored in the same partition. Partition columns can be applied to both Log Tables and Primary Key Tables, but with specific considerations:

 - **For Log Tables**, partitioning is commonly used for log data, typically based on date columns, to facilitate data separation and cleaning.
-- **For PrimaryKey Tables**, the partition column must be a subset of the primary key to ensure uniqueness.
+- **For Primary Key Tables**, the partition column must be a subset of the primary key to ensure uniqueness.

 This design ensures efficient data organization, flexibility in handling different use cases, and adherence to data integrity constraints.
@@ -60,14 +60,12 @@ The number of buckets `N` can be configured per table. A bucket is the smallest
 The data of a bucket consists of a LogTablet and a (optional) KvTablet.

 ### LogTablet
-A **LogTablet** needs to be generated for each bucket of Log and PrimaryKey tables.
-For Log Tables, the LogTablet is both the primary table data and the log data. For PrimaryKey tables, the LogTablet acts
+A **LogTablet** needs to be generated for each bucket of Log and Primary Key Tables.
+For Log Tables, the LogTablet is both the primary table data and the log data. For Primary Key Tables, the LogTablet acts
 as the log data for the primary table data.

 - **Segment:** The smallest unit of log storage in the **LogTablet**. A segment consists of an **.index** file and a **.log** data file.
-- **.index:** An `offset sparse index` that stores the mappings between the physical byte address in the message relative offset -> .log file.
+- **.index:** An `offset sparse index` that maps message relative offsets to their corresponding physical byte addresses in the .log file.
 - **.log:** Compact arrangement of log data.

 ### KvTablet
-Each bucket of the PrimaryKey table needs to generate a KvTablet. Underlying, each KvTablet corresponds to an embedded RocksDB instance. RocksDB is an LSM (log structured merge) engine which helps KvTablet supports high-performance updates and lookup query.
+Each bucket of the Primary Key Table needs to generate a KvTablet. Underlying, each KvTablet corresponds to an embedded RocksDB instance. RocksDB is an LSM (log structured merge) engine which helps KvTablet support high-performance updates and lookup queries.
website/docs/table-design/table-types/log-table.md (+3 −3)
@@ -62,7 +62,7 @@ Log Tables in Fluss allow real-time data consumption, preserving the order of da
 ## Column Pruning

 Column pruning is a technique used to reduce the amount of data that needs to be read from storage by eliminating unnecessary columns from the query.
-Fluss supports column pruning for Log Tables and the changelog of PrimaryKey Tables, which can significantly improve query performance by reducing the amount of data that needs to be read from storage and lowering networking costs.
+Fluss supports column pruning for Log Tables and the changelog of Primary Key Tables, which can significantly improve query performance by reducing the amount of data that needs to be read from storage and lowering networking costs.

 What sets Fluss apart is its ability to apply **column pruning during streaming reads**, a capability that is both unique and industry-leading. This ensures that even in real-time streaming scenarios, only the required columns are processed, minimizing resource usage and maximizing efficiency.
@@ -90,7 +90,7 @@ Additionally, compression is applied to each column independently, preserving th
 When compression is enabled:
 - For **Log Tables**, data is compressed by the writer on the client side, written in a compressed format, and decompressed by the log scanner on the client side.
-- For **PrimaryKey Table changelogs**, compression is performed server-side since the changelog is generated on the server.
+- For **Primary Key Table changelogs**, compression is performed server-side since the changelog is generated on the server.

 Log compression significantly reduces networking and storage costs. Benchmark results demonstrate that using the ZSTD compression with level 3 achieves a compression ratio of approximately **5x** (e.g., reducing 5GB of data to 1GB).
 Furthermore, read/write throughput improves substantially due to reduced networking overhead.
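(The configuration example is collapsed in this diff; the hunk header below shows it sets the codec to `LZ4_FRAME`. A hedged sketch — the option key is an assumption, not taken from this page:)

```sql title="Flink SQL"
-- Sketch only: the compression option key is assumed.
CREATE TABLE my_compressed_log_table (
  order_id BIGINT,
  item_id BIGINT
) WITH ('table.log.arrow.compression.type' = 'LZ4_FRAME');
```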
@@ -133,4 +133,4 @@ In the above example, we set the compression codec to `LZ4_FRAME` and the compre
 :::

 ## Log Tiering
-Log Table supports tiering data to different storage tiers. See more details about [Remote Log](maintenance/tiered-storage/remote-storage.md).
+Log Table supports tiering data to different storage tiers. See more details about [Remote Log](maintenance/tiered-storage/remote-storage.md).
website/docs/table-design/table-types/pk-table/index.md (+17 −17)
@@ -1,5 +1,5 @@
 ---
-title: PrimaryKey Table
+title: Primary Key Table
 sidebar_position: 1
 ---
@@ -21,15 +21,15 @@ sidebar_position: 1
 limitations under the License.
 -->

-# PrimaryKey Table
+# Primary Key Table

 ## Basic Concept

-PrimaryKey Table in Fluss ensure the uniqueness of the specified primary key and supports `INSERT`, `UPDATE`,
+Primary Key Table in Fluss ensures the uniqueness of the specified primary key and supports `INSERT`, `UPDATE`,
 and `DELETE` operations.

-A PrimaryKey Table is created by specifying a `PRIMARY KEY` clause in the `CREATE TABLE` statement. For example, the
-following Flink SQL statement creates a PrimaryKey Table with `shop_id` and `user_id` as the primary key and distributes
+A Primary Key Table is created by specifying a `PRIMARY KEY` clause in the `CREATE TABLE` statement. For example, the
+following Flink SQL statement creates a Primary Key Table with `shop_id` and `user_id` as the primary key and distributes
 the data into 4 buckets:

 ```sql title="Flink SQL"
@@ -49,13 +49,13 @@ In Fluss primary key table, each row of data has a unique primary key.
 If multiple entries with the same primary key are written to the Fluss primary key table, only the last entry will be
 retained.

-For [Partitioned PrimaryKey Table](table-design/data-distribution/partitioning.md), the primary key must contain the
+For [Partitioned Primary Key Table](table-design/data-distribution/partitioning.md), the primary key must contain the
 partition key.

 ## Bucket Assigning

 For primary key tables, Fluss always determines which bucket the data belongs to based on the hash value of the bucket
-key (It must be a subset of the primary keys excluding partition keys of the primary key table) for each record. If the bucket key is not specified, the bucket key will used as the primary key (excluding the partition key).
+key (it must be a subset of the primary key, excluding any partition keys) for each record. If the bucket key is not specified, the primary key (excluding the partition key) is used as the bucket key.
 Data with the same hash value will be distributed to the same bucket.
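(A hedged illustration of specifying an explicit bucket key — the `bucket.num` and `bucket.key` option names are assumptions based on common Fluss usage, not confirmed by this page:)

```sql title="Flink SQL"
-- Sketch only: 'bucket.num' / 'bucket.key' option names are assumed.
CREATE TABLE my_bucketed_pk_table (
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT,
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
) WITH (
  'bucket.num' = '4',
  'bucket.key' = 'shop_id'  -- a subset of the primary key
);
```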
## Partial Update
@@ -94,20 +94,20 @@ follows:
 ## Merge Engines

-The **Merge Engine** in Fluss is a core component designed to efficiently handle and consolidate data updates for PrimaryKey Tables.
+The **Merge Engine** in Fluss is a core component designed to efficiently handle and consolidate data updates for Primary Key Tables.
 It offers users the flexibility to define how incoming data records are merged with existing records sharing the same primary key.
-However, users can specify a different merge engine to customize the merging behavior according to their specific use cases
+However, users can specify a different merge engine to customize the merging behavior according to their specific use cases.
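(A hedged sketch of selecting a merge engine at table-creation time — the `table.merge-engine` option key and the `first_row` value are assumptions, not confirmed by this page:)

```sql title="Flink SQL"
-- Sketch only: option key and value are assumed; such a table would keep
-- the first record per primary key instead of the default last-value merge.
CREATE TABLE my_first_row_table (
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT,
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
) WITH ('table.merge-engine' = 'first_row');
```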
-Fluss will capture the changes when inserting, updating, deleting records on the primary-key table, which is known as
+Fluss will capture the changes when inserting, updating, deleting records on the Primary Key Table, which is known as
 the changelog. Downstream consumers can directly consume the changelog to obtain the changes in the table. For example,
 consider the following primary key table in Fluss:
@@ -121,7 +121,7 @@ CREATE TABLE T
 );
 ```

-If the data written to the primary-key table is
+If the data written to the Primary Key Table is
 sequentially `+I(1, 2.0, 'apple')`, `+I(1, 4.0, 'banana')`, `-D(1, 4.0, 'banana')`, then the following change data will
 be generated. For example, the following Flink SQL statements illustrate this behavior:
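(The statements themselves are collapsed in this diff. A hedged reconstruction — the column names of `T` are assumptions inferred from the sample records, and the change rows in the comments describe typical changelog behavior rather than verified output:)

```sql title="Flink SQL"
-- Sketch only: columns of T assumed to be (k INT, price DOUBLE, name STRING).
INSERT INTO T VALUES (1, 2.0, 'apple');   -- changelog: +I(1, 2.0, 'apple')
INSERT INTO T VALUES (1, 4.0, 'banana');  -- upsert on the same key; typically
                                          -- emits -U(old) then +U(new)
DELETE FROM T WHERE k = 1;                -- changelog: -D(1, 4.0, 'banana')
```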
@@ -164,13 +164,13 @@ For primary key tables, Fluss supports various kinds of querying abilities.
 For a primary key table, the default read method is a full snapshot followed by incremental data. First, the
 snapshot data of the table is consumed, followed by the changelog data of the table.

-It is also possible to only consume the changelog data of the table. For more details, please refer to the [Flink Reads](engine-flink/reads.md)
+It is also possible to only consume the changelog data of the table. For more details, please refer to [Flink Reads](../../../engine-flink/reads.md).

 ### Lookup

-Fluss primary key table can lookup data by the primary keys. If the key exists in Fluss, lookup will return a unique row. it always used in [Flink Lookup Join](engine-flink/lookups.md#lookup).
+Fluss primary key table can look up data by the primary keys. If the key exists in Fluss, a lookup will return a unique row. It is typically used in [Flink Lookup Join](../../../engine-flink/lookups.md#lookup).
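(A hedged sketch of the lookup pattern — the stream table `orders` and its columns are placeholders; `FOR SYSTEM_TIME AS OF` is Flink's standard lookup-join syntax:)

```sql title="Flink SQL"
-- Sketch only: enriches a stream by looking up rows in the
-- primary key table via its full primary key.
SELECT o.order_id, c.num_orders
FROM orders AS o
JOIN my_pk_table FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.shop_id = c.shop_id AND o.user_id = c.user_id;
```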
 ### Prefix Lookup

 Fluss primary key table can also do prefix lookup by the prefix subset primary keys. Unlike lookup, prefix lookup
-will scan data based on the prefix of primary keys and may return multiple rows. It always used in [Flink Prefix Lookup Join](engine-flink/lookups.md#prefix-lookup).
+will scan data based on the prefix of primary keys and may return multiple rows. It is typically used in [Flink Prefix Lookup Join](../../../engine-flink/lookups.md#prefix-lookup).