Commit db2af5c

[Docs] consistency & syntax fixes (#1243)
* change PrimaryKey table to Primary Key Table across pages
* syntactic fixes
* make some more minor fixes
* fix broken link
* address yuxia's comments
1 parent 4eb3d90 commit db2af5c

File tree

8 files changed: +52 -56 lines changed

website/docs/engine-flink/ddl.md

Lines changed: 17 additions & 19 deletions
@@ -41,17 +41,17 @@ The following properties can be set if using the Fluss catalog:

 | Option | Required | Default | Description |
 |--------------------------------|----------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| type | required | (none) | Catalog type, must to be 'fluss' here. |
+| type | required | (none) | Catalog type, must be 'fluss' here. |
 | bootstrap.servers | required | (none) | Comma separated list of Fluss servers. |
 | default-database | optional | fluss | The default database to use when switching to this catalog. |
 | client.security.protocol | optional | PLAINTEXT | The security protocol used to communicate with brokers. Currently, only `PLAINTEXT` and `SASL` are supported, the configuration value is case insensitive. |
-| `client.security.{protocol}.*` | optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., client.security.sasl.jaas.config. More Details in [authentication](../security/authentication.md) | (none) |
+| `client.security.{protocol}.*` | optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., client.security.sasl.jaas.config. More Details in [authentication](../security/authentication.md) |

-The following introduced statements assuming the current catalog is switched to the Fluss catalog using `USE CATALOG <catalog_name>` statement.
+The following statements assume that the current catalog has been switched to the Fluss catalog using the `USE CATALOG <catalog_name>` statement.

 ## Create Database

-By default, FlussCatalog will use the `fluss` database in Flink. Using the following example to create a separate database in order to avoid creating tables under the default `fluss` database:
+By default, FlussCatalog will use the `fluss` database in Flink. You can use the following example to create a separate database to avoid creating tables under the default `fluss` database:

 ```sql title="Flink SQL"
 CREATE DATABASE my_db;
@@ -77,9 +77,9 @@ DROP DATABASE my_db;

 ## Create Table

-### PrimaryKey Table
+### Primary Key Table

-The following SQL statement will create a [PrimaryKey Table](table-design/table-types/pk-table/index.md) with a primary key consisting of shop_id and user_id.
+The following SQL statement will create a [Primary Key Table](table-design/table-types/pk-table/index.md) with a primary key consisting of shop_id and user_id.
 ```sql title="Flink SQL"
 CREATE TABLE my_pk_table (
   shop_id BIGINT,
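For reference, the hunk truncates the statement it introduces. A complete statement of this shape is sketched below; `my_pk_table`, `shop_id`, and `user_id` come from the diff, while the remaining columns and the `bucket.num` option are assumptions. Flink requires `NOT ENFORCED` on primary keys, which is why the clause appears here:

```sql title="Flink SQL"
CREATE TABLE my_pk_table (
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT,   -- assumed column, for illustration only
  total_amount INT, -- assumed column, for illustration only
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
) WITH ('bucket.num' = '4'); -- assumed table option
```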
@@ -107,14 +107,14 @@ CREATE TABLE my_log_table (
 );
 ```

-### Partitioned (PrimaryKey/Log) Table
+### Partitioned (Primary Key/Log) Table

 :::note
 1. Currently, Fluss only supports partitioned field with `STRING` type
-2. For the Partitioned PrimaryKey Table, the partitioned field (`dt` in this case) must be a subset of the primary key (`dt, shop_id, user_id` in this case)
+2. For the Partitioned Primary Key Table, the partitioned field (`dt` in this case) must be a subset of the primary key (`dt, shop_id, user_id` in this case)
 :::

-The following SQL statement creates a Partitioned PrimaryKey Table in Fluss.
+The following SQL statement creates a Partitioned Primary Key Table in Fluss.

 ```sql title="Flink SQL"
 CREATE TABLE my_part_pk_table (
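The statement is again cut off by the hunk. Given the note above, a sketch satisfying the partition-field-in-primary-key constraint could look as follows; `dt` and the primary key `(dt, shop_id, user_id)` are taken from the note, the other column is an assumption:

```sql title="Flink SQL"
CREATE TABLE my_part_pk_table (
  dt STRING,
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT, -- assumed column, for illustration only
  PRIMARY KEY (dt, shop_id, user_id) NOT ENFORCED
) PARTITIONED BY (dt);
```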
@@ -147,7 +147,7 @@ But you can still use the [Add Partition](engine-flink/ddl.md#add-partition) sta

 #### Multi-Fields Partitioned Table

-Fluss also support [Multi-Fields Partitioning](table-design/data-distribution/partitioning.md#multi-field-partitioned-tables), the following SQL statement creates a Multi-Fields Partitioned Log Table in Fluss:
+Fluss also supports [Multi-Fields Partitioning](table-design/data-distribution/partitioning.md#multi-field-partitioned-tables), the following SQL statement creates a Multi-Fields Partitioned Log Table in Fluss:

 ```sql title="Flink SQL"
 CREATE TABLE my_multi_fields_part_log_table (
@@ -160,9 +160,9 @@ CREATE TABLE my_multi_fields_part_log_table (
 ) PARTITIONED BY (dt, nation);
 ```

-#### Auto partitioned (PrimaryKey/Log) table
+#### Auto Partitioned (Primary Key/Log) Table

-Fluss also support creat Auto Partitioned (PrimaryKey/Log) Table. The following SQL statement creates an Auto Partitioned PrimaryKey Table in Fluss.
+Fluss also supports creating Auto Partitioned (Primary Key/Log) Table. The following SQL statement creates an Auto Partitioned Primary Key Table in Fluss.

 ```sql title="Flink SQL"
 CREATE TABLE my_auto_part_pk_table (
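The auto-partitioned statement also falls outside the hunk. A plausible complete form is sketched below; the `table.auto-partition.*` option names are assumptions about Fluss's table options, not taken from this diff:

```sql title="Flink SQL"
CREATE TABLE my_auto_part_pk_table (
  dt STRING,
  shop_id BIGINT,
  user_id BIGINT,
  PRIMARY KEY (dt, shop_id, user_id) NOT ENFORCED
) PARTITIONED BY (dt) WITH (
  'table.auto-partition.enabled' = 'true', -- assumed option name
  'table.auto-partition.time-unit' = 'DAY' -- assumed option name
);
```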
@@ -195,7 +195,7 @@ CREATE TABLE my_auto_part_log_table (
 );
 ```

-For more details about Auto Partitioned (PrimaryKey/Log) Table, refer to [Auto Partitioning](table-design/data-distribution/partitioning.md#auto-partitioning).
+For more details about Auto Partitioned (Primary Key/Log) Table, refer to [Auto Partitioning](table-design/data-distribution/partitioning.md#auto-partitioning).


 ### Options
@@ -240,8 +240,8 @@ This will entirely remove all the data of the table in the Fluss cluster.

 ## Add Partition

-Fluss support manually add partitions to an exists partitioned table by Fluss Catalog. If the specified partition
-not exists, Fluss will create the partition. If the specified partition already exists, Fluss will ignore the request
+Fluss supports manually adding partitions to an existing partitioned table through the Fluss Catalog. If the specified partition
+does not exist, Fluss will create the partition. If the specified partition already exists, Fluss will ignore the request
 or throw an exception.

 To add partitions, run:
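The statement itself lies outside the hunk; in Flink SQL an add-partition call takes roughly this shape (a sketch: the table name and partition value mirror the Drop Partition example later in this diff):

```sql title="Flink SQL"
ALTER TABLE my_part_pk_table ADD PARTITION (dt = '2025-03-05');
```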
@@ -277,8 +277,8 @@ For more details, refer to the [Flink SHOW PARTITIONS](https://nightlies.apache.

 ## Drop Partition

-Fluss also support manually drop partitions from an exists partitioned table by Fluss Catalog. If the specified partition
-not exists, Fluss will ignore the request or throw an exception.
+Fluss also supports manually dropping partitions from an existing partitioned table through the Fluss Catalog. If the specified partition
+does not exist, Fluss will ignore the request or throw an exception.


 To drop partitions, run:
@@ -291,5 +291,3 @@ ALTER TABLE my_multi_fields_part_log_table DROP PARTITION (dt = '2025-03-05', na
 ```

 For more details, refer to the [Flink ALTER TABLE(DROP)](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/table/sql/alter/#drop) documentation.
-
-
website/docs/intro.md

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ Fluss is a streaming storage built for real-time analytics which can serve as th

 ![arch](/img/fluss.png)

-It bridges the gap between **streaming data** and the data **Lakehouse** by enabling low-latency, high-throughput data ingestion and processing while seamlessly integrating with popular compute engines like **Apache Flink**, while **Apache Spark**, and **StarRocks** are coming soon.
+It bridges the gap between **streaming data** and the data **Lakehouse** by enabling low-latency, high-throughput data ingestion and processing while seamlessly integrating with popular compute engines like **Apache Flink**, with **Apache Spark** and **StarRocks** coming soon.

 Fluss supports `streaming reads` and `writes` with sub-second latency and stores data in a columnar format, enhancing query performance and reducing storage costs.
 It offers flexible table types, including append-only **Log Tables** and updatable **PrimaryKey Tables**, to accommodate diverse real-time analytics and processing needs.
@@ -46,7 +46,7 @@ The following is a list of (but not limited to) use-cases that Fluss shines ✨:
 * **📡 Real-time IoT Pipelines**
 * **🚓 Real-time Fraud Detection**
 * **🚨 Real-time Alerting Systems**
-* **💫 Real-tim ETL/Data Warehouses**
+* **💫 Real-time ETL/Data Warehouses**
 * **🌐 Real-time Geolocation Services**
 * **🚚 Real-time Shipment Update Tracking**

website/docs/table-design/overview.md

Lines changed: 7 additions & 9 deletions
@@ -34,13 +34,13 @@ Tables are classified into two types based on the presence of a primary key:
 - **Log Tables:**
   - Designed for append-only scenarios.
   - Support only INSERT operations.
-- **PrimaryKey Tables:**
+- **Primary Key Tables:**
   - Used for updating and managing data in business databases.
   - Support INSERT, UPDATE, and DELETE operations based on the defined primary key.

-A Table becomes a [Partitioned Table](table-design/data-distribution/partitioning.md) when a partition column is defined. Data with the same partition value is stored in the same partition. Partition columns can be applied to both Log Tables and PrimaryKey Tables, but with specific considerations:
+A Table becomes a [Partitioned Table](data-distribution/partitioning.md) when a partition column is defined. Data with the same partition value is stored in the same partition. Partition columns can be applied to both Log Tables and Primary Key Tables, but with specific considerations:
 - **For Log Tables**, partitioning is commonly used for log data, typically based on date columns, to facilitate data separation and cleaning.
-- **For PrimaryKey Tables**, the partition column must be a subset of the primary key to ensure uniqueness.
+- **For Primary Key Tables**, the partition column must be a subset of the primary key to ensure uniqueness.

 This design ensures efficient data organization, flexibility in handling different use cases, and adherence to data integrity constraints.

@@ -60,14 +60,12 @@ The number of buckets `N` can be configured per table. A bucket is the smallest
 The data of a bucket consists of a LogTablet and a (optional) KvTablet.

 ### LogTablet
-A **LogTablet** needs to be generated for each bucket of Log and PrimaryKey tables.
-For Log Tables, the LogTablet is both the primary table data and the log data. For PrimaryKey tables, the LogTablet acts
+A **LogTablet** needs to be generated for each bucket of Log and Primary Key Tables.
+For Log Tables, the LogTablet is both the primary table data and the log data. For Primary Key Tables, the LogTablet acts
 as the log data for the primary table data.
 - **Segment:** The smallest unit of log storage in the **LogTablet**. A segment consists of an **.index** file and a **.log** data file.
-  - **.index:** An `offset sparse index` that stores the mappings between the physical byte address in the message relative offset -> .log file.
+  - **.index:** An `offset sparse index` that maps message relative offsets to their corresponding physical byte addresses in the .log file.
   - **.log:** Compact arrangement of log data.

 ### KvTablet
-Each bucket of the PrimaryKey table needs to generate a KvTablet. Underlying, each KvTablet corresponds to an embedded RocksDB instance. RocksDB is an LSM (log structured merge) engine which helps KvTablet supports high-performance updates and lookup query.
-
-
+Each bucket of the Primary Key Table needs to generate a KvTablet. Underlying, each KvTablet corresponds to an embedded RocksDB instance. RocksDB is an LSM (log structured merge) engine which helps KvTablet support high-performance updates and lookup queries.
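As the hunk header notes, the number of buckets is configured per table. In Flink SQL that is typically expressed as a table option, sketched here with an assumed `bucket.num` option name and illustrative columns:

```sql title="Flink SQL"
CREATE TABLE my_log_table (
  order_id BIGINT, -- illustrative column
  dt STRING        -- illustrative column
) WITH ('bucket.num' = '4'); -- assumed option: spread the table over 4 buckets
```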

website/docs/table-design/table-types/log-table.md

Lines changed: 3 additions & 3 deletions
@@ -62,7 +62,7 @@ Log Tables in Fluss allow real-time data consumption, preserving the order of da
 ## Column Pruning

 Column pruning is a technique used to reduce the amount of data that needs to be read from storage by eliminating unnecessary columns from the query.
-Fluss supports column pruning for Log Tables and the changelog of PrimaryKey Tables, which can significantly improve query performance by reducing the amount of data that needs to be read from storage and lowering networking costs.
+Fluss supports column pruning for Log Tables and the changelog of Primary Key Tables, which can significantly improve query performance by reducing the amount of data that needs to be read from storage and lowering networking costs.

 What sets Fluss apart is its ability to apply **column pruning during streaming reads**, a capability that is both unique and industry-leading. This ensures that even in real-time streaming scenarios, only the required columns are processed, minimizing resource usage and maximizing efficiency.

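Column pruning needs no dedicated syntax; selecting only the needed columns in the query is enough (a sketch with illustrative table and column names):

```sql title="Flink SQL"
-- Only order_id and dt are read from storage; all other columns are pruned.
SELECT order_id, dt FROM my_log_table;
```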
@@ -90,7 +90,7 @@ Additionally, compression is applied to each column independently, preserving th

 When compression is enabled:
 - For **Log Tables**, data is compressed by the writer on the client side, written in a compressed format, and decompressed by the log scanner on the client side.
-- For **PrimaryKey Table changelogs**, compression is performed server-side since the changelog is generated on the server.
+- For **Primary Key Table changelogs**, compression is performed server-side since the changelog is generated on the server.

 Log compression significantly reduces networking and storage costs. Benchmark results demonstrate that using the ZSTD compression with level 3 achieves a compression ratio of approximately **5x** (e.g., reducing 5GB of data to 1GB).
 Furthermore, read/write throughput improves substantially due to reduced networking overhead.
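The configuration example referenced by the next hunk is elided here; table options of roughly the following shape would express it. The option names below are assumptions, not confirmed by this diff, and the codec and level echo the ZSTD level-3 benchmark mentioned above:

```sql title="Flink SQL"
CREATE TABLE my_compressed_log_table (
  order_id BIGINT,
  dt STRING
) WITH (
  'table.log.arrow.compression.type' = 'ZSTD',   -- assumed option name
  'table.log.arrow.compression.zstd.level' = '3' -- assumed option name
);
```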
@@ -133,4 +133,4 @@ In the above example, we set the compression codec to `LZ4_FRAME` and the compre
 :::

 ## Log Tiering
-Log Table supports tiering data to different storage tiers. See more details about [Remote Log](maintenance/tiered-storage/remote-storage.md).
+Log Table supports tiering data to different storage tiers. See more details about [Remote Log](maintenance/tiered-storage/remote-storage.md).
Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 {
-  "label": "PrimaryKey Table",
+  "label": "Primary Key Table",
   "position": 1
 }

website/docs/table-design/table-types/pk-table/index.md

Lines changed: 17 additions & 17 deletions
@@ -1,5 +1,5 @@
 ---
-title: PrimaryKey Table
+title: Primary Key Table
 sidebar_position: 1
 ---

@@ -21,15 +21,15 @@
 limitations under the License.
 -->

-# PrimaryKey Table
+# Primary Key Table

 ## Basic Concept

-PrimaryKey Table in Fluss ensure the uniqueness of the specified primary key and supports `INSERT`, `UPDATE`,
+Primary Key Table in Fluss ensures the uniqueness of the specified primary key and supports `INSERT`, `UPDATE`,
 and `DELETE` operations.

-A PrimaryKey Table is created by specifying a `PRIMARY KEY` clause in the `CREATE TABLE` statement. For example, the
-following Flink SQL statement creates a PrimaryKey Table with `shop_id` and `user_id` as the primary key and distributes
+A Primary Key Table is created by specifying a `PRIMARY KEY` clause in the `CREATE TABLE` statement. For example, the
+following Flink SQL statement creates a Primary Key Table with `shop_id` and `user_id` as the primary key and distributes
 the data into 4 buckets:

 ```sql title="Flink SQL"
@@ -49,13 +49,13 @@ In Fluss primary key table, each row of data has a unique primary key.
 If multiple entries with the same primary key are written to the Fluss primary key table, only the last entry will be
 retained.

-For [Partitioned PrimaryKey Table](table-design/data-distribution/partitioning.md), the primary key must contain the
+For [Partitioned Primary Key Table](table-design/data-distribution/partitioning.md), the primary key must contain the
 partition key.

 ## Bucket Assigning

 For primary key tables, Fluss always determines which bucket the data belongs to based on the hash value of the bucket
-key (It must be a subset of the primary keys excluding partition keys of the primary key table) for each record. If the bucket key is not specified, the bucket key will used as the primary key (excluding the partition key).
+key (It must be a subset of the primary keys excluding partition keys of the primary key table) for each record. If the bucket key is not specified, the bucket key will be used as the primary key (excluding the partition key).
 Data with the same hash value will be distributed to the same bucket.

 ## Partial Update
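To make the bucket-assigning rule in the hunk above concrete: a table can pin the bucket key to a subset of the primary key via a table option. A sketch follows; the `bucket.key` and `bucket.num` option names are assumptions, and the columns echo the earlier example:

```sql title="Flink SQL"
CREATE TABLE my_bucketed_pk_table (
  shop_id BIGINT,
  user_id BIGINT,
  total_amount INT, -- assumed column, for illustration only
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
) WITH (
  'bucket.num' = '4',      -- assumed option name
  'bucket.key' = 'shop_id' -- assumed option name: hash records on shop_id only
);
```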
@@ -94,20 +94,20 @@ follows:

 ## Merge Engines

-The **Merge Engine** in Fluss is a core component designed to efficiently handle and consolidate data updates for PrimaryKey Tables.
+The **Merge Engine** in Fluss is a core component designed to efficiently handle and consolidate data updates for Primary Key Tables.
 It offers users the flexibility to define how incoming data records are merged with existing records sharing the same primary key.
-However, users can specify a different merge engine to customize the merging behavior according to their specific use cases
+However, users can specify a different merge engine to customize the merging behavior according to their specific use cases.

 The following merge engines are supported:

-1. [Default Merge Engine (LastRow)](table-design/table-types/pk-table/merge-engines/default.md)
-2. [FirstRow Merge Engine](table-design/table-types/pk-table/merge-engines/first-row.md)
-3. [Versioned Merge Engine](table-design/table-types/pk-table/merge-engines/versioned.md)
+1. [Default Merge Engine (LastRow)](merge-engines/default.md)
+2. [FirstRow Merge Engine](merge-engines/first-row.md)
+3. [Versioned Merge Engine](merge-engines/versioned.md)


 ## Changelog Generation

-Fluss will capture the changes when inserting, updating, deleting records on the primary-key table, which is known as
+Fluss will capture the changes when inserting, updating, deleting records on the Primary Key Table, which is known as
 the changelog. Downstream consumers can directly consume the changelog to obtain the changes in the table. For example,
 consider the following primary key table in Fluss:

@@ -121,7 +121,7 @@ CREATE TABLE T
 );
 ```

-If the data written to the primary-key table is
+If the data written to the Primary Key Table is
 sequentially `+I(1, 2.0, 'apple')`, `+I(1, 4.0, 'banana')`, `-D(1, 4.0, 'banana')`, then the following change data will
 be generated. For example, the following Flink SQL statements illustrate this behavior:

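The illustrating statements fall outside the hunk, but under the default LastRow merge engine the expected change stream for the three writes can be sketched. This is an inference using Flink's changelog notation, not text quoted from the docs:

```sql title="Flink SQL"
-- Consuming the changelog of T after the three writes would yield, in order:
--   +I(1, 2.0, 'apple')                       -- first insert for key 1
--   -U(1, 2.0, 'apple'), +U(1, 4.0, 'banana') -- update retracts the old row, adds the new one
--   -D(1, 4.0, 'banana')                      -- delete removes the row
SELECT * FROM T; -- illustrative streaming read of the changelog
```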
@@ -164,13 +164,13 @@ For primary key tables, Fluss supports various kinds of querying abilities.
 For a primary key table, the default read method is a full snapshot followed by incremental data. First, the
 snapshot data of the table is consumed, followed by the changelog data of the table.

-It is also possible to only consume the changelog data of the table. For more details, please refer to the [Flink Reads](engine-flink/reads.md)
+It is also possible to only consume the changelog data of the table. For more details, please refer to the [Flink Reads](../../../engine-flink/reads.md)

 ### Lookup

-Fluss primary key table can lookup data by the primary keys. If the key exists in Fluss, lookup will return a unique row. it always used in [Flink Lookup Join](engine-flink/lookups.md#lookup).
+Fluss primary key table can lookup data by the primary keys. If the key exists in Fluss, lookup will return a unique row. It is always used in [Flink Lookup Join](../../../engine-flink/lookups.md#lookup).

 ### Prefix Lookup

 Fluss primary key table can also do prefix lookup by the prefix subset primary keys. Unlike lookup, prefix lookup
-will scan data based on the prefix of primary keys and may return multiple rows. It always used in [Flink Prefix Lookup Join](engine-flink/lookups.md#prefix-lookup).
+will scan data based on the prefix of primary keys and may return multiple rows. It is always used in [Flink Prefix Lookup Join](../../../engine-flink/lookups.md#prefix-lookup).
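For context on the lookup joins referenced here, Flink expresses them with the standard `FOR SYSTEM_TIME AS OF` clause. A sketch with illustrative table and column names; only the join syntax itself is standard Flink SQL:

```sql title="Flink SQL"
-- Enrich a stream of orders by looking up the customer row by primary key.
SELECT o.order_id, c.customer_name
FROM orders AS o
JOIN fluss_customers FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.customer_id = c.customer_id;
```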
