
Commit dccccee

[docs] fix typos, grammar, casing, and spacing in multiple documents (#1474)

1 parent 9350bb2 commit dccccee

14 files changed: +81 −81 lines changed

website/docs/engine-flink/datastream.mdx

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ sidebar_position: 6
 ## Overview
 The Fluss DataStream Connector for Apache Flink provides a Flink DataStream source implementation for reading data from Fluss tables and a Flink DataStream sink implementation for writing data to Fluss tables. It allows you to seamlessly integrate Fluss tables with Flink's DataStream API, enabling you to process data from Fluss in your Flink applications.

-Key features of the Fluss Datastream Connector include:
+Key features of the Fluss DataStream Connector include:
 * Reading from both primary key tables and log tables
 * Support for projection pushdown to select specific fields
 * Flexible offset initialization strategies
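As context for the connector overview above, a minimal sketch of reading a Fluss table into a DataStream. Only `env.fromSource(...)` returning a `DataStreamSource<RowData>` appears in the hunk below; the `FlussSource` builder, its package, and its method names are illustrative assumptions modeled on common Flink connector builders, not confirmed API:

```java
import com.alibaba.fluss.flink.source.FlussSource; // assumed package and class name

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.RowData;

public class FlussReadExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical builder-style source emitting changelog rows as RowData.
        FlussSource<RowData> source = FlussSource.<RowData>builder()
                .setBootstrapServers("localhost:9123") // assumed local CoordinatorServer port
                .setDatabase("fluss")                  // placeholder database name
                .setTable("orders")                    // placeholder table name
                .build();

        // This call shape matches the env.fromSource(...) line in the next hunk.
        DataStreamSource<RowData> stream =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "Fluss Source");

        stream.print();
        env.execute("Read from Fluss");
    }
}
```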
@@ -191,7 +191,7 @@ DataStreamSource<RowData> stream = env.fromSource(
 // For INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE events
 ```
-**Note:** If you are mapping from `RowData` to your pojos object, you might want to include the row kind operation.
+**Note:** If you are mapping from `RowData` to your POJO objects, you might want to include the row kind operation.

 #### Reading from a Log Table
 When reading from a log table, all records are emitted with `RowKind.INSERT` since log tables only support appends.
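To make the corrected note concrete, a small sketch of carrying the row kind along when mapping `RowData` to a POJO. `RowData.getRowKind()` is standard Flink Table API; the `OrderEvent` class and its field positions are hypothetical stand-ins:

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.table.data.RowData;
import org.apache.flink.types.RowKind;

// Hypothetical POJO; the field layout is an assumption, not taken from the Fluss docs.
class OrderEvent {
    RowKind kind; // INSERT, UPDATE_BEFORE, UPDATE_AFTER, or DELETE
    long orderId;
    long amount;
}

class RowDataToOrderEvent implements MapFunction<RowData, OrderEvent> {
    @Override
    public OrderEvent map(RowData row) {
        OrderEvent e = new OrderEvent();
        e.kind = row.getRowKind();  // keep the changelog operation with the record
        e.orderId = row.getLong(0); // positions assume (order_id BIGINT, amount BIGINT)
        e.amount = row.getLong(1);
        return e;
    }
}
```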

website/docs/engine-flink/getting-started.md

Lines changed: 4 additions & 4 deletions
@@ -23,7 +23,7 @@ For Flink's Table API, Fluss supports the following features:
 | Feature support                                    | Flink | Notes |
 |----------------------------------------------------|-------|-------|
 | [SQL create catalog](ddl.md#create-catalog)        | ✔️    |       |
-| [SQl create database](ddl.md#create-database)      | ✔️    |       |
+| [SQL create database](ddl.md#create-database)      | ✔️    |       |
 | [SQL drop database](ddl.md#drop-database)          | ✔️    |       |
 | [SQL create table](ddl.md#create-table)            | ✔️    |       |
 | [SQL create table like](ddl.md#create-table-like)  | ✔️    |       |
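The corrected row covers statements of the following shape; the catalog and database names are placeholders:

```sql title="Flink SQL"
USE CATALOG fluss_catalog;
CREATE DATABASE my_db;
```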
@@ -43,7 +43,7 @@ For Flink's DataStream API, you can see [DataStream API](docs/engine-flink/datas
 ## Preparation when using Flink SQL Client
 - **Download Flink**

-  Flink runs on all UNIX-like environments, i.e. Linux, Mac OS X, and Cygwin (for Windows).
+  Flink runs on all UNIX-like environments, i.e., Linux, Mac OS X, and Cygwin (for Windows).
   If you haven’t downloaded Flink, you can download [the binary release](https://flink.apache.org/downloads.html) of Flink, then extract the archive with the following command.
   ```shell
   tar -xzf flink-1.20.1-bin-scala_2.12.tgz
@@ -70,7 +70,7 @@ You should be able to navigate to the web UI at [localhost:8081](http://localhos
 ```shell
 ps aux | grep flink
 ```
-- **Start a sql client**
+- **Start a SQL Client**

 To quickly stop the cluster and all running components, you can use the provided script:
 ```shell
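The renamed heading refers to Flink's bundled SQL Client. Its launch command falls outside this hunk's context window; the standard invocation from the Flink distribution root is:

```shell
./bin/sql-client.sh
```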
@@ -92,7 +92,7 @@ CREATE CATALOG fluss_catalog WITH (
 you should start the Fluss server first. See [Deploying Fluss](install-deploy/overview.md#how-to-deploy-fluss)
 for how to build a Fluss cluster.
 Here, it is assumed that there is a Fluss cluster running on your local machine and the CoordinatorServer port is 9123.
-2. The` bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set with one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
+2. The `bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set with one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
 :::

 ## Creating a Table
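For context, the catalog definition this hunk annotates looks roughly like the following; the property values are assumptions for a local setup with the CoordinatorServer on port 9123, as the note above describes:

```sql title="Flink SQL"
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'localhost:9123'
);
```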

website/docs/engine-flink/lookups.md

Lines changed: 2 additions & 2 deletions
@@ -232,7 +232,7 @@ Continuing from the previous prefix lookup example, if our dimension table is a
 ```sql title="Flink SQL"
 -- primary keys are (c_custkey, c_nationkey, dt)
 -- bucket key is (c_custkey)
-CREATE TABLE `fluss_catalog`.`my_db`.`customer_partitioned_with_bukcet_key` (
+CREATE TABLE `fluss_catalog`.`my_db`.`customer_partitioned_with_bucket_key` (
   `c_custkey` INT NOT NULL,
   `c_name` STRING NOT NULL,
   `c_address` STRING NOT NULL,
@@ -259,7 +259,7 @@ INSERT INTO prefix_lookup_join_sink
 SELECT `o`.`o_orderkey`, `o`.`o_totalprice`, `c`.`c_name`, `c`.`c_address`
 FROM
   (SELECT `orders_with_dt`.*, proctime() AS ptime FROM `orders_with_dt`) AS `o`
-  LEFT JOIN `customer_partitioned_with_bukcet_key`
+  LEFT JOIN `customer_partitioned_with_bucket_key`
   FOR SYSTEM_TIME AS OF `o`.`ptime` AS `c`
   ON `o`.`o_custkey` = `c`.`c_custkey` AND `o`.`o_dt` = `c`.`dt`;

website/docs/engine-flink/reads.md

Lines changed: 2 additions & 2 deletions
@@ -7,12 +7,12 @@ sidebar_position: 4
 # Flink Reads
 Fluss supports streaming and batch read with [Apache Flink](https://flink.apache.org/)'s SQL & Table API. Execute the following SQL command to switch execution mode from streaming to batch, and vice versa:
 ```sql title="Flink SQL"
--- Execute the flink job in streaming mode for current session context
+-- Execute the Flink job in streaming mode for current session context
 SET 'execution.runtime-mode' = 'streaming';
 ```

 ```sql title="Flink SQL"
--- Execute the flink job in batch mode for current session context
+-- Execute the Flink job in batch mode for current session context
 SET 'execution.runtime-mode' = 'batch';
 ```

website/docs/engine-flink/writes.md

Lines changed: 5 additions & 5 deletions
@@ -15,7 +15,7 @@ Fluss primary key tables can accept all types of messages (`INSERT`, `UPDATE_BEF
 They support both streaming and batch modes and are compatible with primary-key tables (for upserting data) as well as log tables (for appending data).

 ### Appending Data to the Log Table
-#### Create a Log table.
+#### Create a Log Table.
 ```sql title="Flink SQL"
 CREATE TABLE log_table (
   order_id BIGINT,
@@ -25,7 +25,7 @@ CREATE TABLE log_table (
 );
 ```

-#### Insert data into the Log table.
+#### Insert Data into the Log Table.
 ```sql title="Flink SQL"
 CREATE TEMPORARY TABLE source (
   order_id BIGINT,
@@ -91,15 +91,15 @@ SELECT shop_id, user_id, num_orders FROM source;
 
 Fluss supports deleting data for primary-key tables in batch mode via `DELETE FROM` statement. Currently, only single data deletions based on the primary key are supported.

-* the primary key table
+* the Primary Key Table
 ```sql title="Flink SQL"
 -- DELETE statement requires batch mode
 SET 'execution.runtime-mode' = 'batch';
 ```

 ```sql title="Flink SQL"
 -- The condition must include all primary key equality conditions.
-DELETE FROM pk_table WHERE shop_id = 10000 and user_id = 123456;
+DELETE FROM pk_table WHERE shop_id = 10000 AND user_id = 123456;
 ```

 ## UPDATE
@@ -112,5 +112,5 @@ SET execution.runtime-mode = batch;
 
 ```sql title="Flink SQL"
 -- The condition must include all primary key equality conditions.
-UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000 and user_id = 123456;
+UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000 AND user_id = 123456;
 ```
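The `DELETE` and `UPDATE` examples in the two hunks above assume a primary-key table along these lines; this is a hedged reconstruction from the column names used in the statements, with types and the `NOT ENFORCED` clause assumed rather than shown in this diff:

```sql title="Flink SQL"
CREATE TABLE pk_table (
  shop_id BIGINT,
  user_id BIGINT,
  num_orders INT,
  total_amount INT,
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
);
```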

website/docs/install-deploy/deploying-with-docker.md

Lines changed: 1 addition & 1 deletion
@@ -316,7 +316,7 @@ volumes:
 
 ### Launch the components

-Save the `docker-compose.yaml` script and execute the `docker compose up -d` command in the same directory
+Save the `docker-compose.yml` script and execute the `docker compose up -d` command in the same directory
 to create the cluster.

 Run the below command to check the container status:
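The status command itself is cut off by the hunk's context window; a conventional check, consistent with the `docker compose up -d` invocation above, would be:

```shell
docker compose ps
```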
