
Commit 8e88882

fix docs
1 parent 3081e71 commit 8e88882

12 files changed: +33 −33 lines changed

website/docs/engine-flink/datastream.mdx

Lines changed: 1 addition & 1 deletion

````diff
@@ -1,6 +1,6 @@
 ---
 title: "DataStream API"
-sidebar_position: 7
+sidebar_position: 8
 ---
 
 # DataStream API
````
Lines changed: 2 additions & 2 deletions

````diff
@@ -27,7 +27,7 @@ The following properties can be set if using the Fluss catalog:
 | bootstrap.servers | required | (none) | Comma separated list of Fluss servers. |
 | default-database | optional | fluss | The default database to use when switching to this catalog. |
 | client.security.protocol | optional | PLAINTEXT | The security protocol used to communicate with brokers. Currently, only `PLAINTEXT` and `SASL` are supported, the configuration value is case insensitive. |
-| `client.security.{protocol}.*` | optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., client.security.sasl.jaas.config. More Details in [authentication](../../security/authentication.md) |
+| `client.security.{protocol}.*` | optional | (none) | Client-side configuration properties for a specific authentication protocol. E.g., client.security.sasl.jaas.config. More Details in [authentication](../security/authentication.md) |
 | `{lake-format}.*` | optional | (none) | Extra properties to be passed to the lake catalog. This is useful for configuring sensitive settings, such as the username and password required for lake catalog authentication. E.g., `paimon.jdbc.password = pass`. |
 
 The following statements assume that the current catalog has been switched to the Fluss catalog using the `USE CATALOG <catalog_name>` statement.
````
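For context, the catalog properties listed in this hunk are the ones supplied when registering the catalog. A minimal Flink SQL sketch (the catalog name, server address, and the `'type' = 'fluss'` identifier are assumptions for illustration and do not appear in this diff):

```sql
-- Hypothetical example: register a Fluss catalog using the properties
-- documented in the table above, then switch to it.
CREATE CATALOG my_fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'fluss-server-1:9123',   -- comma separated list of Fluss servers
  'default-database' = 'fluss',
  'client.security.protocol' = 'PLAINTEXT'
);

USE CATALOG my_fluss_catalog;
```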
````diff
@@ -125,7 +125,7 @@ CREATE TABLE my_part_log_table (
 Fluss partitioned table supports dynamic partition creation, which means you can write data into a partition without pre-creating it.
 You can use the `INSERT INTO` statement to write data into a partitioned table, and Fluss will automatically create the partition if it does not exist.
 See the [Dynamic Partitioning](table-design/data-distribution/partitioning.md#dynamic-partitioning) for more details.
-But you can still use the [Add Partition](engine-flink/ddl/index.md#add-partition) statement to manually add partitions if needed.
+But you can still use the [Add Partition](engine-flink/ddl.md#add-partition) statement to manually add partitions if needed.
 :::
 
 #### Multi-Fields Partitioned Table
````
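The dynamic-partitioning behavior described in that hunk can be sketched in Flink SQL as follows (column values and the partition column `dt` are hypothetical; only the `INSERT INTO` auto-creation and manual Add Partition behavior are taken from the text):

```sql
-- Writing a row whose partition does not yet exist triggers automatic
-- partition creation (per the note above); schema is hypothetical.
INSERT INTO my_part_log_table
VALUES (1, 'click', '2024-01-01');

-- Manually pre-creating a partition remains possible via Add Partition:
ALTER TABLE my_part_log_table ADD PARTITION (dt = '2024-01-01');
```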

website/docs/engine-flink/delta-joins.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -1,7 +1,7 @@
 ---
 sidebar_label: Delta Joins
 title: Flink Delta Joins
-sidebar_position: 6
+sidebar_position: 7
 ---
 
 # The Delta Join
````

website/docs/engine-flink/getting-started.md

Lines changed: 13 additions & 13 deletions

````diff
@@ -22,19 +22,19 @@ For Flink's Table API, Fluss supports the following features:
 
 | Feature Support | Flink | Notes |
 |---------------------------------------------------|-------|----------------------------------------|
-| [SQL Create Catalog](ddl/index.md#create-catalog) | ✔️ | |
-| [SQL Create Database](ddl/index.md#create-database) | ✔️ | |
-| [SQL Drop Database](ddl/index.md#drop-database) | ✔️ | |
-| [SQL Create Table](ddl/index.md#create-table) | ✔️ | |
-| [SQL Create Table Like](ddl/index.md#create-table-like) | ✔️ | |
-| [SQL Drop Table](ddl/index.md#drop-table) | ✔️ | |
-| [SQL Create Materialized Table](ddl/index.md#materialized-table) | ✔️ | Continuous refresh mode only |
-| [SQL Alter Materialized Table](ddl/index.md#alter-materialized-table) | ✔️ | Suspend/Resume support |
-| [SQL Drop Materialized Table](ddl/index.md#drop-materialized-table) | ✔️ | |
-| [SQL Show Partitions](ddl/index.md#show-partitions) | ✔️ | |
-| [SQL Add Partition](ddl/index.md#add-partition) | ✔️ | |
-| [SQL Drop Partition](ddl/index.md#drop-partition) | ✔️ | |
-| [Procedures](ddl/index.md#procedures) | ✔️ | ACL management and cluster configuration |
+| [SQL Create Catalog](ddl.md#create-catalog) | ✔️ | |
+| [SQL Create Database](ddl.md#create-database) | ✔️ | |
+| [SQL Drop Database](ddl.md#drop-database) | ✔️ | |
+| [SQL Create Table](ddl.md#create-table) | ✔️ | |
+| [SQL Create Table Like](ddl.md#create-table-like) | ✔️ | |
+| [SQL Drop Table](ddl.md#drop-table) | ✔️ | |
+| [SQL Create Materialized Table](ddl.md#materialized-table) | ✔️ | Continuous refresh mode only |
+| [SQL Alter Materialized Table](ddl.md#alter-materialized-table) | ✔️ | Suspend/Resume support |
+| [SQL Drop Materialized Table](ddl.md#drop-materialized-table) | ✔️ | |
+| [SQL Show Partitions](ddl.md#show-partitions) | ✔️ | |
+| [SQL Add Partition](ddl.md#add-partition) | ✔️ | |
+| [SQL Drop Partition](ddl.md#drop-partition) | ✔️ | |
+| [Procedures](ddl.md#procedures) | ✔️ | ACL management and cluster configuration |
 | [SQL Select](reads.md) | ✔️ | Support both streaming and batch mode. |
 | [SQL Limit](reads.md#limit-read) | ✔️ | Only for Log Table |
 | [SQL Insert Into](writes.md) | ✔️ | Support both streaming and batch mode. |
````

website/docs/engine-flink/lookups.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -1,7 +1,7 @@
 ---
 sidebar_label: Lookups
 title: Flink Lookup Joins
-sidebar_position: 5
+sidebar_position: 6
 ---
 
 # Flink Lookup Joins
````

website/docs/engine-flink/options.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -1,6 +1,6 @@
 ---
 title: Connector Options
-sidebar_position: 8
+sidebar_position: 9
 ---
 
 # Connector Options
@@ -57,7 +57,7 @@ Using `ALTER TABLE ... SET` statement to modify the table options. For example,
 ALTER TABLE log_table SET ('table.datalake.enable' = 'true');
 ```
 
-See more details about [ALTER TABLE ... SET](engine-flink/ddl/index.md#set-properties) and [ALTER TABLE ... RESET](engine-flink/ddl/index.md#reset-properties) documentation.
+See more details about [ALTER TABLE ... SET](engine-flink/ddl.md#set-properties) and [ALTER TABLE ... RESET](engine-flink/ddl.md#reset-properties) documentation.
 
 ## Storage Options
 
````
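The hunk above links both statements but only shows a `SET` example. For symmetry, the corresponding `RESET` call (standard Flink SQL syntax, reverting the option to its default) would look like:

```sql
-- Revert the option set in the example above back to its default value
ALTER TABLE log_table RESET ('table.datalake.enable');
```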

website/docs/engine-flink/ddl/procedures.md renamed to website/docs/engine-flink/procedures.md

Lines changed: 6 additions & 6 deletions

````diff
@@ -18,9 +18,9 @@ SHOW PROCEDURES;
 
 ## Access Control Procedures
 
-Fluss provides procedures to manage Access Control Lists (ACLs) for security and authorization.
+Fluss provides procedures to manage Access Control Lists (ACLs) for security and authorization. See the [Security](../security/overview.md) documentation for more details.
 
-## add_acl
+### add_acl
 
 Add an ACL entry to grant permissions to a principal.
 
@@ -69,7 +69,7 @@ CALL sys.add_acl(
 );
 ```
 
-## drop_acl
+### drop_acl
 
 Remove an ACL entry to revoke permissions.
 
@@ -114,7 +114,7 @@ CALL sys.drop_acl(
 );
 ```
 
-## list_acl
+### list_acl
 
 List ACL entries matching the specified filters.
 
@@ -162,7 +162,7 @@ CALL sys.list_acl(
 
 Fluss provides procedures to dynamically manage cluster configurations without requiring a server restart.
 
-## get_cluster_config
+### get_cluster_config
 
 Retrieve cluster configuration values.
 
@@ -200,7 +200,7 @@ CALL sys.get_cluster_config(
 CALL sys.get_cluster_config();
 ```
 
-## set_cluster_config
+### set_cluster_config
 
 Set or delete a cluster configuration dynamically.
 
````
website/docs/engine-flink/reads.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -1,7 +1,7 @@
 ---
 sidebar_label: Reads
 title: Flink Reads
-sidebar_position: 4
+sidebar_position: 5
 ---
 
 # Flink Reads
````

website/docs/engine-flink/writes.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -1,7 +1,7 @@
 ---
 sidebar_label: Writes
 title: Flink Writes
-sidebar_position: 3
+sidebar_position: 4
 ---
 
 # Flink Writes
````

website/docs/maintenance/operations/updating-configs.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -18,7 +18,7 @@ Currently, the supported dynamically updatable server configurations include:
 - `kv.rocksdb.shared-rate-limiter.bytes-per-sec`: Control RocksDB flush and compaction write rate shared across all RocksDB instances on the TabletServer. The rate limiter is always enabled. Set to a lower value (e.g., 100MB) to limit the rate, or a very high value to effectively disable rate limiting.
 
 
-You can update the configuration of a cluster with [Java client](#using-java-client) or [Flink Stored Procedures](../../engine-flink/ddl/procedures.md#cluster-configuration-procedures).
+You can update the configuration of a cluster with [Java client](#using-java-client) or [Flink Procedures](../../engine-flink/procedures.md#cluster-configuration-procedures).
 
 ### Using Java Client
 
@@ -48,11 +48,11 @@ The `AlterConfig` class contains three properties:
 
 ### Using Flink Stored Procedures
 
-For certain configurations, Fluss provides convenient Flink stored procedures that can be called directly from Flink SQL. See [Procedures](engine-flink/ddl/procedures.md#cluster-configuration-procedures) for detailed documentation on using `get_cluster_config` and `set_cluster_config` procedures.
+For certain configurations, Fluss provides convenient Flink stored procedures that can be called directly from Flink SQL. See [Procedures](engine-flink/procedures.md#cluster-configuration-procedures) for detailed documentation on using `get_cluster_config` and `set_cluster_config` procedures.
 
 ## Updating Table Configs
 
-The connector options on a table including [Storage Options](engine-flink/options.md#storage-options) can be updated dynamically by [ALTER TABLE ... SET](engine-flink/ddl/index.md#alter-table) statement. See the example below:
+The connector options on a table including [Storage Options](engine-flink/options.md#storage-options) can be updated dynamically by [ALTER TABLE ... SET](engine-flink/ddl.md#alter-table) statement. See the example below:
 
 ```sql
 -- Enable lakehouse storage for the given table
````