-After the Partitioned (PrimaryKey/Log) Table is created, you first need to manually create the corresponding partition using the [Add Partition](/docs/engine-flink/ddl.md#add-partition) statement.
+After the Partitioned (PrimaryKey/Log) Table is created, you first need to manually create the corresponding partition using the [Add Partition](engine-flink/ddl.md#add-partition) statement.
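For illustration, a minimal sketch of creating such a partition from the Flink SQL Client, assuming the syntax described in the linked Add Partition section; the table name and partition value are illustrative, not from this page:

```sql
-- Assumes a table partitioned by a STRING column `dt`;
-- the table name and partition value are illustrative.
ALTER TABLE my_partitioned_table ADD PARTITION (dt = '2025-03-05');
```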
-For more details about Auto Partitioned (PrimaryKey/Log) Tables, refer to [Auto Partitioning Options](/docs/table-design/data-distribution/partitioning/#auto-partitioning-options).
+For more details about Auto Partitioned (PrimaryKey/Log) Tables, refer to [Auto Partitioning Options](table-design/data-distribution/partitioning.md#auto-partitioning-options).
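A hedged sketch of what an auto-partitioned table definition might look like; the `table.auto-partition.*` option names are assumed from the linked Auto Partitioning Options page, and all identifiers are illustrative:

```sql
-- A partitioned primary-key table with auto-partitioning enabled.
-- Option names are assumed from the Auto Partitioning Options page;
-- table and column names are illustrative.
CREATE TABLE site_access (
  dt STRING,
  user_id BIGINT,
  num_visits INT,
  PRIMARY KEY (dt, user_id) NOT ENFORCED
) PARTITIONED BY (dt) WITH (
  'table.auto-partition.enabled' = 'true',
  'table.auto-partition.time-unit' = 'day'
);
```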
### Options
@@ -167,8 +167,8 @@ The supported options in "with" parameters when creating a table are as follows:
| bucket.num | int | optional | The bucket number of the Fluss cluster. | The number of buckets of a Fluss table. |
| bucket.key | String | optional | (none) | Specifies the distribution policy of the Fluss table. Data is distributed to each bucket according to the hash value of the bucket key. If you specify multiple fields, separate them with ','. If the table has a primary key, you currently cannot specify a bucket key: the bucket keys are always the primary key (excluding the partition key). If the table has no primary key, you can specify a bucket key; when no bucket key is specified, data is distributed to buckets randomly. |
-| table.* |||| All the [`table.` prefix configurations](/docs/maintenance/configuration.md) can be defined in "with" options. |
-| client.* |||| All the [`client.` prefix configurations](/docs/maintenance/configuration.md) can be defined in "with" options. |
+| table.* |||| All the [`table.` prefix configurations](maintenance/configuration.md) can be defined in "with" options. |
+| client.* |||| All the [`client.` prefix configurations](maintenance/configuration.md) can be defined in "with" options. |
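To show how these options combine, a minimal sketch of a log table (no primary key, so `bucket.key` may be set explicitly); all identifiers are illustrative, and `table.log.ttl` is an assumed example of a `table.`-prefix option:

```sql
-- A log table (no primary key), so `bucket.key` may be set explicitly.
-- Identifiers are illustrative; `table.log.ttl` is an assumed example
-- of a `table.`-prefix option.
CREATE TABLE access_log (
  user_id BIGINT,
  page STRING,
  access_time TIMESTAMP(3)
) WITH (
  'bucket.num' = '4',
  'bucket.key' = 'user_id',
  'table.log.ttl' = '7d'
);
```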
-If you use [Amazon S3](http://aws.amazon.com/s3/), [Aliyun OSS](https://www.aliyun.com/product/oss) or [HDFS (Hadoop Distributed File System)](https://hadoop.apache.org/docs/stable/) as Fluss's [remote storage](/docs/maintenance/tiered-storage/remote-storage),
+If you use [Amazon S3](http://aws.amazon.com/s3/), [Aliyun OSS](https://www.aliyun.com/product/oss) or [HDFS (Hadoop Distributed File System)](https://hadoop.apache.org/docs/stable/) as Fluss's [remote storage](maintenance/tiered-storage/remote-storage.md),
you should download the corresponding [Fluss filesystem jar](/downloads#filesystem-jars) and copy it to the `lib` directory of your Flink home.
:::
@@ -79,7 +79,7 @@ CREATE CATALOG fluss_catalog WITH (
:::note
1. The `bootstrap.servers` option specifies the Fluss server address. Before you configure `bootstrap.servers`,
-   you should start the Fluss server first. See [Deploying Fluss](/docs/install-deploy/overview/#how-to-deploy-fluss)
+   you should start the Fluss server first. See [Deploying Fluss](install-deploy/overview.md#how-to-deploy-fluss)
   for how to build a Fluss cluster.
   Here, it is assumed that there is a Fluss cluster running on your local machine and that the CoordinatorServer port is 9123.
2. The `bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set to one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
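To make the note concrete: a minimal sketch of creating the catalog against such a local cluster, matching the `CREATE CATALOG fluss_catalog WITH (` context shown in the hunk header above (the address reflects the assumed local setup):

```sql
-- Assumes a local Fluss cluster whose CoordinatorServer listens on port 9123,
-- as described in the note above.
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'localhost:9123'
);
```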
**website/docs/install-deploy/deploying-distributed-cluster.md** (+5 -5)
@@ -47,8 +47,8 @@ Node1 will deploy the CoordinatorServer and one TabletServer, Node2 and Node3 wi
Go to the [downloads page](/downloads) and download the latest Fluss release. After downloading the latest release, copy the archive to all the nodes and extract it:
```shell
-tar -xzf fluss-<fluss-version>-bin.tgz
-cd fluss-<fluss-version>/
+tar -xzf fluss-$FLUSS_VERSION$-bin.tgz
+cd fluss-$FLUSS_VERSION$/
```
### Configuring Fluss
@@ -86,7 +86,7 @@ tablet-server.id: 3
:::note
- `tablet-server.id` is the unique id of the TabletServer; if you have multiple TabletServers, you should set a different id for each TabletServer.
-- In this example, we only set the properties that must be configured; for other properties, refer to [Configuration](/docs/maintenance/configuration/) for more details.
+- In this example, we only set the properties that must be configured; for other properties, refer to [Configuration](maintenance/configuration.md) for more details.
:::
### Starting Fluss
@@ -121,7 +121,7 @@ Using Flink SQL Client to interact with Fluss.
#### Preparation
-You can start a Flink standalone cluster by referring to [Flink Environment Preparation](/docs/engine-flink/getting-started#preparation-when-using-flink-sql-client).
+You can start a Flink standalone cluster by referring to [Flink Environment Preparation](engine-flink/getting-started.md#preparation-when-using-flink-sql-client).
**Note**: Make sure the [Fluss connector jar](/downloads/) has already been copied to the `lib` directory of your Flink home.
@@ -138,4 +138,4 @@ CREATE CATALOG fluss_catalog WITH (
#### Do more with Fluss
After the catalog is created, you can use the Flink SQL Client to do more with Fluss: for example, create a table, insert data, or query data.
-For more details, please refer to [Flink Getting Started](/docs/engine-flink/getting-started/).
+For more details, please refer to [Flink Getting Started](engine-flink/getting-started.md).
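As a hedged sketch of the follow-up steps that sentence describes, with every identifier below an illustrative assumption:

```sql
-- Switch to the newly created catalog, then create, write, and read a table.
-- All identifiers are illustrative assumptions.
USE CATALOG fluss_catalog;

-- A simple log table in the Fluss catalog.
CREATE TABLE greetings (
  id INT,
  message STRING
);

-- Write a row, then read the table back.
INSERT INTO greetings VALUES (1, 'hello, fluss');
SELECT * FROM greetings;
```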
**website/docs/install-deploy/deploying-local-cluster.md** (+4 -4)
@@ -25,8 +25,8 @@ Go to the [downloads page](/downloads) and download the latest Fluss release. Ma
package **matching your Java version**. After downloading the latest release, extract it:
```shell
-tar -xzf fluss-<fluss-version>-bin.tgz
-cd fluss-<fluss-version>/
+tar -xzf fluss-$FLUSS_VERSION$-bin.tgz
+cd fluss-$FLUSS_VERSION$/
```
## Starting Fluss Local Cluster
@@ -49,7 +49,7 @@ Using Flink SQL Client to interact with Fluss.
#### Preparation
-You can start a Flink standalone cluster by referring to [Flink Environment Preparation](/docs/engine-flink/getting-started#preparation-when-using-flink-sql-client).
+You can start a Flink standalone cluster by referring to [Flink Environment Preparation](engine-flink/getting-started.md#preparation-when-using-flink-sql-client).
**Note**: Make sure the [Fluss connector jar](/downloads/) has already been copied to the `lib` directory of your Flink home.
@@ -66,4 +66,4 @@ CREATE CATALOG fluss_catalog WITH (
#### Do more with Fluss
After the catalog is created, you can use the Flink SQL Client to do more with Fluss: for example, create a table, insert data, or query data.
-For more details, please refer to [Flink Getting Started](/docs/engine-flink/getting-started/).
+For more details, please refer to [Flink Getting Started](engine-flink/getting-started.md).