Merged
43 changes: 43 additions & 0 deletions .github/workflows/docs-check.yaml
@@ -0,0 +1,43 @@
################################################################################
# Copyright (c) 2025 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# This workflow is meant for checking broken links in the documentation.
name: Checks Documentation
on:
  pull_request:
    branches: [main, release-*, ci-*]
    paths:
      - 'website/**'

jobs:
  test-deploy:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./website
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Generate versioned docs
        run: ./build_versioned_docs.sh
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install dependencies
        run: npm install
      - name: Test build website
        run: npm run build -- --no-minify
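Taken together, the job is equivalent to running the following from a local checkout — a minimal sketch, assuming bash, git, and Node 18 are installed:

```shell
# Local equivalent of the docs-check job (sketch; assumes Node 18)
cd website
./build_versioned_docs.sh      # generate versioned docs from release-x.y branches
npm install                    # install the site's dependencies
npm run build -- --no-minify   # per the workflow's stated purpose, the build fails on broken links
```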
51 changes: 51 additions & 0 deletions .github/workflows/docs-deploy.yaml
@@ -0,0 +1,51 @@
################################################################################
# Copyright (c) 2025 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
name: Deploy Documentation
on:
  push:
    branches: [main, release-*]
    paths:
      - 'website/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./website
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Generate versioned docs
        run: ./build_versioned_docs.sh
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install dependencies
        run: npm install
      - name: Build website
        run: npm run build -- --no-minify
      - uses: webfactory/[email protected]
        with:
          ssh-private-key: ${{ secrets.GH_PAGES_DEPLOY }}
      - name: Deploy website
        env:
          USE_SSH: true
        run: |
          git config --global user.email "[email protected]"
          git config --global user.name "gh-actions"
          npm run deploy -- --skip-build
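`npm run deploy` is Docusaurus's publish command; with `USE_SSH=true` it pushes the already-built site over SSH, which is why the deploy key loaded by `webfactory/ssh-agent` is required. A rough manual equivalent, assuming a deploy key authorized for the repository is already loaded in your SSH agent:

```shell
# Manual equivalent of the deploy step (sketch; assumes an authorized SSH key is loaded)
cd website
npm run build -- --no-minify
USE_SSH=true npm run deploy -- --skip-build   # publish the prebuilt site, typically to the gh-pages branch
```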
3 changes: 3 additions & 0 deletions .gitignore
@@ -33,3 +33,6 @@ website/npm-debug.log*
website/yarn-debug.log*
website/yarn-error.log*
website/package-lock.json
+website/versioned_docs
+website/versioned_sidebars
+website/versions.json
133 changes: 133 additions & 0 deletions website/build_versioned_docs.sh
@@ -0,0 +1,133 @@
#!/usr/bin/env bash
#
# Copyright (c) 2025 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

SCRIPT_PATH=$(cd "$(dirname "$0")" && pwd)
VERSIONED_DOCS="$SCRIPT_PATH/versioned_docs"
VERSIONED_SIDEBARS="$SCRIPT_PATH/versioned_sidebars"

mkdir -p "$VERSIONED_DOCS"
mkdir -p "$VERSIONED_SIDEBARS"

# Configure the remote repository URL and temporary directory
REPO_URL="https://github.com/alibaba/fluss.git"
TEMP_DIR=$(mktemp -d)

# Check if the temporary directory was successfully created
if [ ! -d "$TEMP_DIR" ]; then
  echo "Failed to create temporary directory"
  exit 1
fi

echo "Cloning remote repository to temporary directory: $TEMP_DIR"
git clone "$REPO_URL" "$TEMP_DIR"

# Enter the temporary directory
cd "$TEMP_DIR" || { echo "Failed to enter temporary directory"; exit 1; }


# Match branches in the format "release-x.y"
regex='release-[0-9]+\.[0-9]+$' # Regular expression to match release-x.y
branches=$(git branch -a | grep -E "$regex") # Filter branches that match the criteria

# Exit the script if no matching branches are found
if [ -z "$branches" ]; then
  echo "No branches matching 'release-x.y' format found"
  exit 0
fi

echo "Matched branches:"
echo "$branches"

##################################################################################################
# Generate versions.json file
##################################################################################################

# Initialize JSON array
versions_json="["

# Iterate over each matched branch
for branch in $branches; do
  # Extract the version number part (remove the "release-" prefix)
  version=$(echo "$branch" | sed 's|remotes/origin/release-||')

  # Add to the JSON array
  versions_json+="\"$version\", "
done

# Remove the last comma and space, and close the JSON array
versions_json="${versions_json%, }]"
echo "Generated the versions JSON: $versions_json"

# Output to versions.json file
echo "$versions_json" > "$SCRIPT_PATH/versions.json"
echo "Operation completed! Versions information has been saved to $SCRIPT_PATH/versions.json file."

##################################################################################################
# Generate versioned sidebars JSON file
##################################################################################################

sidebar_json='{
  "docsSidebar": [
    {
      "type": "autogenerated",
      "dirName": "."
    }
  ]
}'

# Handle the OS-specific cp command
if [ "$(uname)" == "Darwin" ]; then
  CP_CMD="cp -R website/docs/ "
else
  CP_CMD="cp -r website/docs/* "
fi
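The split is needed because BSD `cp` (macOS) interprets a trailing slash on the source as "copy the directory's contents", while GNU `cp` (Linux) would copy the directory itself, so the Linux branch globs the contents explicitly. A quick illustration, assuming a directory `src` containing a single file `a.md`:

```shell
# BSD cp (macOS): the trailing slash copies the contents of src
cp -R src/ dest/      # result: dest/a.md

# GNU cp (Linux): the same invocation would yield dest/src/a.md,
# so the contents are globbed instead
cp -r src/* dest/     # result: dest/a.md
```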

# Iterate over each matched branch
for branch in $branches; do
  # Remove the remote branch prefix "remotes/origin/"
  clean_branch_name=$(echo "$branch" | sed 's|remotes/origin/||')
  version=$(echo "$branch" | sed 's|remotes/origin/release-||')

  echo "Processing branch: $clean_branch_name"

  # Check out the branch
  git checkout "$clean_branch_name" || { echo "Failed to checkout branch: $clean_branch_name"; continue; }

  version_sidebar_file="$VERSIONED_SIDEBARS/version-$version-sidebars.json"
  echo "$sidebar_json" > "$version_sidebar_file" || { echo "Failed to generate sidebar file for version '$version'"; continue; }
  echo "Generated sidebar file for version '$version': $version_sidebar_file"

  # Check if the website/docs directory exists
  if [ -d "website/docs" ]; then
    # Create the target subdirectory (named after the version)
    version_dir="$VERSIONED_DOCS/version-$version"
    mkdir -p "$version_dir"

    # Copy the website/docs directory into the target directory
    $CP_CMD "$version_dir/" || { echo "Failed to copy for branch: $clean_branch_name"; continue; }
    echo "Copied documentation for branch '$clean_branch_name' to '$version_dir'"
  else
    echo "The website/docs directory does not exist in branch '$clean_branch_name', skipping..."
  fi
done
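After the loop completes, the generated layout under `website/` would look roughly like this (hypothetical versions again):

```
website/
├── versions.json
├── versioned_sidebars/
│   ├── version-0.5-sidebars.json
│   └── version-0.6-sidebars.json
└── versioned_docs/
    ├── version-0.5/
    └── version-0.6/
```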

# Clean up the temporary directory
echo "Cleaning up temporary directory: $TEMP_DIR"

rm -rf "$TEMP_DIR"

echo "Build versioned docs completed!"
22 changes: 0 additions & 22 deletions website/deploy.sh

This file was deleted.

8 changes: 4 additions & 4 deletions website/docs/engine-flink/ddl.md
@@ -118,7 +118,7 @@ CREATE TABLE my_part_log_table (
) PARTITIONED BY (dt);
```
:::note
-After the Partitioned (PrimaryKey/Log) Table is created, you need first manually create the corresponding partition using the [Add Partition](/docs/engine-flink/ddl.md#add-partition) statement
+After the Partitioned (PrimaryKey/Log) Table is created, you need first manually create the corresponding partition using the [Add Partition](engine-flink/ddl.md#add-partition) statement
before you write/read data into this partition.
:::
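As an illustration of the statement the note refers to — a minimal sketch, assuming Flink SQL's standard `ALTER TABLE ... ADD PARTITION` syntax and the `dt` partition key from the example above:

```sql
-- Hypothetical example; syntax assumed from standard Flink SQL partition DDL
ALTER TABLE my_part_log_table ADD PARTITION (dt = '2025-03-05');
```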

@@ -157,7 +157,7 @@ CREATE TABLE my_auto_part_log_table (
);
```

-For more details about Auto Partitioned (PrimaryKey/Log) Table, refer to [Auto Partitioning Options](/docs/table-design/data-distribution/partitioning/#auto-partitioning-options).
+For more details about Auto Partitioned (PrimaryKey/Log) Table, refer to [Auto Partitioning Options](table-design/data-distribution/partitioning.md#auto-partitioning-options).

### Options

@@ -167,8 +167,8 @@ The supported option in "with" parameters when creating a table are as follows:
|------------------------------------|----------|----------|-------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| bucket.num | int | optional | The bucket number of Fluss cluster. | The number of buckets of a Fluss table. |
| bucket.key | String | optional | (none) | Specific the distribution policy of the Fluss table. Data will be distributed to each bucket according to the hash value of bucket-key. If you specify multiple fields, delimiter is ','. If the table is with primary key, you can't specific bucket key currently. The bucket keys will always be the primary key(excluding partition key). If the table is not with primary key, you can specific bucket key, and when the bucket key is not specified, the data will be distributed to each bucket randomly. |
-| table.* | | | | All the [`table.` prefix configuration](/docs/maintenance/configuration.md) are supported to be defined in "with" options. |
-| client.* | | | | All the [`client.` prefix configuration](/docs/maintenance/configuration.md) are supported to be defined in "with" options. |
+| table.* | | | | All the [`table.` prefix configuration](maintenance/configuration.md) are supported to be defined in "with" options. |
+| client.* | | | | All the [`client.` prefix configuration](maintenance/configuration.md) are supported to be defined in "with" options. |

## Create Table Like

10 changes: 5 additions & 5 deletions website/docs/engine-flink/getting-started.md
@@ -5,13 +5,13 @@ sidebar_position: 1

# Getting Started with Flink Engine
## Quick Start
-For a quick introduction to running Flink, refer to the [Quick Start](/docs/quickstart/flink.md) guide.
+For a quick introduction to running Flink, refer to the [Quick Start](quickstart/flink.md) guide.


## Support Flink Versions
| Fluss Connector Versions | Supported Flink Versions |
|--------------------------|--------------------------|
-| 0.5 | 1.18, 1.19, 1.20 |
+| $FLUSS_VERSION_SHORT$ | 1.18, 1.19, 1.20 |


## Feature Support
@@ -43,10 +43,10 @@ tar -xzf flink-1.20.1-bin-scala_2.12.tgz
Download [Fluss connector jar](/downloads#fluss-connector) and copy to the lib directory of your Flink home.

```shell
-cp fluss-connector-flink-<fluss-version>.jar <FLINK_HOME>/lib/
+cp fluss-connector-flink-$FLUSS_VERSION$.jar <FLINK_HOME>/lib/
```
:::note
-If you use [Amazon S3](http://aws.amazon.com/s3/), [Aliyun OSS](https://www.aliyun.com/product/oss) or [HDFS(Hadoop Distributed File System)](https://hadoop.apache.org/docs/stable/) as Fluss's [remote storage](/docs/maintenance/tiered-storage/remote-storage),
+If you use [Amazon S3](http://aws.amazon.com/s3/), [Aliyun OSS](https://www.aliyun.com/product/oss) or [HDFS(Hadoop Distributed File System)](https://hadoop.apache.org/docs/stable/) as Fluss's [remote storage](maintenance/tiered-storage/remote-storage.md),
you should download the corresponding [Fluss filesystem jar](/downloads#filesystem-jars) and also copy it to the lib directory of your Flink home.
:::

@@ -79,7 +79,7 @@ CREATE CATALOG fluss_catalog WITH (

:::note
1. The `bootstrap.servers` means the Fluss server address. Before you config the `bootstrap.servers`,
-you should start the Fluss server first. See [Deploying Fluss](/docs/install-deploy/overview/#how-to-deploy-fluss)
+you should start the Fluss server first. See [Deploying Fluss](install-deploy/overview.md#how-to-deploy-fluss)
for how to build a Fluss cluster.
Here, it is assumed that there is a Fluss cluster running on your local machine and the CoordinatorServer port is 9123.
2. The` bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set with one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
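For instance, the second point above means a catalog may list several servers — a sketch with placeholder hostnames, assuming the `'type' = 'fluss'` option used by the catalog examples elsewhere in the docs:

```sql
-- Hypothetical multi-address catalog (hostnames are placeholders)
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'fluss-server-1:9123,fluss-server-2:9123,fluss-server-3:9123'
);
```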
4 changes: 2 additions & 2 deletions website/docs/engine-flink/lookups.md
@@ -122,7 +122,7 @@ FOR SYSTEM_TIME AS OF `o`.`ptime` AS `c`
ON `o`.`o_custkey` = `c`.`c_custkey` AND `o`.`o_dt` = `c`.`dt`;
```

-For more details about Fluss partitioned table, see [Partitioned Tables](/docs/table-design/data-distribution/partitioning.md).
+For more details about Fluss partitioned table, see [Partitioned Tables](table-design/data-distribution/partitioning.md).

### Lookup Options

@@ -266,4 +266,4 @@ ON `o`.`o_custkey` = `c`.`c_custkey` AND `o`.`o_dt` = `c`.`dt`;
-- join key is a prefix set of dimension table primary keys (excluding partition key) + partition key.
```

-For more details about Fluss partitioned table, see [Partitioned Tables](/docs/table-design/data-distribution/partitioning.md).
+For more details about Fluss partitioned table, see [Partitioned Tables](table-design/data-distribution/partitioning.md).
10 changes: 5 additions & 5 deletions website/docs/install-deploy/deploying-distributed-cluster.md
@@ -47,8 +47,8 @@ Node1 will deploy the CoordinatorServer and one TabletServer, Node2 and Node3 wi
Go to the [downloads page](/downloads) and download the latest Fluss release. After downloading the latest release, copy the archive to all the nodes and extract it:

```shell
-tar -xzf fluss-<fluss-version>-bin.tgz
-cd fluss-<fluss-version>/
+tar -xzf fluss-$FLUSS_VERSION$-bin.tgz
+cd fluss-$FLUSS_VERSION$/
```

### Configuring Fluss
@@ -86,7 +86,7 @@ tablet-server.id: 3

:::note
- `tablet-server.id` is the unique id of the TabletServer, if you have multiple TabletServers, you should set different id for each TabletServer.
-- In this example, we only set the properties that must be configured, and for some other properties, you can refer to [Configuration](/docs/maintenance/configuration/) for more details.
+- In this example, we only set the properties that must be configured, and for some other properties, you can refer to [Configuration](maintenance/configuration.md) for more details.
:::
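Concretely, for the three-node layout described in this section, only `tablet-server.id` needs to differ across the TabletServer configurations — a sketch, assuming the per-node `server.yaml` shown above (the node-to-id mapping is illustrative):

```yaml
# Node1's server.yaml (TabletServer)
tablet-server.id: 1
# Node2's server.yaml (TabletServer)
tablet-server.id: 2
# Node3's server.yaml (TabletServer)
tablet-server.id: 3
```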

### Starting Fluss
@@ -121,7 +121,7 @@ Using Flink SQL Client to interact with Fluss.

#### Preparation

-You can start a Flink standalone cluster refer to [Flink Environment Preparation](/docs/engine-flink/getting-started#preparation-when-using-flink-sql-client)
+You can start a Flink standalone cluster refer to [Flink Environment Preparation](engine-flink/getting-started.md#preparation-when-using-flink-sql-client)

**Note**: Make sure the [Fluss connector jar](/downloads/) already has copied to the `lib` directory of your Flink home.

Expand All @@ -138,4 +138,4 @@ CREATE CATALOG fluss_catalog WITH (
#### Do more with Fluss

After the catalog is created, you can use Flink SQL Client to do more with Fluss, for example, create a table, insert data, query data, etc.
More details please refer to [Flink Getting Started](/docs/engine-flink/getting-started/).
More details please refer to [Flink Getting Started](engine-flink/getting-started.md).