
Commit 03731f5

[doc] fix typos, formatting and consistency in architecture.md (#1390)
1 parent c80d149 · commit 03731f5

File tree: 1 file changed (+3 -3 lines)

website/docs/concepts/architecture.md

Lines changed: 3 additions & 3 deletions
@@ -34,12 +34,12 @@ Additionally, it coordinates critical operations such as:
 - Managing data migration and service node switching in the event of node failures.
 - Overseeing table management tasks, including creating or deleting tables and updating bucket counts.
 
-As the **brain** of the cluster, the **Coordinator Server** ensures efficient cluster operation and seamless management of resources.
+As the **brain** of the cluster, the **CoordinatorServer** ensures efficient cluster operation and seamless management of resources.
 
 ## TabletServer
 The **TabletServer** is responsible for data storage, persistence, and providing I/O services directly to users. It comprises two key components: **LogStore** and **KvStore**.
 - For **PrimaryKey Tables** which support updates, both **LogStore** and **KvStore** are activated. The KvStore is used to support updates and point lookup efficiently. LogStore is used to store the changelogs of the table.
-- For **Log Tables** which only supports appends, only the **LogStore** is activated, optimizing performance for write-heavy workloads.
+- For **Log Tables** which only support appends, only the **LogStore** is activated, optimizing performance for write-heavy workloads.
 
 This architecture ensures the **TabletServer** delivers tailored data handling capabilities based on table types.
 
@@ -73,4 +73,4 @@ In upcoming releases, **ZooKeeper will be replaced** by **KvStore** for metadata
 Additionally, **Remote Storage** allows clients to perform bulk read operations on Log and Kv data, enhancing data analysis efficiency and reduce the overhead on Fluss servers. In the future, it will also support bulk write operations, optimizing data import workflows for greater scalability and performance.
 
 ## Client
-Fluss clients/sdks support streaming reads/writes, batch read/writes, DDL and point queries. Currently, the main implementation of client is Flink Connector. Users can use Flink SQL to easily operate Fluss tables and data.
+Fluss clients/SDKs support streaming reads/writes, batch reads/writes, DDL and point queries. Currently, the main implementation of client is Flink Connector. Users can use Flink SQL to easily operate Fluss tables and data.
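The first hunk above touches the **TabletServer** passage, which distinguishes PrimaryKey Tables (KvStore plus LogStore) from Log Tables (LogStore only). As a quick illustration of that distinction, a minimal Flink SQL sketch, assuming the tables are created inside a Fluss catalog (set up in the next sketch); table and column names are illustrative, not taken from the docs:

```sql
-- PrimaryKey Table: declares a primary key, so both KvStore and LogStore
-- are used (updates and efficient point lookups are supported).
CREATE TABLE orders (
  order_id BIGINT,
  item     STRING,
  amount   INT,
  PRIMARY KEY (order_id) NOT ENFORCED
);

-- Log Table: no primary key, append-only, so only the LogStore is used.
CREATE TABLE order_events (
  order_id   BIGINT,
  event_type STRING,
  event_time TIMESTAMP(3)
);
```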
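The second hunk touches the **Client** passage, which notes that Flink SQL is currently the main way to operate Fluss tables. A hedged sketch of that flow, reusing the illustrative tables above; the catalog options follow the Fluss quickstart, and the bootstrap address is an assumption for a local setup, so verify both against the version in use:

```sql
-- Register a Fluss catalog so Flink SQL can reach the cluster
-- (address is an assumption for a local deployment).
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'localhost:9123'
);
USE CATALOG fluss_catalog;

-- Point query on the primary key of the illustrative orders table
-- (the kind of lookup the KvStore is described as serving).
SELECT * FROM orders WHERE order_id = 1001;

-- Streaming read of the append-only order_events table (backed by the LogStore).
SELECT * FROM order_events;

-- Append a row to the Log Table.
INSERT INTO order_events VALUES (1001, 'created', TIMESTAMP '2025-01-01 12:00:00');
```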
