
Clickhouse-Kafka-Connect has low throughput #487

Open
@amolsr

Description


Is your feature request related to a problem? Please describe.
We need to build a dimension table for a ClickHouse dataset that undergoes frequent updates. However, the current throughput is limited to roughly 300-500 messages per second, so each insert carries far fewer rows than ClickHouse's recommended batch size of 10,000 to 100,000 rows per insert. This low throughput is a bottleneck for keeping the table up to date efficiently.
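
For context, this is a minimal sketch of the kind of client-side buffering that would close the gap: accumulate incoming messages and only issue one insert once the batch reaches the recommended size or a time limit expires. The `Row` stand-in, thresholds, and `flushBatch` call are hypothetical placeholders, not the connector's actual internals:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Buffer rows until a size or age threshold is hit, then hand the whole
// batch to a single insert instead of inserting message by message.
public class BatchBuffer {
    private static final int MAX_ROWS = 10_000;               // lower end of the recommended batch size
    private static final Duration MAX_AGE = Duration.ofSeconds(5);

    private final List<String[]> buffer = new ArrayList<>();  // Row stand-in: one String[] per row
    private Instant firstRowAt = null;

    // Called once per incoming message.
    public void add(String[] row) {
        if (buffer.isEmpty()) {
            firstRowAt = Instant.now();
        }
        buffer.add(row);
        boolean full = buffer.size() >= MAX_ROWS;
        boolean stale = Duration.between(firstRowAt, Instant.now()).compareTo(MAX_AGE) >= 0;
        if (full || stale) {
            flush();
        }
    }

    private void flush() {
        flushBatch(new ArrayList<>(buffer)); // hypothetical sink call: one batched INSERT
        buffer.clear();
        firstRowAt = null;
    }

    private void flushBatch(List<String[]> rows) {
        // Placeholder: a real connector would issue one INSERT with all rows here.
        System.out.printf("flushing %d rows in one insert%n", rows.size());
    }
}
```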

Describe the solution you'd like
Develop a native TCP/IP-based connector optimized for high-throughput ingestion into ClickHouse. The connector should use ClickHouse's native protocol to batch and insert large volumes of data efficiently, so inserts reach the recommended batch sizes and overall performance improves significantly. A rough sketch of what such a connector could do internally is shown below.
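
As an illustration only, here is a sketch of a batched insert over ClickHouse's native TCP port (9000) through standard JDBC batching. It assumes a native-protocol JDBC driver (for example the community clickhouse-native-jdbc driver) is on the classpath; the table, columns, connection URL, and `Row` type are assumptions, not part of this issue:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

// Batched insert over the native TCP port via JDBC addBatch/executeBatch.
public class NativeBatchInsert {
    public record Row(long id, String name, long updatedAt) {}

    public static void insertBatch(List<Row> rows) throws Exception {
        // Native protocol port 9000 (not the HTTP port 8123); URL format assumed
        // from a native-protocol JDBC driver such as clickhouse-native-jdbc.
        String url = "jdbc:clickhouse://localhost:9000/default";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO dimension_table (id, name, updated_at) VALUES (?, ?, ?)")) {
            for (Row r : rows) {
                ps.setLong(1, r.id());
                ps.setString(2, r.name());
                ps.setLong(3, r.updatedAt());
                ps.addBatch();             // accumulate rows client-side
            }
            ps.executeBatch();             // one insert carrying the whole batch
        }
    }
}
```

Flushing 10,000 to 100,000 rows per `executeBatch()` call keeps the number of parts ClickHouse has to merge small, in contrast to hundreds of tiny inserts per second.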
