
Commit 0a6dcf2

Bump provider version in examples to 3.0.0-alpha1
1 parent f993f91 commit 0a6dcf2

File tree

25 files changed (+542 / -18 lines)


Diff for: docs/resources/clickpipe.md

+266
@@ -0,0 +1,266 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "clickhouse_clickpipe Resource - clickhouse"
subcategory: ""
description: |-
  This experimental resource allows you to create and manage ClickPipes data ingestion in ClickHouse Cloud.
  This resource is in early access and may change in future releases. Feature coverage might not fully cover all ClickPipe capabilities.
  Known limitations:
  ClickPipe does not support table updates for managed tables. If you need to update the table schema, you will have to do that externally.
---

# clickhouse_clickpipe (Resource)

This experimental resource allows you to create and manage ClickPipes data ingestion in ClickHouse Cloud.

**This resource is in early access and may change in future releases. Feature coverage might not fully cover all ClickPipe capabilities.**

Known limitations:

- ClickPipe does not support table updates for managed tables. If you need to update the table schema, you will have to do that externally.
## Example Usage

```terraform
resource "clickhouse_clickpipe" "kafka_clickpipe" {
  name        = "My Kafka ClickPipe"
  description = "Data pipeline from Kafka to ClickHouse"

  service_id = "e9465b4b-f7e5-4937-8e21-8d508b02843d"

  scaling {
    replicas = 1
  }

  state = "Running"

  source {
    kafka {
      type    = "confluent"
      format  = "JSONEachRow"
      brokers = "my-kafka-broker:9092"
      topics  = "my_topic"

      consumer_group = "clickpipe-test"

      credentials {
        username = "user"
        password = "***"
      }
    }
  }

  destination {
    table         = "my_table"
    managed_table = true

    table_definition {
      engine {
        type = "MergeTree"
      }
    }

    columns {
      name = "my_field1"
      type = "String"
    }

    columns {
      name = "my_field2"
      type = "UInt64"
    }
  }

  field_mappings = [
    {
      source_field      = "my_field"
      destination_field = "my_field1"
    }
  ]
}
```
81+
82+
<!-- schema generated by tfplugindocs -->
83+
## Schema
84+
85+
### Required
86+
87+
- `destination` (Attributes) The destination for the ClickPipe. (see [below for nested schema](#nestedatt--destination))
88+
- `name` (String) The name of the ClickPipe.
89+
- `service_id` (String) The ID of the service to which the ClickPipe belongs.
90+
- `source` (Attributes) The data source for the ClickPipe. At least one source configuration must be provided. (see [below for nested schema](#nestedatt--source))
91+
92+
### Optional
93+
94+
- `description` (String) The description of the ClickPipe.
95+
- `field_mappings` (Attributes List) Field mapping between source and destination table. (see [below for nested schema](#nestedatt--field_mappings))
96+
- `scaling` (Attributes) (see [below for nested schema](#nestedatt--scaling))
97+
- `state` (String) The desired state of the ClickPipe. (`Running`, `Stopped`). Default is `Running`.
98+
99+
### Read-Only
100+
101+
- `id` (String) The ID of the ClickPipe. Generated by the ClickHouse Cloud.
102+
103+
<a id="nestedatt--destination"></a>
104+
### Nested Schema for `destination`
105+
106+
Required:
107+
108+
- `columns` (Attributes List) The list of columns for the ClickHouse table. (see [below for nested schema](#nestedatt--destination--columns))
109+
- `table` (String) The name of the ClickHouse table.
110+
111+
Optional:
112+
113+
- `database` (String) The name of the ClickHouse database. Default is `default`.
114+
- `managed_table` (Boolean) Whether the table is managed by ClickHouse Cloud. If `false`, the table must exist in the database. Default is `true`.
115+
- `roles` (List of String) ClickPipe will create a ClickHouse user with these roles. Add your custom roles here if required.
116+
- `table_definition` (Attributes) Definition of the destination table. Required for ClickPipes managed tables. (see [below for nested schema](#nestedatt--destination--table_definition))
117+
118+
<a id="nestedatt--destination--columns"></a>
119+
### Nested Schema for `destination.columns`
120+
121+
Required:
122+
123+
- `name` (String) The name of the column.
124+
- `type` (String) The type of the column.
125+
126+
127+
<a id="nestedatt--destination--table_definition"></a>
128+
### Nested Schema for `destination.table_definition`
129+
130+
Required:
131+
132+
- `engine` (Attributes) The engine of the ClickHouse table. (see [below for nested schema](#nestedatt--destination--table_definition--engine))
133+
134+
Optional:
135+
136+
- `partition_by` (String) The column to partition the table by.
137+
- `primary_key` (String) The primary key of the table.
138+
- `sorting_key` (List of String) The list of columns for the sorting key.
139+
140+
<a id="nestedatt--destination--table_definition--engine"></a>
141+
### Nested Schema for `destination.table_definition.engine`
142+
143+
Required:
144+
145+
- `type` (String) The type of the engine. Only `MergeTree` is supported.
146+
147+
148+
149+
150+
<a id="nestedatt--source"></a>
### Nested Schema for `source`

Optional:

- `kafka` (Attributes) The Kafka source configuration for the ClickPipe. (see [below for nested schema](#nestedatt--source--kafka))
- `object_storage` (Attributes) The object storage source configuration for the ClickPipe. (see [below for nested schema](#nestedatt--source--object_storage))

<a id="nestedatt--source--kafka"></a>
### Nested Schema for `source.kafka`

Required:

- `brokers` (String) The list of Kafka bootstrap brokers. (comma separated)
- `format` (String) The format of the Kafka source. (`JSONEachRow`, `Avro`, `AvroConfluent`)
- `topics` (String) The list of Kafka topics. (comma separated)

Optional:

- `authentication` (String) The authentication method for the Kafka source. (`PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`, `IAM_ROLE`, `IAM_USER`). Default is `PLAIN`.
- `ca_certificate` (String) PEM encoded CA certificates to validate the broker's certificate.
- `consumer_group` (String) Consumer group of the Kafka source. If not provided, `clickpipes-<ID>` will be used.
- `credentials` (Attributes) The credentials for the Kafka source. (see [below for nested schema](#nestedatt--source--kafka--credentials))
- `iam_role` (String) The IAM role for the Kafka source. Use with `IAM_ROLE` authentication. It can be used with AWS ClickHouse services only. Read more on the [ClickPipes documentation page](https://clickhouse.com/docs/en/integrations/clickpipes/kafka#iam).
- `offset` (Attributes) The Kafka offset. (see [below for nested schema](#nestedatt--source--kafka--offset))
- `reverse_private_endpoint_ids` (List of String) The list of reverse private endpoint IDs for the Kafka source.
- `schema_registry` (Attributes) The schema registry for the Kafka source. (see [below for nested schema](#nestedatt--source--kafka--schema_registry))
- `type` (String) The type of the Kafka source. (`kafka`, `redpanda`, `confluent`, `msk`, `warpstream`, `azureeventhub`). Default is `kafka`.

<a id="nestedatt--source--kafka--credentials"></a>
### Nested Schema for `source.kafka.credentials`

Optional:

- `access_key_id` (String, Sensitive) The access key ID for the Kafka source. Use with `IAM_USER` authentication.
- `connection_string` (String, Sensitive) The connection string for the Kafka source. Use with the `azureeventhub` Kafka source type and `PLAIN` authentication.
- `password` (String, Sensitive) The password for the Kafka source.
- `secret_key` (String, Sensitive) The secret key for the Kafka source. Use with `IAM_USER` authentication.
- `username` (String, Sensitive) The username for the Kafka source.

<a id="nestedatt--source--kafka--offset"></a>
### Nested Schema for `source.kafka.offset`

Required:

- `strategy` (String) The offset strategy for the Kafka source. (`from_beginning`, `from_latest`, `from_timestamp`)

Optional:

- `timestamp` (String) The timestamp for the Kafka offset. Use with the `from_timestamp` offset strategy. (format `2021-01-01T00:00`)
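As a sketch of how the `offset` attributes compose inside a `source.kafka` block (broker, topic, and service values here are placeholders, not from this page):

```terraform
source {
  kafka {
    type    = "kafka"
    format  = "JSONEachRow"
    brokers = "broker-1:9092,broker-2:9092" # comma separated
    topics  = "events"

    # Start consuming from a point in time instead of the beginning.
    offset {
      strategy  = "from_timestamp"
      timestamp = "2021-01-01T00:00"
    }
  }
}
```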
<a id="nestedatt--source--kafka--schema_registry"></a>
### Nested Schema for `source.kafka.schema_registry`

Required:

- `authentication` (String) The authentication method for the Schema Registry. Only `PLAIN` is supported.
- `credentials` (Attributes) The credentials for the Schema Registry. (see [below for nested schema](#nestedatt--source--kafka--schema_registry--credentials))
- `url` (String) The URL of the schema registry.

<a id="nestedatt--source--kafka--schema_registry--credentials"></a>
### Nested Schema for `source.kafka.schema_registry.credentials`

Required:

- `password` (String, Sensitive) The password for the Schema Registry.
- `username` (String, Sensitive) The username for the Schema Registry.
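A `schema_registry` block is typically paired with the `AvroConfluent` format; a minimal sketch (the registry URL and credentials are placeholders):

```terraform
source {
  kafka {
    type    = "confluent"
    format  = "AvroConfluent"
    brokers = "my-kafka-broker:9092"
    topics  = "avro_topic"

    schema_registry {
      url            = "https://my-registry.example.com:8081"
      authentication = "PLAIN" # the only supported method

      credentials {
        username = "registry-user"
        password = "***"
      }
    }
  }
}
```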
<a id="nestedatt--source--object_storage"></a>
### Nested Schema for `source.object_storage`

Required:

- `format` (String) The format of the S3 objects. (`JSONEachRow`, `CSV`, `CSVWithNames`, `Parquet`)
- `url` (String) The URL of the S3 bucket. Provide a path to the file(s) you want to ingest. You can specify multiple files using bash-like wildcards. For more information, see the [documentation on using wildcards in path](https://clickhouse.com/docs/en/integrations/clickpipes/object-storage#limitations).

Optional:

- `access_key` (Attributes) Access key. (see [below for nested schema](#nestedatt--source--object_storage--access_key))
- `authentication` (String) Authentication method. (`IAM_ROLE`, `IAM_USER`). If not provided, no authentication is used; this can be used to access public buckets.
- `compression` (String) Compression algorithm used for the files. (`auto`, `gzip`, `brotli`, `br`, `xz`, `LZMA`, `zstd`)
- `delimiter` (String) The delimiter for the S3 source. Default is `,`.
- `iam_role` (String) The IAM role for the S3 source. Use with `IAM_ROLE` authentication. It can be used with AWS ClickHouse services only. Read more on the [ClickPipes documentation page](https://clickhouse.com/docs/en/integrations/clickpipes/object-storage#authentication).
- `is_continuous` (Boolean) If set to `true`, the pipe will continuously read new files from the source. If set to `false`, the pipe will read the files only once. New files have to be uploaded in lexical order.
- `type` (String) The type of the S3-compatible source. (`s3`, `gcs`). Default is `s3`.

<a id="nestedatt--source--object_storage--access_key"></a>
### Nested Schema for `source.object_storage.access_key`

Optional:

- `access_key_id` (String, Sensitive) The access key ID for the S3 source. Use with `IAM_USER` authentication.
- `secret_key` (String, Sensitive) The secret key for the S3 source. Use with `IAM_USER` authentication.
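The example usage on this page only covers Kafka; assuming the same resource shape, an `object_storage` source might look like this (the bucket URL, keys, and service ID are placeholders):

```terraform
resource "clickhouse_clickpipe" "s3_clickpipe" {
  name       = "My S3 ClickPipe"
  service_id = "e9465b4b-f7e5-4937-8e21-8d508b02843d"

  source {
    object_storage {
      type   = "s3"
      format = "JSONEachRow"

      # Bash-like wildcards select multiple files.
      url = "https://my-bucket.s3.eu-west-1.amazonaws.com/data/*.json"

      authentication = "IAM_USER"
      access_key {
        access_key_id = "AKIA..."
        secret_key    = "***"
      }

      # Keep watching the bucket for new (lexically ordered) files.
      is_continuous = true
    }
  }

  destination {
    table = "my_table"

    table_definition {
      engine {
        type = "MergeTree"
      }
    }

    columns {
      name = "my_field1"
      type = "String"
    }
  }
}
```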
<a id="nestedatt--field_mappings"></a>
### Nested Schema for `field_mappings`

Required:

- `destination_field` (String) The name of the column in the destination table.
- `source_field` (String) The name of the source field.

<a id="nestedatt--scaling"></a>
### Nested Schema for `scaling`

Optional:

- `replicas` (Number) The number of desired replicas for the ClickPipe. Default is `1`. The maximum value is `10`.
+66
@@ -0,0 +1,66 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "clickhouse_clickpipes_reverse_private_endpoint Resource - clickhouse"
subcategory: ""
description: |-
  This experimental resource allows you to create and manage ClickPipes reverse private endpoints for secure data source connections in ClickHouse Cloud.
  This resource is in early access and may change in future releases. Feature coverage might not fully cover all ClickPipe capabilities.
---

# clickhouse_clickpipes_reverse_private_endpoint (Resource)

This experimental resource allows you to create and manage ClickPipes reverse private endpoints for secure data source connections in ClickHouse Cloud.

**This resource is in early access and may change in future releases. Feature coverage might not fully cover all ClickPipe capabilities.**

## Example Usage

```terraform
resource "clickhouse_clickpipes_reverse_private_endpoint" "vpc_endpoint_service" {
  service_id                = "3a10a385-ced2-452e-abb8-908c80976a8f"
  description               = "VPC_ENDPOINT_SERVICE reverse private endpoint for ClickPipes"
  type                      = "VPC_ENDPOINT_SERVICE"
  vpc_endpoint_service_name = "com.amazonaws.vpce.eu-west-1.vpce-svc-080826a65b5b27d4e"
}

resource "clickhouse_clickpipes_reverse_private_endpoint" "vpc_resource" {
  service_id                    = "3a10a385-ced2-452e-abb8-908c80976a8f"
  description                   = "VPC_RESOURCE reverse private endpoint for ClickPipes"
  type                          = "VPC_RESOURCE"
  vpc_resource_configuration_id = "rcfg-1a2b3c4d5e6f7g8h9"
  vpc_resource_share_arn        = "arn:aws:ram:us-east-1:123456789012:resource-share/1a2b3c4d-5e6f-7g8h-9i0j-k1l2m3n4o5p6"
}

resource "clickhouse_clickpipes_reverse_private_endpoint" "msk_multi_vpc" {
  service_id         = "3a10a385-ced2-452e-abb8-908c80976a8f"
  description        = "MSK_MULTI_VPC reverse private endpoint for ClickPipes"
  type               = "MSK_MULTI_VPC"
  msk_cluster_arn    = "arn:aws:kafka:us-east-1:123456789012:cluster/ClickHouse-Cluster/1a2b3c4d-5e6f-7g8h-9i0j-k1l2m3n4o5p6-1"
  msk_authentication = "SASL_IAM"
}
```
42+
43+
<!-- schema generated by tfplugindocs -->
44+
## Schema
45+
46+
### Required
47+
48+
- `description` (String) Description of the reverse private endpoint
49+
- `service_id` (String) The ID of the ClickHouse service to associate with this reverse private endpoint
50+
- `type` (String) Type of the reverse private endpoint (VPC_ENDPOINT_SERVICE, VPC_RESOURCE, or MSK_MULTI_VPC)
51+
52+
### Optional
53+
54+
- `msk_authentication` (String) MSK cluster authentication type (SASL_IAM or SASL_SCRAM), required for MSK_MULTI_VPC type
55+
- `msk_cluster_arn` (String) MSK cluster ARN, required for MSK_MULTI_VPC type
56+
- `vpc_endpoint_service_name` (String) VPC endpoint service name, required for VPC_ENDPOINT_SERVICE type
57+
- `vpc_resource_configuration_id` (String) VPC resource configuration ID, required for VPC_RESOURCE type
58+
- `vpc_resource_share_arn` (String) VPC resource share ARN, required for VPC_RESOURCE type
59+
60+
### Read-Only
61+
62+
- `dns_names` (List of String) Reverse private endpoint internal DNS names
63+
- `endpoint_id` (String) Reverse private endpoint endpoint ID
64+
- `id` (String) Unique identifier for the reverse private endpoint
65+
- `private_dns_names` (List of String) Reverse private endpoint private DNS names
66+
- `status` (String) Status of the reverse private endpoint

Diff for: docs/resources/database.md

+36
@@ -0,0 +1,36 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "clickhouse_database Resource - clickhouse"
subcategory: ""
description: |-
  Use the clickhouse_database resource to create a database in a ClickHouse cloud service.
  Attention: in order to use the clickhouse_database resource, you need to set the query_api_endpoint attribute in the clickhouse_service.
  Please check the full example https://github.com/ClickHouse/terraform-provider-clickhouse/blob/main/examples/database/main.tf.
  Known limitations:
  Changing the comment on a database resource is unsupported and will cause the database to be destroyed and recreated. WARNING: you will lose any content of the database if you do so!
---

# clickhouse_database (Resource)

Use the `clickhouse_database` resource to create a database in a ClickHouse Cloud service.

Attention: in order to use the `clickhouse_database` resource, you need to set the `query_api_endpoint` attribute in the `clickhouse_service`.
Please check the [full example](https://github.com/ClickHouse/terraform-provider-clickhouse/blob/main/examples/database/main.tf).

Known limitations:

- Changing the comment on a `database` resource is unsupported and will cause the database to be destroyed and recreated. WARNING: you will lose any content of the database if you do so!

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `name` (String) Name of the database
- `service_id` (String) ClickHouse Service ID

### Optional

- `comment` (String) Comment associated with the database
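This page defers to an external example; a minimal sketch based on the schema above (the `clickhouse_service.main` reference is a placeholder for your own service resource):

```terraform
resource "clickhouse_database" "logs" {
  # Requires query_api_endpoint to be set on the referenced service.
  service_id = clickhouse_service.main.id
  name       = "logs"

  # Note: changing the comment later destroys and recreates the database.
  comment = "Application logs"
}
```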
