---
page_title: "airbyte_destination_s3_data_lake Resource - terraform-provider-airbyte"
subcategory: ""
description: |-
  DestinationS3DataLake Resource
---

# airbyte_destination_s3_data_lake (Resource)

DestinationS3DataLake Resource
## Example Usage

```terraform
resource "airbyte_destination_s3_data_lake" "my_destination_s3datalake" {
  configuration = {
    access_key_id = "...my_access_key_id..."
    catalog_type = {
      nessie_catalog = {
        access_token          = "a012345678910ABCDEFGH/AbCdEfGhEXAMPLEKEY"
        additional_properties = "{ \"see\": \"documentation\" }"
        catalog_type          = "NESSIE"
        namespace             = "...my_namespace..."
        server_uri            = "...my_server_uri..."
      }
    }
    main_branch_name   = "...my_main_branch_name..."
    s3_bucket_name     = "...my_s3_bucket_name..."
    s3_bucket_region   = "us-east-1"
    s3_endpoint        = "...my_s3_endpoint..."
    secret_access_key  = "...my_secret_access_key..."
    warehouse_location = "s3://your-bucket/path/to/store/files/in"
  }
  definition_id = "9e400343-02b2-4662-a6e6-e0fa14a75ce6"
  name          = "...my_name..."
  workspace_id  = "a727b820-bb79-42a8-8bb5-a8f3c4e0696b"
}
```
## Schema

### Required

- `configuration` (Attributes) Defines the configurations required to connect to an Iceberg catalog, including warehouse location, main branch name, and catalog type specifics. (see below for nested schema)
- `name` (String) Name of the destination, e.g. dev-mysql-instance.
- `workspace_id` (String)

### Optional

- `definition_id` (String) The UUID of the connector definition. One of `configuration.destinationType` or `definitionId` must be provided. Requires replacement if changed.

### Read-Only

- `created_at` (Number)
- `destination_id` (String)
- `destination_type` (String)
- `resource_allocation` (Attributes) Actor or actor definition specific resource requirements. If `default` is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level. (see below for nested schema)
### Nested Schema for `configuration`

Required:

- `catalog_type` (Attributes) Specifies the type of Iceberg catalog (e.g., NESSIE, GLUE, REST) and its associated configuration. (see below for nested schema)
- `s3_bucket_name` (String) The name of the S3 bucket that will host the Iceberg data.
- `s3_bucket_region` (String) The region of the S3 bucket. See here for all region codes. Must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
- `warehouse_location` (String) The root location of the data warehouse used by the Iceberg catalog. Typically includes a bucket name and path within that bucket. For AWS Glue and Nessie, must include the storage protocol (such as "s3://" for Amazon S3).

Optional:

- `access_key_id` (String, Sensitive) The AWS Access Key ID with permissions for S3 and Glue operations.
- `main_branch_name` (String) The primary or default branch name in the catalog. Most query engines will use "main" by default. See Iceberg documentation for more information. Default: "main"
- `s3_endpoint` (String) Your S3 endpoint URL. Read more here.
- `secret_access_key` (String, Sensitive) The AWS Secret Access Key paired with the Access Key ID for AWS authentication.

### Nested Schema for `configuration.catalog_type`

Optional:

- `glue_catalog` (Attributes) Configuration details for connecting to an AWS Glue-based Iceberg catalog. (see below for nested schema)
- `nessie_catalog` (Attributes) Configuration details for connecting to a Nessie-based Iceberg catalog. (see below for nested schema)
- `rest_catalog` (Attributes) Configuration details for connecting to a REST catalog. (see below for nested schema)
### Nested Schema for `configuration.catalog_type.glue_catalog`

Required:

- `database_name` (String) The Glue database name. This will ONLY be used if the Destination Namespace setting for the connection is set to Destination-defined or Source-defined.
- `glue_id` (String) The AWS Account ID associated with the Glue service used by the Iceberg catalog.

Optional:

- `additional_properties` (String) Parsed as JSON.
- `catalog_type` (String) Default: "GLUE"; must be "GLUE"
- `role_arn` (String) The ARN of the AWS role to assume. Only usable in Airbyte Cloud.
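As a sketch, a `catalog_type` block selecting the Glue variant of the attributes above might look like the following (the database name, account ID, and role ARN are placeholder values, not real identifiers):

```terraform
catalog_type = {
  glue_catalog = {
    catalog_type  = "GLUE"                                          # fixed value for this variant
    database_name = "my_iceberg_database"                           # placeholder Glue database
    glue_id       = "111122223333"                                  # placeholder AWS Account ID
    role_arn      = "arn:aws:iam::111122223333:role/airbyte-glue"   # placeholder; Airbyte Cloud only
  }
}
```

Because `role_arn` is only usable in Airbyte Cloud, self-managed deployments would omit it and rely on the `access_key_id`/`secret_access_key` pair set on `configuration` instead.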
### Nested Schema for `configuration.catalog_type.nessie_catalog`

Required:

- `namespace` (String) The Nessie namespace to be used in the Table identifier. This will ONLY be used if the Destination Namespace setting for the connection is set to Destination-defined or Source-defined.
- `server_uri` (String) The base URL of the Nessie server used to connect to the Nessie catalog.

Optional:

- `access_token` (String, Sensitive) Optional token for authentication with the Nessie server.
- `additional_properties` (String) Parsed as JSON.
- `catalog_type` (String) Default: "NESSIE"; must be "NESSIE"
### Nested Schema for `configuration.catalog_type.rest_catalog`

Required:

- `namespace` (String) The namespace to be used in the Table identifier. This will ONLY be used if the Destination Namespace setting for the connection is set to Destination-defined or Source-defined.
- `server_uri` (String) The base URL of the REST server used to connect to the REST catalog.

Optional:

- `additional_properties` (String) Parsed as JSON.
- `catalog_type` (String) Default: "REST"; must be "REST"
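For comparison with the Nessie example at the top of this page, a REST-catalog variant of the `catalog_type` block might be sketched as follows (the namespace and server URI are placeholder values):

```terraform
catalog_type = {
  rest_catalog = {
    catalog_type = "REST"                     # fixed value for this variant
    namespace    = "my_namespace"             # placeholder namespace
    server_uri   = "https://catalog.example.com"  # placeholder REST catalog endpoint
  }
}
```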
### Nested Schema for `resource_allocation`

Read-Only:

- `default` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations) (see below for nested schema)
- `job_specific` (Attributes List) (see below for nested schema)

### Nested Schema for `resource_allocation.default`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
### Nested Schema for `resource_allocation.job_specific`

Read-Only:

- `job_type` (String) Enum that describes the different types of jobs that the platform runs. Must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- `resource_requirements` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations) (see below for nested schema)

### Nested Schema for `resource_allocation.job_specific.resource_requirements`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
## Import

Import is supported using the following syntax:

```shell
terraform import airbyte_destination_s3_data_lake.my_airbyte_destination_s3_data_lake ""
```