---
page_title: "airbyte_destination_s3 Resource - terraform-provider-airbyte"
subcategory: ""
description: |-
  DestinationS3 Resource
---

# airbyte_destination_s3 (Resource)

DestinationS3 Resource
## Example Usage

```terraform
resource "airbyte_destination_s3" "my_destination_s3" {
  configuration = {
    access_key_id     = "A012345678910EXAMPLE"
    file_name_pattern = "{date}"
    format = {
      avro_apache_avro = {
        additional_properties = "{ \"see\": \"documentation\" }"
        compression_codec = {
          bzip2 = {
            additional_properties = "{ \"see\": \"documentation\" }"
            codec                 = "bzip2"
          }
          no_compression = {
            additional_properties = "{ \"see\": \"documentation\" }"
            codec                 = "no compression"
          }
          snappy = {
            additional_properties = "{ \"see\": \"documentation\" }"
            codec                 = "snappy"
          }
        }
        format_type = "Avro"
      }
    }
    role_arn         = "arn:aws:iam::123456789:role/ExternalIdIsYourWorkspaceId"
    s3_bucket_name   = "airbyte_sync"
    s3_bucket_path   = "data_sync/test"
    s3_bucket_region = "us-east-1"
    s3_endpoint      = "http://localhost:9000"
    # "$$" escapes Terraform interpolation so the connector receives the
    # literal "${...}" placeholders.
    s3_path_format    = "$${NAMESPACE}/$${STREAM_NAME}/$${YEAR}_$${MONTH}_$${DAY}_$${EPOCH}_"
    secret_access_key = "a012345678910ABCDEFGH/AbCdEfGhEXAMPLEKEY"
  }
  definition_id = "78e0a8ec-be25-40bf-b8ba-093bfe7a6f05"
  name          = "...my_name..."
  workspace_id  = "9842b6c1-e43f-4d6f-90dd-f293538933f0"
}
```
## Schema

### Required

- `configuration` (Attributes) (see below for nested schema)
- `name` (String) Name of the destination, e.g. `dev-mysql-instance`.
- `workspace_id` (String)

### Optional

- `definition_id` (String) The UUID of the connector definition. One of `configuration.destinationType` or `definitionId` must be provided. Requires replacement if changed.

### Read-Only

- `created_at` (Number)
- `destination_id` (String)
- `destination_type` (String)
- `resource_allocation` (Attributes) Actor or actor-definition specific resource requirements. If `default` is set, these are the requirements applied to ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform uses its defaults. These values are in turn overridden by configuration at the connection level. (see below for nested schema)
### Nested Schema for `configuration`

Required:

- `format` (Attributes) Format of the data output. See here for more details. (see below for nested schema)
- `s3_bucket_name` (String) The name of the S3 bucket. Read more here.
- `s3_bucket_path` (String) Directory under the S3 bucket where data will be written. Read more here.
Optional:

- `access_key_id` (String, Sensitive) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
- `file_name_pattern` (String) Pattern to match file names in the bucket directory. Read more here.
- `role_arn` (String) The ARN of the AWS role to assume. Only usable in Airbyte Cloud.
- `s3_bucket_region` (String) The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
- `s3_endpoint` (String) Your S3 endpoint URL. Read more here.
- `s3_path_format` (String) Format string controlling how data is organized inside the bucket directory. Read more here.
- `secret_access_key` (String, Sensitive) The corresponding secret to the access key ID. Read more here.
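The `s3_path_format` placeholders are expanded by the connector at write time, not by Terraform, so in HCL they should be escaped with `$$` to keep Terraform's own interpolation from consuming them. A minimal sketch (bucket and path values are illustrative):

```terraform
configuration = {
  # ... credentials and format omitted ...
  s3_bucket_name = "airbyte_sync"
  s3_bucket_path = "data_sync/test"

  # "$${...}" produces a literal "${...}" for the connector; with these
  # placeholders, a stream "users" in namespace "public" would land under
  # keys like "data_sync/test/public/users/2024_01_15_<epoch>_...".
  s3_path_format = "$${NAMESPACE}/$${STREAM_NAME}/$${YEAR}_$${MONTH}_$${DAY}_$${EPOCH}_"
}
```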
### Nested Schema for `configuration.format`

Optional:

- `avro_apache_avro` (Attributes) (see below for nested schema)
- `csv_comma_separated_values` (Attributes) (see below for nested schema)
- `json_lines_newline_delimited_json` (Attributes) (see below for nested schema)
- `parquet_columnar_storage` (Attributes) (see below for nested schema)
### Nested Schema for `configuration.format.avro_apache_avro`

Required:

- `compression_codec` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see below for nested schema)

Optional:

- `additional_properties` (String) Parsed as JSON.
- `format_type` (String) Default: "Avro"; must be "Avro"
### Nested Schema for `configuration.format.avro_apache_avro.compression_codec`

Optional:

- `bzip2` (Attributes) (see below for nested schema)
- `deflate` (Attributes) (see below for nested schema)
- `no_compression` (Attributes) (see below for nested schema)
- `snappy` (Attributes) (see below for nested schema)
- `xz` (Attributes) (see below for nested schema)
- `zstandard` (Attributes) (see below for nested schema)
### Nested Schema for `configuration.format.avro_apache_avro.compression_codec.bzip2`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `codec` (String) Default: "bzip2"; must be "bzip2"
### Nested Schema for `configuration.format.avro_apache_avro.compression_codec.deflate`

Required:

- `compression_level` (Number)

Optional:

- `additional_properties` (String) Parsed as JSON.
- `codec` (String) Default: "Deflate"; must be "Deflate"
### Nested Schema for `configuration.format.avro_apache_avro.compression_codec.no_compression`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `codec` (String) Default: "no compression"; must be "no compression"
### Nested Schema for `configuration.format.avro_apache_avro.compression_codec.snappy`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `codec` (String) Default: "snappy"; must be "snappy"
### Nested Schema for `configuration.format.avro_apache_avro.compression_codec.xz`

Required:

- `compression_level` (Number)

Optional:

- `additional_properties` (String) Parsed as JSON.
- `codec` (String) Default: "xz"; must be "xz"
### Nested Schema for `configuration.format.avro_apache_avro.compression_codec.zstandard`

Required:

- `compression_level` (Number)
- `include_checksum` (Boolean)

Optional:

- `additional_properties` (String) Parsed as JSON.
- `codec` (String) Default: "zstandard"; must be "zstandard"
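For codecs that take a level, the nested block carries it alongside the `codec` discriminator. A sketch selecting zstandard for Avro output (the level and checksum values are illustrative choices, not defaults from this page):

```terraform
# Hypothetical fragment: Avro output compressed with zstandard.
format = {
  avro_apache_avro = {
    format_type = "Avro"
    compression_codec = {
      zstandard = {
        codec             = "zstandard"
        compression_level = 3    # illustrative; higher trades speed for size
        include_checksum  = true # illustrative
      }
    }
  }
}
```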
### Nested Schema for `configuration.format.csv_comma_separated_values`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see below for nested schema)
- `flattening` (String) Default: "No flattening"; must be one of ["No flattening", "Root level flattening"]
- `format_type` (String) Default: "CSV"; must be "CSV"
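For instance, to emit gzipped CSV with top-level record fields flattened into individual columns, the nested blocks combine as below (a sketch; only the `format` attribute of `configuration` is shown):

```terraform
# Hypothetical fragment: gzipped CSV with root-level flattening.
format = {
  csv_comma_separated_values = {
    format_type = "CSV"
    flattening  = "Root level flattening"
    compression = {
      gzip = {
        compression_type = "GZIP" # output files get the ".csv.gz" extension
      }
    }
  }
}
```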
### Nested Schema for `configuration.format.csv_comma_separated_values.compression`

Optional:

- `gzip` (Attributes) (see below for nested schema)
- `no_compression` (Attributes) (see below for nested schema)
### Nested Schema for `configuration.format.csv_comma_separated_values.compression.gzip`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `compression_type` (String) Default: "GZIP"; must be "GZIP"
### Nested Schema for `configuration.format.csv_comma_separated_values.compression.no_compression`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `compression_type` (String) Default: "No Compression"; must be "No Compression"
### Nested Schema for `configuration.format.json_lines_newline_delimited_json`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see below for nested schema)
- `flattening` (String) Default: "No flattening"; must be one of ["No flattening", "Root level flattening"]
- `format_type` (String) Default: "JSONL"; must be "JSONL"
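The JSONL format composes the same way as CSV. A minimal sketch emitting uncompressed newline-delimited JSON with records left nested (only the `format` attribute is shown):

```terraform
# Hypothetical fragment: uncompressed JSONL output.
format = {
  json_lines_newline_delimited_json = {
    format_type = "JSONL"
    flattening  = "No flattening"
    compression = {
      no_compression = {
        compression_type = "No Compression"
      }
    }
  }
}
```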
### Nested Schema for `configuration.format.json_lines_newline_delimited_json.compression`

Optional:

- `gzip` (Attributes) (see below for nested schema)
- `no_compression` (Attributes) (see below for nested schema)
### Nested Schema for `configuration.format.json_lines_newline_delimited_json.compression.gzip`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `compression_type` (String) Default: "GZIP"; must be "GZIP"
### Nested Schema for `configuration.format.json_lines_newline_delimited_json.compression.no_compression`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `compression_type` (String) Default: "No Compression"; must be "No Compression"
### Nested Schema for `configuration.format.parquet_columnar_storage`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `block_size_mb` (Number) The size of a row group being buffered in memory. It limits memory usage when writing. Larger values improve IO when reading but consume more memory when writing. Default: 128 MB. Default: 128
- `compression_codec` (String) The compression algorithm used to compress data pages. Default: "UNCOMPRESSED"; must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
- `dictionary_encoding` (Boolean) Default: true
- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size, but for dictionaries. Default: 1024 KB. Default: 1024
- `format_type` (String) Default: "Parquet"; must be "Parquet"
- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB. Default: 8
- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, compression will deteriorate. Default: 1024 KB. Default: 1024
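Putting the Parquet knobs together, a sketch with the documented defaults written out explicitly and an illustrative (non-default) page compression codec:

```terraform
# Hypothetical fragment: Parquet output. All numeric values below echo
# the documented defaults; only compression_codec deviates, as an
# illustrative choice.
format = {
  parquet_columnar_storage = {
    format_type             = "Parquet"
    compression_codec       = "ZSTD"
    block_size_mb           = 128  # row-group buffer; larger = better read IO
    page_size_kb            = 1024 # smallest fully-read unit within a block
    dictionary_encoding     = true
    dictionary_page_size_kb = 1024
    max_padding_size_mb     = 8    # row-group alignment padding / minimum size
  }
}
```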
### Nested Schema for `resource_allocation`

Read-Only:

- `default` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
- `job_specific` (Attributes List) (see below for nested schema)
### Nested Schema for `resource_allocation.default`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
### Nested Schema for `resource_allocation.job_specific`

Read-Only:

- `job_type` (String) Enum describing the different types of jobs the platform runs. Must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- `resource_requirements` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
### Nested Schema for `resource_allocation.job_specific.resource_requirements`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
## Import

Import is supported using the following syntax:

```shell
terraform import airbyte_destination_s3.my_airbyte_destination_s3 ""
```