page_title: airbyte_destination_iceberg Resource - terraform-provider-airbyte
subcategory: ""
description: DestinationIceberg Resource

airbyte_destination_iceberg (Resource)

DestinationIceberg Resource

Example Usage

resource "airbyte_destination_iceberg" "my_destination_iceberg" {
  configuration = {
    catalog_config = {
      glue_catalog = {
        catalog_type = "Glue"
        database     = "public"
      }
      hadoop_catalog_use_hierarchical_file_systems_as_same_as_storage_config = {
        catalog_type = "Hadoop"
        database     = "default"
      }
    }
    format_config = {
      auto_compact                   = true
      compact_target_file_size_in_mb = 9
      flush_batch_size               = 8
      format                         = "Parquet"
    }
    storage_config = {
      server_managed = {
        managed_warehouse_name = "...my_managed_warehouse_name..."
        storage_type           = "MANAGED"
      }
    }
  }
  definition_id = "263446c4-43e9-45cc-ac60-4398823f5d7f"
  name          = "...my_name..."
  workspace_id  = "a348c0e2-12a2-4320-9af6-f59e32031847"
}

Schema

Required

  • configuration (Attributes) (see below for nested schema)
  • name (String) Name of the destination, e.g. dev-mysql-instance.
  • workspace_id (String)

Optional

  • definition_id (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.

Read-Only

  • created_at (Number)
  • destination_id (String)
  • destination_type (String)
  • resource_allocation (Attributes) Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level. (see below for nested schema)

Nested Schema for configuration

Required:

  • catalog_config (Attributes) (see below for nested schema)
  • format_config (Attributes) (see below for nested schema)
  • storage_config (Attributes) (see below for nested schema)

Nested Schema for configuration.catalog_config

Optional:

  • glue_catalog (Attributes) The GlueCatalog connects to an AWS Glue Catalog (see below for nested schema)
  • hadoop_catalog_use_hierarchical_file_systems_as_same_as_storage_config (Attributes) A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename. (see below for nested schema)
  • hive_catalog_use_apache_hive_meta_store (Attributes) (see below for nested schema)
  • jdbc_catalog_use_relational_database (Attributes) Uses a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supported database: PostgreSQL (see below for nested schema)
  • rest_catalog (Attributes) The RESTCatalog connects to a REST server at the specified URI (see below for nested schema)

Nested Schema for configuration.catalog_config.glue_catalog

Optional:

  • catalog_type (String) Default: "Glue"; must be "Glue"
  • database (String) The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"

Nested Schema for configuration.catalog_config.hadoop_catalog_use_hierarchical_file_systems_as_same_as_storage_config

Optional:

  • catalog_type (String) Default: "Hadoop"; must be "Hadoop"
  • database (String) The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"

Nested Schema for configuration.catalog_config.hive_catalog_use_apache_hive_meta_store

Required:

  • hive_thrift_uri (String) Hive MetaStore Thrift server URI of the Iceberg catalog.

Optional:

  • catalog_type (String) Default: "Hive"; must be "Hive"
  • database (String) The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
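
The Hive catalog variant is not shown in the example at the top of this page. The following is a minimal sketch of a catalog_config block using the Hive MetaStore; the Thrift URI is a placeholder you would replace with your own MetaStore address.

catalog_config = {
  hive_catalog_use_apache_hive_meta_store = {
    catalog_type    = "Hive"
    database        = "default"
    hive_thrift_uri = "thrift://host:9083" # placeholder MetaStore URI
  }
}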

Nested Schema for configuration.catalog_config.jdbc_catalog_use_relational_database

Optional:

  • catalog_schema (String) The catalog schema that Iceberg catalog metadata tables are written to. The usual value for this field is "public". Default: "public"
  • catalog_type (String) Default: "Jdbc"; must be "Jdbc"
  • database (String) The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
  • jdbc_url (String)
  • password (String, Sensitive) Password associated with the username.
  • ssl (Boolean) Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false
  • username (String) Username to use to access the database.
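
For a catalog backed by a relational database, a minimal sketch of the catalog_config block might look like the following; the JDBC URL and credential values are placeholders.

catalog_config = {
  jdbc_catalog_use_relational_database = {
    catalog_type   = "Jdbc"
    catalog_schema = "public"
    database       = "public"
    jdbc_url       = "jdbc:postgresql://host:5432/airbyte" # placeholder JDBC URL
    username       = "...my_username..."
    password       = "...my_password..."
    ssl            = false
  }
}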

Nested Schema for configuration.catalog_config.rest_catalog

Required:

  • rest_uri (String)

Optional:

  • catalog_type (String) Default: "Rest"; must be "Rest"
  • rest_credential (String, Sensitive)
  • rest_token (String, Sensitive)
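
A minimal sketch of a catalog_config block pointing at a REST catalog follows; the URI and credential values are placeholders, and the schema above lists both rest_credential and rest_token as optional.

catalog_config = {
  rest_catalog = {
    catalog_type    = "Rest"
    rest_uri        = "http://localhost:8181" # placeholder REST server URI
    rest_credential = "...my_rest_credential..."
    rest_token      = "...my_rest_token..."
  }
}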

Nested Schema for configuration.format_config

Optional:

  • auto_compact (Boolean) Automatically compact data files when a stream closes. Default: false
  • compact_target_file_size_in_mb (Number) Target size of an Iceberg data file when performing a compaction. Default: 100
  • flush_batch_size (Number) Iceberg data file flush batch size. Incoming rows are first written to a cache; when the cache reaches this batch size, it is flushed to an Iceberg data file. Default: 10000
  • format (String) Default: "Parquet"; must be one of ["Parquet", "Avro"]

Nested Schema for configuration.storage_config

Optional:

  • s3 (Attributes) (see below for nested schema)
  • server_managed (Attributes) (see below for nested schema)

Nested Schema for configuration.storage_config.s3

Required:

  • access_key_id (String, Sensitive) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
  • s3_warehouse_uri (String) The warehouse URI for Iceberg
  • secret_access_key (String, Sensitive) The corresponding secret to the access key ID. Read more here

Optional:

  • s3_bucket_region (String) The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
  • s3_endpoint (String) Your S3 endpoint URL. Read more here. Default: ""
  • s3_path_style_access (Boolean) Use path style access. Default: true
  • storage_type (String) Default: "S3"; must be "S3"
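
The example at the top of this page uses server-managed storage. The following is a minimal sketch of an S3-backed storage_config; the warehouse URI, region, and credential values are placeholders.

storage_config = {
  s3 = {
    storage_type         = "S3"
    access_key_id        = "...my_access_key_id..."
    secret_access_key    = "...my_secret_access_key..."
    s3_warehouse_uri     = "s3a://my-bucket/path" # placeholder warehouse URI
    s3_bucket_region     = "us-east-1"            # placeholder region
    s3_endpoint          = ""
    s3_path_style_access = true
  }
}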

Nested Schema for configuration.storage_config.server_managed

Required:

  • managed_warehouse_name (String) The name of the managed warehouse

Optional:

  • storage_type (String) Default: "MANAGED"; must be "MANAGED"

Nested Schema for resource_allocation

Read-Only:

  • default (Attributes) (see below for nested schema)
  • job_specific (Attributes List) (see below for nested schema)

Nested Schema for resource_allocation.default

Read-Only:

  • cpu_limit (String)
  • cpu_request (String)
  • ephemeral_storage_limit (String)
  • ephemeral_storage_request (String)
  • memory_limit (String)
  • memory_request (String)

Nested Schema for resource_allocation.job_specific

Read-Only:

  • job_type (String) Enum that describes the different types of jobs that the platform runs. Must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
  • resource_requirements (Attributes) Optional resource requirements to run workers (blank for unbounded allocations) (see below for nested schema)

Nested Schema for resource_allocation.job_specific.resource_requirements

Read-Only:

  • cpu_limit (String)
  • cpu_request (String)
  • ephemeral_storage_limit (String)
  • ephemeral_storage_request (String)
  • memory_limit (String)
  • memory_request (String)

Import

Import is supported using the following syntax:

terraform import airbyte_destination_iceberg.my_airbyte_destination_iceberg ""