---
page_title: "airbyte_destination_redshift Resource - terraform-provider-airbyte"
subcategory: ""
description: |-
  DestinationRedshift Resource
---

# airbyte_destination_redshift (Resource)

DestinationRedshift Resource

## Example Usage

resource "airbyte_destination_redshift" "my_destination_redshift" {
  configuration = {
    database            = "...my_database..."
    disable_type_dedupe = false
    drop_cascade        = false
    host                = "...my_host..."
    jdbc_url_params     = "...my_jdbc_url_params..."
    password            = "...my_password..."
    port                = 5439
    raw_data_schema     = "...my_raw_data_schema..."
    schema              = "public"
    tunnel_method = {
      ssh_key_authentication = {
        ssh_key     = "...my_ssh_key..."
        tunnel_host = "...my_tunnel_host..."
        tunnel_port = 22
        tunnel_user = "...my_tunnel_user..."
      }
    }
    uploading_method = {
      awss3_staging = {
        access_key_id      = "...my_access_key_id..."
        file_name_pattern  = "{date}"
        purge_staging_data = false
        s3_bucket_name     = "airbyte.staging"
        s3_bucket_path     = "data_sync/test"
        s3_bucket_region   = "eu-west-2"
        secret_access_key  = "...my_secret_access_key..."
      }
    }
    username = "...my_username..."
  }
  definition_id = "50bfb2e7-1ca1-4132-b623-8606f328175d"
  name          = "...my_name..."
  workspace_id  = "e25c2049-8986-4945-a3f6-604de181966d"
}
```

## Schema

### Required

  • configuration (Attributes) (see below for nested schema)
  • name (String) Name of the destination e.g. dev-mysql-instance.
  • workspace_id (String)

### Optional

  • definition_id (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.

### Read-Only

  • created_at (Number)
  • destination_id (String)
  • destination_type (String)
  • resource_allocation (Attributes) Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level. (see below for nested schema)

### Nested Schema for `configuration`

Required:

  • database (String) Name of the database.
  • host (String) Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
  • password (String, Sensitive) Password associated with the username.
  • username (String) Username to use to access the database.

Optional:

  • disable_type_dedupe (Boolean) Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
  • drop_cascade (Boolean) Drop tables with CASCADE. WARNING! This will delete all data in all dependent objects (views, etc.). Use with caution. This option is intended for use cases which can easily rebuild the dependent objects. Default: false
  • jdbc_url_params (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
  • port (Number) Port of the database. Default: 5439
  • raw_data_schema (String) The schema to write raw tables into (default: airbyte_internal).
  • schema (String) The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public". Default: "public"
  • tunnel_method (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see below for nested schema)
  • uploading_method (Attributes) The way data will be uploaded to Redshift. (see below for nested schema)
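
For reference, only the four attributes in the Required list above are mandatory; below is a minimal sketch with placeholder values (the hostname is hypothetical and only illustrates the expected `*.redshift.amazonaws.com` endpoint format) that relies on the documented defaults for everything else:

```terraform
resource "airbyte_destination_redshift" "minimal_example" {
  configuration = {
    database = "...my_database..."
    # Hypothetical endpoint showing the cluster-id/region/.redshift.amazonaws.com format.
    host     = "example-cluster.abc123xyz789.eu-west-2.redshift.amazonaws.com"
    password = "...my_password..."
    username = "...my_username..."
    # port defaults to 5439 and schema to "public" when omitted.
  }
  name         = "...my_name..."
  workspace_id = "...my_workspace_id..."
}
```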

### Nested Schema for `configuration.tunnel_method`

Optional:

  • no_tunnel (Attributes) (see below for nested schema)
  • password_authentication (Attributes) (see below for nested schema)
  • ssh_key_authentication (Attributes) (see below for nested schema)

### Nested Schema for `configuration.tunnel_method.no_tunnel`

### Nested Schema for `configuration.tunnel_method.password_authentication`

Required:

  • tunnel_host (String) Hostname of the jump server host that allows inbound ssh tunnel.
  • tunnel_user (String) OS-level username for logging into the jump server host.
  • tunnel_user_password (String, Sensitive) OS-level password for logging into the jump server host.

Optional:

  • tunnel_port (Number) Port on the proxy/jump server that accepts inbound ssh connections. Default: 22
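
The example at the top of the page uses ssh_key_authentication; a minimal sketch of the same tunnel_method block using password_authentication instead (all values are placeholders) would look like:

```terraform
tunnel_method = {
  password_authentication = {
    tunnel_host          = "...my_tunnel_host..."
    tunnel_port          = 22 # optional, defaults to 22
    tunnel_user          = "...my_tunnel_user..."
    tunnel_user_password = "...my_tunnel_user_password..."
  }
}
```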

### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`

Required:

  • ssh_key (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`).
  • tunnel_host (String) Hostname of the jump server host that allows inbound ssh tunnel.
  • tunnel_user (String) OS-level username for logging into the jump server host.

Optional:

  • tunnel_port (Number) Port on the proxy/jump server that accepts inbound ssh connections. Default: 22

### Nested Schema for `configuration.uploading_method`

Optional:

  • awss3_staging (Attributes) (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details. (see below for nested schema)

### Nested Schema for `configuration.uploading_method.awss3_staging`

Required:

  • access_key_id (String, Sensitive) This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
  • s3_bucket_name (String) The name of the staging S3 bucket.
  • secret_access_key (String, Sensitive) The corresponding secret to the above access key ID. See AWS docs on how to generate an access key ID and secret access key.

Optional:

  • file_name_pattern (String) The pattern allows you to set the file-name format for the S3 staging file(s)
  • purge_staging_data (Boolean) Whether to delete the staging files from S3 after completing the sync. See docs for details. Default: true
  • s3_bucket_path (String) The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
  • s3_bucket_region (String) The region of the S3 staging bucket. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]

### Nested Schema for `resource_allocation`

Read-Only:

  • default (Attributes) (see below for nested schema)
  • job_specific (Attributes List) (see below for nested schema)

### Nested Schema for `resource_allocation.default`

Read-Only:

  • cpu_limit (String)
  • cpu_request (String)
  • ephemeral_storage_limit (String)
  • ephemeral_storage_request (String)
  • memory_limit (String)
  • memory_request (String)

### Nested Schema for `resource_allocation.job_specific`

Read-Only:

  • job_type (String) Enum that describes the different types of jobs that the platform runs. Must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
  • resource_requirements (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)

### Nested Schema for `resource_allocation.job_specific.resource_requirements`

Read-Only:

  • cpu_limit (String)
  • cpu_request (String)
  • ephemeral_storage_limit (String)
  • ephemeral_storage_request (String)
  • memory_limit (String)
  • memory_request (String)

## Import

Import is supported using the following syntax:

```shell
terraform import airbyte_destination_redshift.my_airbyte_destination_redshift ""
```
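
With Terraform 1.5 or later, the same import can also be expressed as a configuration-driven import block; a sketch assuming the import ID is the existing destination's destination_id (shown here as a hypothetical placeholder):

```terraform
import {
  to = airbyte_destination_redshift.my_airbyte_destination_redshift
  id = "<destination_id>" # hypothetical placeholder for the destination's UUID
}
```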