---
page_title: "airbyte_destination_bigquery Resource - terraform-provider-airbyte"
subcategory: ""
description: |-
  DestinationBigquery Resource
---

# airbyte_destination_bigquery (Resource)

DestinationBigquery Resource

## Example Usage

```terraform
resource "airbyte_destination_bigquery" "my_destination_bigquery" {
  configuration = {
    big_query_client_buffer_size_mb = 15
    credentials_json                = "...my_credentials_json..."
    dataset_id                      = "...my_dataset_id..."
    dataset_location                = "US"
    disable_type_dedupe             = true
    loading_method = {
      batched_standard_inserts = {
        # ...
      }
    }
    project_id              = "...my_project_id..."
    raw_data_dataset        = "...my_raw_data_dataset..."
    transformation_priority = "interactive"
  }
  definition_id = "92c3eb2b-6d61-4610-adf2-eee065419ed9"
  name          = "...my_name..."
  workspace_id  = "acee73dd-54d3-476f-a8ea-d39d218f52cd"
}
```

## Schema

### Required

- `configuration` (Attributes) (see below for nested schema)
- `name` (String) Name of the destination, e.g. dev-mysql-instance.
- `workspace_id` (String)

### Optional

- `definition_id` (String) The UUID of the connector definition. One of `configuration.destinationType` or `definitionId` must be provided. Requires replacement if changed.

### Read-Only

- `created_at` (Number)
- `destination_id` (String)
- `destination_type` (String)
- `resource_allocation` (Attributes) Actor- or actor-definition-specific resource requirements. If `default` is set, these are the requirements applied to ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform uses defaults. These values are overridden by configuration at the connection level. (see below for nested schema)

### Nested Schema for `configuration`

Required:

- `dataset_id` (String) The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
- `dataset_location` (String) The location of the dataset. Warning: Changes made after creation will not be applied. Read more here. Must be one of ["US", "EU", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "europe-central1", "europe-central2", "europe-north1", "europe-southwest1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "europe-west7", "europe-west8", "europe-west9", "europe-west12", "me-central1", "me-central2", "me-west1", "northamerica-northeast1", "northamerica-northeast2", "southamerica-east1", "southamerica-west1", "us-central1", "us-east1", "us-east2", "us-east3", "us-east4", "us-east5", "us-south1", "us-west1", "us-west2", "us-west3", "us-west4"]
- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset. Read more here.
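A minimal sketch using only the required attributes above; every ID and name below is a placeholder, not a real resource:

```terraform
# Minimal sketch: only the required attributes. All values are placeholders.
resource "airbyte_destination_bigquery" "minimal" {
  name         = "bigquery-dev"                          # placeholder name
  workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder workspace UUID
  configuration = {
    project_id       = "my-gcp-project"  # placeholder GCP project ID
    dataset_id       = "airbyte_dataset" # placeholder target dataset
    dataset_location = "US"              # must be one of the listed locations
  }
}
```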

Optional:

- `big_query_client_buffer_size_mb` (Number) Google BigQuery client's chunk (buffer) size (min 1, max 15) for each table; the size that will be written by a single RPC. Written data is buffered and only flushed upon reaching this size or closing the channel. The default 15 MB value is used if not set explicitly. Read more here. Default: 15
- `credentials_json` (String, Sensitive) The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
- `disable_type_dedupe` (Boolean) Disable Writing Final Tables. WARNING! The data format in `_airbyte_data` is likely stable, but there are no guarantees that other metadata columns will remain the same in future versions. Default: false
- `loading_method` (Attributes) The way data will be uploaded to BigQuery. (see below for nested schema)
- `raw_data_dataset` (String) The dataset to write raw tables into (default: `airbyte_internal`)
- `transformation_priority` (String) The interactive run type means that the query is executed as soon as possible; these queries count towards the concurrent rate limit and daily limit. Read more about the interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don't count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly. Default: "interactive"; must be one of ["interactive", "batch"]
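Since `credentials_json` expects the raw contents of a service-account key, one idiomatic option is Terraform's built-in `file()` function. A sketch; the key path and all IDs are hypothetical:

```terraform
resource "airbyte_destination_bigquery" "with_key_file" {
  name         = "bigquery-prod"                         # placeholder name
  workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder workspace UUID
  configuration = {
    project_id       = "my-gcp-project"  # placeholder GCP project ID
    dataset_id       = "airbyte_dataset" # placeholder target dataset
    dataset_location = "US"
    # file() reads the service-account key contents; the path is hypothetical.
    credentials_json        = file("${path.module}/sa-key.json")
    transformation_priority = "batch" # queue queries in BigQuery's shared resource pool
  }
}
```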

### Nested Schema for `configuration.loading_method`

Optional:

- `batched_standard_inserts` (Attributes) Direct loading using batched SQL INSERT statements. This method uses the BigQuery driver to convert large INSERT statements into file uploads automatically. (see below for nested schema)
- `gcs_staging` (Attributes) Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO to load your data into BigQuery. (see below for nested schema)

### Nested Schema for `configuration.loading_method.batched_standard_inserts`

### Nested Schema for `configuration.loading_method.gcs_staging`

Required:

- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see below for nested schema)
- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written.

Optional:

- `keep_files_in_gcs_bucket` (String) This upload method temporarily stores records in the GCS bucket; this option selects whether those records are removed from GCS once the migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly. Default: "Delete all tmp files from GCS"; must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]

### Nested Schema for `configuration.loading_method.gcs_staging.credential`

Optional:

- `hmac_key` (Attributes) (see below for nested schema)

### Nested Schema for `configuration.loading_method.gcs_staging.credential.hmac_key`

Required:

- `hmac_key_access_id` (String, Sensitive) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
- `hmac_key_secret` (String, Sensitive) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
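Putting the `gcs_staging` pieces together, a sketch of the staging loading method might look like this; the bucket name, path, HMAC values, and IDs are all placeholders:

```terraform
resource "airbyte_destination_bigquery" "gcs_staged" {
  name         = "bigquery-gcs-staging"                  # placeholder name
  workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder workspace UUID
  configuration = {
    project_id       = "my-gcp-project"  # placeholder GCP project ID
    dataset_id       = "airbyte_dataset" # placeholder target dataset
    dataset_location = "US"
    loading_method = {
      gcs_staging = {
        gcs_bucket_name = "my-staging-bucket" # placeholder bucket
        gcs_bucket_path = "airbyte/staging"   # placeholder directory
        credential = {
          hmac_key = {
            hmac_key_access_id = "...my_hmac_access_id..." # sensitive placeholder
            hmac_key_secret    = "...my_hmac_secret..."    # sensitive placeholder
          }
        }
        # Remove temporary staging files once the migration has finished.
        keep_files_in_gcs_bucket = "Delete all tmp files from GCS"
      }
    }
  }
}
```

In practice the HMAC values would come from a sensitive variable or a secrets store rather than being written inline.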

### Nested Schema for `resource_allocation`

Read-Only:

- `default` (Attributes) (see below for nested schema)
- `job_specific` (Attributes) (see below for nested schema)

### Nested Schema for `resource_allocation.default`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)

### Nested Schema for `resource_allocation.job_specific`

Read-Only:

- `job_type` (String) Enum that describes the different types of jobs that the platform runs. Must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- `resource_requirements` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)

### Nested Schema for `resource_allocation.job_specific.resource_requirements`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)

## Import

Import is supported using the following syntax:

```shell
terraform import airbyte_destination_bigquery.my_airbyte_destination_bigquery ""
```