page_title | subcategory | description
---|---|---
airbyte_source_file Resource - terraform-provider-airbyte | | SourceFile Resource
SourceFile Resource
resource "airbyte_source_file" "my_source_file" {
configuration = {
dataset_name = "...my_dataset_name..."
format = "csv"
provider = {
s3_amazon_web_services = {
aws_access_key_id = "...my_aws_access_key_id..."
aws_secret_access_key = "...my_aws_secret_access_key..."
}
}
reader_options = "{}"
url = "https://storage.googleapis.com/covid19-open-data/v2/latest/epidemiology.csv"
}
definition_id = "a86f29c4-a6d3-472d-a3d8-9e8b8db9cd49"
name = "...my_name..."
secret_id = "...my_secret_id..."
workspace_id = "6c152f5f-2668-4edb-bbeb-b6add70adfbc"
}
Required:

- `configuration` (Attributes) (see below for nested schema)
- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
Optional:

- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided. Requires replacement if changed.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow. Requires replacement if changed.
Read-Only:

- `created_at` (Number)
- `resource_allocation` (Attributes) Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. They are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level. (see below for nested schema)
- `source_id` (String)
- `source_type` (String)
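The read-only attributes are computed by the platform after the source is created and can be referenced from the rest of a configuration. A minimal sketch, assuming the example resource above; the output name is arbitrary:

```terraform
# Expose the platform-assigned source ID so other modules or resources can use it.
output "file_source_id" {
  value = airbyte_source_file.my_source_file.source_id
}
```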
Nested schema for `configuration`

Required:

- `dataset_name` (String) The name of the final table to replicate this file into (should include letters, numbers, dashes and underscores only).
- `provider` (Attributes) The storage provider or location of the file(s) to be replicated. (see below for nested schema)
- `url` (String) The URL path to access the file which should be replicated.

Optional:

- `format` (String) The format of the file to be replicated (Warning: some formats may be experimental; please refer to the docs). Default: "csv"; must be one of ["csv", "json", "jsonl", "excel", "excel_binary", "fwf", "feather", "parquet", "yaml"]
- `reader_options` (String) This should be a string in JSON format. It depends on the chosen file format to provide additional options and tune its behavior (see the sketch after this list).
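Because `reader_options` is a JSON string whose keys depend on the chosen `format`, `jsonencode` keeps it valid HCL. A sketch for a delimiter override on a public CSV; the `sep`/`header` keys, URL, and workspace ID are illustrative assumptions, not values taken from this page:

```terraform
resource "airbyte_source_file" "pipe_delimited_csv" {
  configuration = {
    dataset_name = "pipe_delimited_dataset"
    format       = "csv"
    # reader_options is format-specific; these keys are assumed pandas-style CSV options.
    reader_options = jsonencode({
      sep    = "|"
      header = 0
    })
    provider = {
      https_public_web = {} # no credentials needed for a public URL
    }
    url = "https://example.com/data.csv"
  }
  name         = "pipe-delimited-csv"
  workspace_id = "6c152f5f-2668-4edb-bbeb-b6add70adfbc"
}
```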
Nested schema for `configuration.provider`

Optional:

- `az_blob_azure_blob_storage` (Attributes) (see below for nested schema)
- `gcs_google_cloud_storage` (Attributes) (see below for nested schema)
- `https_public_web` (Attributes) (see below for nested schema)
- `local_filesystem_limited` (Attributes) (see below for nested schema)
- `s3_amazon_web_services` (Attributes) (see below for nested schema)
- `scp_secure_copy_protocol` (Attributes) (see below for nested schema)
- `sftp_secure_file_transfer_protocol` (Attributes) (see below for nested schema)
- `ssh_secure_shell` (Attributes) (see below for nested schema)
Nested schema for `configuration.provider.az_blob_azure_blob_storage`

Required:

- `storage_account` (String) The globally unique name of the storage account that the desired blob sits within. See here for more details.

Optional:

- `sas_token` (String, Sensitive) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a SAS (Shared Access Signature) token. If accessing publicly available data, this field is not necessary.
- `shared_key` (String, Sensitive) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a storage account shared key (aka account key or access key). If accessing publicly available data, this field is not necessary.
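A sketch of the Azure Blob Storage provider block using a SAS token; the storage account name, token, URL path, and workspace ID are placeholders, and the exact URL form expected for blobs is not specified on this page:

```terraform
resource "airbyte_source_file" "from_azure_blob" {
  configuration = {
    dataset_name = "blob_dataset"
    format       = "csv"
    provider = {
      az_blob_azure_blob_storage = {
        storage_account = "mystorageaccount"
        # Either sas_token or shared_key grants access; omit both for public blobs.
        sas_token = "...my_sas_token..."
      }
    }
    url = "container/path/to/file.csv" # placeholder; see the connector docs for the expected path form
  }
  name         = "azure-blob-file"
  workspace_id = "6c152f5f-2668-4edb-bbeb-b6add70adfbc"
}
```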
Nested schema for `configuration.provider.gcs_google_cloud_storage`

Optional:

- `service_account_json` (String, Sensitive) In order to access private buckets stored on Google Cloud, this connector needs service account JSON credentials with the proper permissions, as described here. Please generate the credentials.json file and copy/paste its content into this field (expecting JSON format). If accessing publicly available data, this field is not necessary.
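For private GCS buckets the service account key can be read from a local file instead of being inlined; the key file path, bucket URL, and workspace ID below are assumptions:

```terraform
resource "airbyte_source_file" "from_gcs" {
  configuration = {
    dataset_name = "gcs_dataset"
    format       = "csv"
    provider = {
      gcs_google_cloud_storage = {
        # Contents of a service account key file with read access to the bucket.
        service_account_json = file("${path.module}/credentials.json")
      }
    }
    url = "gs://my-bucket/path/to/file.csv"
  }
  name         = "gcs-file"
  workspace_id = "6c152f5f-2668-4edb-bbeb-b6add70adfbc"
}
```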
Nested schema for `configuration.provider.https_public_web`

Optional:

- `user_agent` (Boolean) Whether to add a User-Agent header to requests. Default: false
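The public-web provider needs no credentials; `user_agent` only controls whether a User-Agent header is sent. A sketch reusing the public dataset URL from the example at the top of this page (the resource name and workspace ID are placeholders):

```terraform
resource "airbyte_source_file" "from_public_url" {
  configuration = {
    dataset_name = "epidemiology"
    format       = "csv"
    provider = {
      https_public_web = {
        user_agent = true # send a User-Agent header; defaults to false
      }
    }
    url = "https://storage.googleapis.com/covid19-open-data/v2/latest/epidemiology.csv"
  }
  name         = "public-web-file"
  workspace_id = "6c152f5f-2668-4edb-bbeb-b6add70adfbc"
}
```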
Nested schema for `configuration.provider.s3_amazon_web_services`

Optional:

- `aws_access_key_id` (String) In order to access private buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
- `aws_secret_access_key` (String, Sensitive) In order to access private buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
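The example at the top of this page inlines the AWS credentials; in practice they are usually passed in as sensitive variables. A sketch under that assumption, with the variable names, bucket URL, and workspace ID chosen for illustration:

```terraform
variable "aws_access_key_id" {
  type      = string
  sensitive = true
}

variable "aws_secret_access_key" {
  type      = string
  sensitive = true
}

resource "airbyte_source_file" "from_s3" {
  configuration = {
    dataset_name = "s3_dataset"
    format       = "parquet"
    provider = {
      s3_amazon_web_services = {
        aws_access_key_id     = var.aws_access_key_id
        aws_secret_access_key = var.aws_secret_access_key
      }
    }
    url = "s3://my-bucket/path/to/file.parquet"
  }
  name         = "s3-file"
  workspace_id = "6c152f5f-2668-4edb-bbeb-b6add70adfbc"
}
```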
Nested schema for `configuration.provider.scp_secure_copy_protocol`

Required:

- `host` (String)
- `user` (String)

Optional:

- `password` (String, Sensitive)
- `port` (String) Default: "22"
Nested schema for `configuration.provider.sftp_secure_file_transfer_protocol`

Required:

- `host` (String)
- `user` (String)

Optional:

- `password` (String, Sensitive)
- `port` (String) Default: "22"
Nested schema for `configuration.provider.ssh_secure_shell`

Required:

- `host` (String)
- `user` (String)

Optional:

- `password` (String, Sensitive)
- `port` (String) Default: "22"
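The `scp_secure_copy_protocol`, `sftp_secure_file_transfer_protocol`, and `ssh_secure_shell` blocks all take the same `host`/`user`/`password`/`port` fields. A sketch using SFTP; the host, credentials, remote path, and workspace ID are placeholders:

```terraform
resource "airbyte_source_file" "from_sftp" {
  configuration = {
    dataset_name = "sftp_dataset"
    format       = "csv"
    provider = {
      sftp_secure_file_transfer_protocol = {
        host     = "sftp.example.com"
        user     = "airbyte"
        password = "...my_password..."
        port     = "22" # note: a string, not a number
      }
    }
    url = "/upload/data.csv"
  }
  name         = "sftp-file"
  workspace_id = "6c152f5f-2668-4edb-bbeb-b6add70adfbc"
}
```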
Nested schema for `resource_allocation`

Read-Only:

- `default` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
- `job_specific` (Attributes List) (see below for nested schema)
Nested schema for `resource_allocation.default`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
Nested schema for `resource_allocation.job_specific`

Read-Only:

- `job_type` (String) Enum describing the different types of jobs that the platform runs. Must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"]
- `resource_requirements` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
Nested schema for `resource_allocation.job_specific.resource_requirements`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
Import is supported using the following syntax:

```shell
terraform import airbyte_source_file.my_airbyte_source_file ""
```
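On Terraform 1.5 and later, the same import can also be expressed as an import block. The ID below is left as the same empty placeholder shown in the CLI command, since this page does not state which identifier belongs in the quotes:

```terraform
import {
  to = airbyte_source_file.my_airbyte_source_file
  id = "" # replace with the source's import ID
}
```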