---
page_title: "airbyte_destination_pgvector Resource - terraform-provider-airbyte"
subcategory: ""
description: |-
  DestinationPgvector Resource
---

# airbyte_destination_pgvector (Resource)

DestinationPgvector Resource

## Example Usage
```terraform
resource "airbyte_destination_pgvector" "my_destination_pgvector" {
  configuration = {
    embedding = {
      open_ai = {
        openai_key = "...my_openai_key..."
      }
    }
    indexing = {
      credentials = {
        password = "AIRBYTE_PASSWORD"
      }
      database       = "AIRBYTE_DATABASE"
      default_schema = "AIRBYTE_SCHEMA"
      host           = "AIRBYTE_ACCOUNT"
      port           = 5432
      username       = "AIRBYTE_USER"
    }
    omit_raw_text = true
    processing = {
      chunk_overlap = 7
      chunk_size    = 8035
      field_name_mappings = [
        {
          from_field = "...my_from_field..."
          to_field   = "...my_to_field..."
        }
      ]
      metadata_fields = [
        "..."
      ]
      text_fields = [
        "..."
      ]
      text_splitter = {
        by_programming_language = {
          language = "js"
        }
      }
    }
  }
  definition_id = "ace91495-b654-40da-a8bd-73a5b3a4b3ee"
  name          = "...my_name..."
  workspace_id  = "0b8f211f-70ad-47f2-a6ea-1e915e8005be"
}
```
## Schema

### Required

- `configuration` (Attributes) The configuration model for the Vector DB based destinations. This model is used to generate the UI for the destination configuration, as well as to provide type safety for the configuration passed to the destination. The configuration model is composed of four parts:
  - Processing configuration
  - Embedding configuration
  - Indexing configuration
  - Advanced configuration

  Processing, embedding, and advanced configuration are provided by this base class, while the indexing configuration is provided by the destination connector in the subclass. (see below for nested schema)
- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)

### Optional

- `definition_id` (String) The UUID of the connector definition. One of `configuration.destinationType` or `definitionId` must be provided. Requires replacement if changed.

### Read-Only

- `created_at` (Number)
- `destination_id` (String)
- `destination_type` (String)
- `resource_allocation` (Attributes) Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level. (see below for nested schema)
### Nested Schema for `configuration`

Required:

- `embedding` (Attributes) Embedding configuration (see below for nested schema)
- `indexing` (Attributes) Postgres can be used to store vector data and retrieve embeddings. (see below for nested schema)
- `processing` (Attributes) (see below for nested schema)

Optional:

- `omit_raw_text` (Boolean) Do not store the text that gets embedded along with the vector and the metadata in the destination. If set to true, only the vector and the metadata will be stored - in this case raw text for LLM use cases needs to be retrieved from another source. Default: false
### Nested Schema for `configuration.embedding`

Optional:

- `azure_open_ai` (Attributes) Use the Azure-hosted OpenAI API to embed text. This option uses the text-embedding-ada-002 model with 1536 embedding dimensions. (see below for nested schema)
- `cohere` (Attributes) Use the Cohere API to embed text. (see below for nested schema)
- `fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see below for nested schema)
- `open_ai` (Attributes) Use the OpenAI API to embed text. This option uses the text-embedding-ada-002 model with 1536 embedding dimensions. (see below for nested schema)
- `open_ai_compatible` (Attributes) Use a service that's compatible with the OpenAI API to embed text. (see below for nested schema)
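These embedding blocks are alternatives; in practice exactly one of them is set. For trying the pipeline end to end without incurring API costs, a minimal sketch swapping in the `fake` provider:

```terraform
# Inside the configuration block: the cost-free fake embedding provider.
embedding = {
  fake = {} # random vectors with 1536 dimensions; no API key required
}
```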
### Nested Schema for `configuration.embedding.azure_open_ai`

Required:

- `api_base` (String) The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
- `deployment` (String) The deployment for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
- `openai_key` (String, Sensitive) The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
### Nested Schema for `configuration.embedding.cohere`

Required:

- `cohere_key` (String, Sensitive)
### Nested Schema for `configuration.embedding.open_ai`

Required:

- `openai_key` (String, Sensitive)
### Nested Schema for `configuration.embedding.open_ai_compatible`

Required:

- `base_url` (String) The base URL for your OpenAI-compatible service.
- `dimensions` (Number) The number of dimensions the embedding model is generating.

Optional:

- `api_key` (String, Sensitive) Default: ""
- `model_name` (String) The name of the model to use for embedding. Default: "text-embedding-ada-002"
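A sketch of pointing the destination at a self-hosted, OpenAI-compatible embedding server; the endpoint, model name, and dimension count below are illustrative placeholders, not provider defaults:

```terraform
# Inside the configuration block: a hypothetical self-hosted embedding service.
embedding = {
  open_ai_compatible = {
    base_url   = "http://embeddings.internal:8080/v1" # hypothetical endpoint
    dimensions = 768                                  # must match the model's output size
    api_key    = "...my_api_key..."                   # optional; Default: ""
    model_name = "nomic-embed-text"                   # hypothetical model name
  }
}
```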
### Nested Schema for `configuration.indexing`

Required:

- `credentials` (Attributes) (see below for nested schema)
- `database` (String) Enter the name of the database that you want to sync data into.
- `host` (String) Enter the account name you want to use to access the database.
- `username` (String) Enter the name of the user you want to use to access the database.

Optional:

- `default_schema` (String) Enter the name of the default schema. Default: "public"
- `port` (Number) Enter the port you want to use to access the database. Default: 5432
### Nested Schema for `configuration.indexing.credentials`

Required:

- `password` (String, Sensitive) Enter the password you want to use to access the database.
### Nested Schema for `configuration.processing`

Required:

- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context of your LLM).

Optional:

- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context. Default: 0
- `field_name_mappings` (Attributes List) List of fields to rename. Not applicable for nested fields, but can be used to rename fields already flattened via dot notation. (see below for nested schema)
- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array. (See the sketch after this list.)
- `text_splitter` (Attributes) Split text fields into chunks based on the specified method. (see below for nested schema)
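A minimal sketch of the dot-notation and wildcard selectors described above, assuming a hypothetical record with a nested `user` object and a `users` array:

```terraform
# Inside the configuration block: selecting nested fields for embedding and metadata.
processing = {
  chunk_size    = 1000
  chunk_overlap = 50
  text_fields = [
    "user.name",    # dot notation: the name field inside the user object
    "users.*.name", # wildcard: every name field across entries of the users array
  ]
  metadata_fields = [
    "created_at", # hypothetical top-level field stored alongside the vector
  ]
}
```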
### Nested Schema for `configuration.processing.field_name_mappings`

Required:

- `from_field` (String) The field name in the source
- `to_field` (String) The field name to use in the destination
### Nested Schema for `configuration.processing.text_splitter`

Optional:

- `by_markdown_header` (Attributes) Split the text by Markdown headers down to the specified header level. If the chunk size fits multiple sections, they will be combined into a single chunk. (see below for nested schema)
- `by_programming_language` (Attributes) Split the text by suitable delimiters based on the programming language. This is useful for splitting code into chunks. (see below for nested schema)
- `by_separator` (Attributes) Split the text by the list of separators until the chunk size is reached, using the earlier mentioned separators where possible. This is useful for splitting text fields by paragraphs, sentences, words, etc. (see below for nested schema)
### Nested Schema for `configuration.processing.text_splitter.by_markdown_header`

Optional:

- `split_level` (Number) Level of markdown headers to split text fields by. Headings down to the specified level will be used as split points. Default: 1
### Nested Schema for `configuration.processing.text_splitter.by_programming_language`

Required:

- `language` (String) Split code in suitable places based on the programming language. Must be one of ["cpp", "go", "java", "js", "php", "proto", "python", "rst", "ruby", "rust", "scala", "swift", "markdown", "latex", "html", "sol"].
### Nested Schema for `configuration.processing.text_splitter.by_separator`

Optional:

- `keep_separator` (Boolean) Whether to keep the separator in the resulting chunks. Default: false
- `separators` (List of String) List of separator strings to split text fields by. The separator itself needs to be wrapped in double quotes, e.g. to split by the dot character, use `"."`. To split by a newline, use `"\n"`.
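Because each separator value must itself be wrapped in double quotes, the HCL strings end up escaped twice. A sketch splitting by paragraphs, then lines, then sentences, assuming the provider passes these strings to the connector verbatim:

```terraform
# Inside the processing block: coarse-to-fine separators for chunking prose.
text_splitter = {
  by_separator = {
    keep_separator = false
    separators = [
      "\"\\n\\n\"", # the value "\n\n": paragraph breaks
      "\"\\n\"",    # the value "\n": line breaks
      "\".\"",      # the value ".": sentence ends
    ]
  }
}
```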
### Nested Schema for `resource_allocation`

Read-Only:

- `default` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
- `job_specific` (Attributes List) (see below for nested schema)
### Nested Schema for `resource_allocation.default`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
### Nested Schema for `resource_allocation.job_specific`

Read-Only:

- `job_type` (String) Enum that describes the different types of jobs that the platform runs. Must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"].
- `resource_requirements` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
### Nested Schema for `resource_allocation.job_specific.resource_requirements`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
## Import

Import is supported using the following syntax:

```shell
terraform import airbyte_destination_pgvector.my_airbyte_destination_pgvector ""
```
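On Terraform 1.5 and later, the same import can also be written declaratively; a sketch using a hypothetical destination ID:

```terraform
# A sketch, assuming Terraform 1.5+ import blocks; the id shown is a placeholder UUID.
import {
  to = airbyte_destination_pgvector.my_airbyte_destination_pgvector
  id = "00000000-0000-0000-0000-000000000000" # replace with your destination ID
}
```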