| title | Manage bulk export destinations |
|---|---|
| description | Configure and manage S3-compatible export destinations for LangSmith bulk exports. |
Update the LangSmith URL in the requests below for self-hosted installs, EU (GCP) (eu.api.smith.langchain.com), or US (AWS) (aws.api.smith.langchain.com).
A destination is a named configuration that tells LangSmith where to write exported trace data. You create a destination once, then reference it by ID when creating export jobs. LangSmith currently supports S3 and any S3-compatible bucket (such as GCS or MinIO) as a destination. Exported data is written in Parquet columnar format and contains equivalent fields to the Run data format.
This page covers:
- The configuration fields needed to set up a destination.
- Required bucket permissions for AWS S3 and GCS.
- How to create a destination via the API, including provider-specific examples and credential options.
- How to rotate destination credentials without recreating the destination.
- How to debug destination errors.
The following information is needed to configure a destination:
- Bucket Name: The name of the S3 bucket where the data will be exported to.
- Prefix: The root prefix within the bucket where the data will be exported to.
- S3 Region: The region of the bucket—required for AWS S3 buckets.
- Endpoint URL: The endpoint URL for the S3 bucket—required for S3 API compatible buckets.
- Access Key: The access key for the S3 bucket.
- Secret Key: The secret key for the S3 bucket.
- Include Bucket in Prefix (optional): Whether to include the bucket name as part of the path prefix. Defaults to `true`. Set to `false` when using virtual-hosted style endpoints where the bucket name is already in the endpoint URL.
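To illustrate the last field, here is a hypothetical sketch of how `include_bucket_in_prefix` changes the object key prefix that exports are written under (the actual export layout appends further path components after this prefix):

```python
def export_object_prefix(bucket_name: str, prefix: str,
                         include_bucket_in_prefix: bool = True) -> str:
    # Illustrative only: with the default (True), the bucket name is
    # repeated as the first path component under the bucket itself.
    if include_bucket_in_prefix:
        return f"{bucket_name}/{prefix}"
    return prefix

print(export_object_prefix("my_bucket", "data_exports"))         # my_bucket/data_exports
print(export_object_prefix("my_bucket", "data_exports", False))  # data_exports
```

With a virtual-hosted style endpoint such as `https://my_bucket.s3.us-east-1.amazonaws.com`, the bucket is already part of the hostname, which is why `false` avoids duplicating it in the key path.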
We support any S3 compatible bucket. For non-AWS buckets such as GCS or MinIO, you will need to provide the endpoint URL.
Both the backend and queue services require write access to the destination bucket:
- The `backend` service attempts to write a test file to the destination bucket when the export destination is created. It will delete the test file if it has permission to do so (delete access is optional).
- The `queue` service is responsible for bulk export execution and for uploading the files to the bucket.
The minimal AWS S3 permission policy relies on the following permissions:
- `s3:PutObject` (required): Allows writing Parquet files to the bucket.
- `s3:DeleteObject` (optional): Cleans up test files during destination creation. If this permission isn't present, the test file is left under the `/tmp` directory after destination creation.
- `s3:GetObject` (optional but recommended): Verifies file size after writing.
- `s3:AbortMultipartUpload` (optional but recommended): Avoids dangling multipart uploads.
Minimal IAM policy example:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::YOUR_BUCKET_NAME/*"
]
}
]
}
Recommended IAM policy example with additional permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:DeleteObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::YOUR_BUCKET_NAME/*"
]
}
]
}
When using GCS with the S3-compatible XML API, the following IAM permissions are required:
- `storage.objects.create` (required): Allows writing files to the bucket.
- `storage.objects.delete` (optional): Cleans up test files during destination creation. If this permission isn't present, the test file is left under the `/tmp` directory after destination creation.
- `storage.objects.get` (optional but recommended): Verifies file size after writing.
These permissions can be granted through the "Storage Object Admin" predefined role or a custom role.
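If you prefer a custom role over the broad "Storage Object Admin" role, the three permissions above can be bundled into one. The sketch below uses the YAML file format accepted by `gcloud iam roles create --file`; the role title is illustrative:

```yaml
# custom-role.yaml — create with:
#   gcloud iam roles create langsmithBulkExport --project=YOUR_PROJECT --file=custom-role.yaml
title: LangSmith Bulk Export Writer   # illustrative name
stage: GA
includedPermissions:
  - storage.objects.create   # required: write export files
  - storage.objects.delete   # optional: clean up test files
  - storage.objects.get      # recommended: verify file size after writing
```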
The following example demonstrates how to create a destination using cURL. Replace the placeholder values with your actual configuration details. Note that credentials will be stored securely in an encrypted form in our system.
curl --request POST \
--url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
--header 'Content-Type: application/json' \
--header 'X-API-Key: YOUR_API_KEY' \
--header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
--data '{
"destination_type": "s3",
"display_name": "My S3 Destination",
"config": {
"bucket_name": "your-s3-bucket-name",
"prefix": "root_folder_prefix",
"region": "your aws s3 region",
"endpoint_url": "your endpoint url for s3 compatible buckets",
"include_bucket_in_prefix": true // defaults to true, can be omitted
},
"credentials": {
"access_key_id": "YOUR_S3_ACCESS_KEY_ID",
"secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
}
}'
Use the returned `id` to reference this destination in subsequent bulk export operations.
If you receive an error while creating a destination, see Debug destination errors for troubleshooting steps.
Requires LangSmith Helm version >= 0.10.34 (application version >= 0.10.91)
In addition to static `access_key_id` and `secret_access_key` pairs, the following credential formats are supported:
- To use temporary credentials that include an AWS session token, additionally provide the `credentials.session_token` key when creating the bulk export destination.
- (Self-hosted only) To use environment-based credentials such as AWS IAM Roles for Service Accounts (IRSA), omit the `credentials` key from the request when creating the bulk export destination. In this case, the standard Boto3 credential locations will be checked in the order defined by the library.
For AWS S3, omit the `endpoint_url` and supply the `region` of your bucket.
curl --request POST \
--url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
--header 'Content-Type: application/json' \
--header 'X-API-Key: YOUR_API_KEY' \
--header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
--data '{
"destination_type": "s3",
"display_name": "My AWS S3 Destination",
"config": {
"bucket_name": "my_bucket",
"prefix": "data_exports",
"region": "us-east-1"
},
"credentials": {
"access_key_id": "YOUR_S3_ACCESS_KEY_ID",
"secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
}
}'
When using a Google Cloud Storage (GCS) bucket, you need to use the S3-compatible XML API and supply the `endpoint_url`, which is typically `https://storage.googleapis.com`. Here is an example request using the GCS XML API:
curl --request POST \
--url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
--header 'Content-Type: application/json' \
--header 'X-API-Key: YOUR_API_KEY' \
--header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
--data '{
"destination_type": "s3",
"display_name": "My GCS Destination",
"config": {
"bucket_name": "my_bucket",
"prefix": "data_exports",
"endpoint_url": "https://storage.googleapis.com"
"include_bucket_in_prefix": true // defaults to true, can be omitted
},
"credentials": {
"access_key_id": "YOUR_S3_ACCESS_KEY_ID",
"secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
}
}'
See the Google documentation for more information.
If your endpoint URL already includes the bucket name (virtual-hosted style), set include_bucket_in_prefix to false to avoid duplicating the bucket name in the path:
curl --request POST \
--url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
--header 'Content-Type: application/json' \
--header 'X-API-Key: YOUR_API_KEY' \
--header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
--data '{
"destination_type": "s3",
"display_name": "My Virtual-Hosted Destination",
"config": {
"bucket_name": "my_bucket",
"prefix": "data_exports",
"endpoint_url": "https://my_bucket.s3.us-east-1.amazonaws.com",
"include_bucket_in_prefix": false
},
"credentials": {
"access_key_id": "YOUR_S3_ACCESS_KEY_ID",
"secret_access_key": "YOUR_S3_SECRET_ACCESS_KEY"
}
}'
Use `PATCH /api/v1/bulk-exports/destinations/{destination_id}` to update the credentials on an existing destination. This lets you rotate or replace credentials without recreating the destination or its associated bulk exports. The destination configuration (bucket, prefix, region, endpoint, etc.) is unchanged; only the credentials are replaced.
The changeover is not instantaneous:
- New bulk export runs use the updated credentials immediately after the PATCH completes.
- Already running bulk export runs continue using the previous credentials until they finish.
- Both sets of credentials are active simultaneously during the transition period. This window lasts up to the maximum runtime of a single bulk export run.
Plan your rotation accordingly: the old credentials must remain valid until all in-flight runs complete.
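As a rough planning aid, the revocation window can be computed from the rotation time and the maximum runtime of a single bulk export run. This is an illustrative sketch only; `MAX_RUN_DURATION` is an assumption you should replace with the longest run duration you observe in your own workspace:

```python
from datetime import datetime, timedelta, timezone

# Assumption for illustration: no single bulk export run exceeds 24 hours.
MAX_RUN_DURATION = timedelta(hours=24)

def earliest_safe_revoke(patch_completed_at: datetime) -> datetime:
    # Runs already in flight when the PATCH completes keep using the old
    # credentials until they finish, so the old keys must stay valid for
    # up to one full run duration after rotation.
    return patch_completed_at + MAX_RUN_DURATION

patched = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(earliest_safe_revoke(patched))  # 2025-06-02 00:00:00+00:00
```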
curl --request PATCH \
--url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations/{destination_id}' \
--header 'Content-Type: application/json' \
--header 'X-API-Key: YOUR_API_KEY' \
--header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
--data '{
"credentials": {
"access_key_id": "YOUR_NEW_ACCESS_KEY_ID",
"secret_access_key": "YOUR_NEW_SECRET_ACCESS_KEY"
}
}'
The `session_token` field is optional; include it when supplying temporary credentials.
Required permission: WORKSPACES_MANAGE
Before storing new credentials, LangSmith validates them by performing a test write to the bucket using the existing destination configuration. The request fails with 400 if the credentials do not have sufficient write permissions. If the request fails, refer to Debug destination errors.
Returns the updated destination object. Credential values are never returned—only the credential field names are included in the response under credentials_keys.
{
"id": "destination-uuid",
"tenant_id": "tenant-uuid",
"created_at": "2025-01-01T00:00:00Z",
"updated_at": "2025-06-01T00:00:00Z",
"credentials_keys": ["access_key_id", "secret_access_key"]
}
To rotate credentials safely:
- Provision new credentials in your cloud provider with write access to the destination bucket and prefix.
- Call the PATCH endpoint with the new credentials. LangSmith validates them before saving.
- Keep old credentials active until all in-flight bulk export runs finish (up to the maximum run duration).
- Revoke old credentials once no runs are using them.
The destinations API endpoint will validate that the destination and credentials are valid and that write access is present for the bucket.
If you receive an error, you can use the AWS CLI to test connectivity to the bucket. You should be able to write a file with the CLI using the same values that you supplied to the destinations API above.
AWS S3:
aws configure
# set the same access key credentials and region as you used for the destination
> AWS Access Key ID: <access_key_id>
> AWS Secret Access Key: <secret_access_key>
> Default region name [us-east-1]: <region>
# List buckets
aws s3 ls
# test write permissions
touch ./test.txt
aws s3 cp ./test.txt s3://<bucket-name>/tmp/test.txt
GCS Compatible Buckets:
You will need to supply the endpoint URL via the `--endpoint-url` option. For GCS, this is typically `https://storage.googleapis.com`:
aws configure
# set the same access key credentials and region as you used for the destination
> AWS Access Key ID: <access_key_id>
> AWS Secret Access Key: <secret_access_key>
> Default region name [us-east-1]: <region>
# List buckets
aws s3 --endpoint-url=<endpoint_url> ls
# test write permissions
touch ./test.txt
aws s3 --endpoint-url=<endpoint_url> cp ./test.txt s3://<bucket-name>/tmp/test.txt
Here are some common errors:
| Error | Description |
|---|---|
| Access denied | The blob store credentials or bucket are not valid. This error occurs when the provided access key and secret key combination doesn't have the necessary permissions to access the specified bucket or perform the required operations. |
| Bucket is not valid | The specified blob store bucket is not valid. This error is thrown when the bucket doesn't exist or there is not enough access to perform writes on the bucket. |
| Key ID you provided does not exist | The blob store credentials provided are not valid. This error occurs when the access key ID used for authentication is not a valid key. |
| Invalid endpoint | The `endpoint_url` provided is not a valid S3-compatible endpoint. Only S3-compatible endpoints are supported, for example `https://storage.googleapis.com` for GCS or `https://play.min.io` for MinIO. If using AWS S3, omit the `endpoint_url`. |