Note: This setup guide is for the experimental QuixLake v2 Timeseries Preview. Configuration options may change before final platform integration.
This guide walks you through the initial configuration of the QuixLake template, including setting up secrets and configuring storage.
The template requires several secrets to be configured in your Quix environment. Quix manages secrets automatically during the synchronization process.
For more details on secrets management, see the Quix Secrets Management documentation.
1. Press the Sync button in the top right corner of the Quix UI
2. Quix will prompt you to add secrets - enter values for any missing secrets
3. Deploy the pipeline - Quix will deploy all services with your configured secrets
| Secret Key | Used By | Description |
|---|---|---|
| `s3_user` | MinIO, API, Catalog, Sink | Username for S3-compatible storage access |
| `s3_secret` | MinIO, API, Catalog, Sink | Password for S3-compatible storage access |
| `postgres_password` | PostgreSQL, Catalog | Password for PostgreSQL database |
The initial setup uses MinIO as local S3-compatible storage. Since MinIO is deployed fresh with your environment, you define these credentials yourself:

- Choose a username for `s3_user` (e.g., `admin`, `minio_admin`)
- Choose a strong password for `s3_secret`

These values will be used to:

- Initialize MinIO with these credentials (`MINIO_ROOT_USER` / `MINIO_ROOT_PASSWORD`)
- Authenticate all services that access MinIO (`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`)
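Conceptually, the two secrets fan out to environment variables like this (an illustrative sketch only - the actual wiring is managed by the template during synchronization):

```
# MinIO deployment (illustrative)
MINIO_ROOT_USER:     {{ secrets.s3_user }}
MINIO_ROOT_PASSWORD: {{ secrets.s3_secret }}

# API / Catalog / Sink deployments (illustrative)
AWS_ACCESS_KEY_ID:     {{ secrets.s3_user }}
AWS_SECRET_ACCESS_KEY: {{ secrets.s3_secret }}
```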
PostgreSQL is used as the metadata backend for the Iceberg Catalog. Since it's deployed fresh:
- Choose a strong password for `postgres_password`
This password will be used to:
- Initialize the PostgreSQL database
- Allow the Catalog service to connect to PostgreSQL
Important: Store these credentials securely. Once set, you cannot retrieve them from Quix - you can only overwrite them with new values. If you lose these credentials, you'll need to reset them and potentially lose access to existing data.
For example:

```
s3_user: myadminuser
s3_secret: MySecureP@ssw0rd!2024
postgres_password: AnotherSecureP@ss!
```
After the synchronization completes, verify all services are running:
1. Check Deployment Status - Ensure all services start successfully:
   - PostgreSQL
   - MinIO
   - MinIO Proxy
   - Quix TS Datalake Catalog
   - Quix TS Datalake API
   - Quix TS Query UI
2. Verify MinIO - Access the MinIO console through the MinIO Proxy public URL to confirm storage is working
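If you prefer the command line, MinIO also exposes a liveness endpoint you can probe through the proxy (a sketch; replace the placeholder with your MinIO Proxy public URL):

```
curl -f https://<minio-proxy-url>/minio/health/live
```

A `200` response indicates the MinIO server is up.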
To test your setup with sample data, you first need to switch to the "Example pipeline" group in the pipeline view:
Then:
- Start the TSBS Data Generator job to produce sample time-series data
- The TSBS Transformer and Quix TS Datalake Sink services will process and store the data
- Open the Query UI (Data Explorer) to run queries, for example:

```sql
SELECT * FROM sensordata LIMIT 10;
```
If you want to use AWS S3 instead of the local MinIO storage, you'll need to update several variables.
The following variables control S3/storage connectivity and must be updated in multiple deployments:
| Variable | Default Value | For AWS S3 |
|---|---|---|
| `AWS_ENDPOINT_URL` | `http://minio:9000` | `https://s3.<region>.amazonaws.com` |
| `AWS_REGION` | `local` | Your AWS region (e.g., `eu-west-1`, `us-east-1`) |
| `S3_BUCKET` | `quixdatalaketest` | Your AWS S3 bucket name |
You must update these variables in the following deployments:
- Quix TS Datalake API
  - `AWS_ENDPOINT_URL`: Set to `https://s3.<region>.amazonaws.com`
  - `AWS_REGION`: Set to your AWS region
  - `S3_BUCKET`: Set to your bucket name
- Quix TS Datalake Catalog
  - `AWS_REGION`: Set to your AWS region
  - `S3_BUCKET`: Set to your bucket name
- quix-ts-datalake-sink
  - `AWS_ENDPOINT_URL`: Set to `https://s3.<region>.amazonaws.com`
  - `AWS_REGION`: Set to your AWS region
  - `S3_BUCKET`: Set to your bucket name
Update your secrets with AWS IAM credentials:
| Secret Key | Value |
|---|---|
| `s3_user` | Your AWS Access Key ID |
| `s3_secret` | Your AWS Secret Access Key |
For a bucket named `my-company-datalake` in `eu-west-1`:
Variables (in each deployment):

```
AWS_ENDPOINT_URL: https://s3.eu-west-1.amazonaws.com
AWS_REGION: eu-west-1
S3_BUCKET: my-company-datalake
```

Secrets:

```
s3_user: AKIAIOSFODNN7EXAMPLE
s3_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
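Before syncing, you can optionally confirm that the credentials can reach the bucket with the AWS CLI (a sketch; assumes the CLI is installed locally, using the example bucket and region above):

```
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
aws s3 ls s3://my-company-datalake --region eu-west-1
```

If the listing succeeds, the same credentials should work for the template's services.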
Ensure your AWS IAM user/role has the following permissions on your S3 bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```

After migrating to AWS S3, you can optionally stop or remove the MinIO-related deployments to save resources:
- MinIO
- MinIO Proxy
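As a quick sanity check before attaching a policy, a short script can verify that a policy document grants every action the datalake services need. This is a hypothetical helper, not part of the template; it only inspects `Allow` statements and ignores resource scoping:

```python
import json

# Actions required by the datalake services, per the policy above.
REQUIRED_ACTIONS = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"}

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::your-bucket-name", "arn:aws:s3:::your-bucket-name/*"]
    }
  ]
}
""")

def granted_actions(doc):
    """Collect the actions granted by all Allow statements in a policy document."""
    actions = set()
    for stmt in doc.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            action = stmt.get("Action", [])
            # "Action" may be a single string or a list of strings.
            actions.update([action] if isinstance(action, str) else action)
    return actions

missing = REQUIRED_ACTIONS - granted_actions(policy)
print("missing actions:", sorted(missing))  # prints: missing actions: []
```

An empty result means the policy covers the basic object and listing operations; it does not check that the `Resource` ARNs match your actual bucket.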
The same approach works for other S3-compatible storage providers (e.g., Google Cloud Storage, DigitalOcean Spaces, Cloudflare R2):

- Set `AWS_ENDPOINT_URL` to the provider's S3-compatible endpoint
- Set `AWS_REGION` as required by the provider
- Update `S3_BUCKET` to your bucket name
- Configure the `s3_user` and `s3_secret` secrets with your provider's credentials
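The endpoint formats differ by provider. The following hypothetical helper sketches the `AWS_ENDPOINT_URL` values for a few common providers; the URL patterns are assumptions based on each provider's public documentation, not values defined by the template:

```python
def s3_endpoint(provider: str, region: str = "", account_id: str = "") -> str:
    """Build an S3-compatible endpoint URL for a given provider (illustrative)."""
    if provider == "aws":
        # AWS regional S3 endpoint, as used in the examples above.
        return f"https://s3.{region}.amazonaws.com"
    if provider == "digitalocean":
        # DigitalOcean Spaces endpoints are region-based.
        return f"https://{region}.digitaloceanspaces.com"
    if provider == "cloudflare-r2":
        # Cloudflare R2 endpoints are account-based, not region-based.
        return f"https://{account_id}.r2.cloudflarestorage.com"
    raise ValueError(f"unknown provider: {provider}")

print(s3_endpoint("aws", region="eu-west-1"))
# prints: https://s3.eu-west-1.amazonaws.com
```

Always confirm the exact endpoint and region requirements in your provider's documentation before syncing.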
Secrets issues:

- Verify all secrets are configured correctly
- Check that secret names match exactly (`s3_user`, `s3_secret`, `postgres_password`)

MinIO authentication errors:

- Ensure the `s3_user` and `s3_secret` secrets are set
- Check MinIO deployment logs for authentication errors

PostgreSQL connection errors:

- Verify `postgres_password` is set correctly
- Check that PostgreSQL is running and healthy

AWS S3 connection errors:

- Verify AWS credentials are correct
- Check IAM permissions on the S3 bucket
- Ensure `AWS_ENDPOINT_URL` is set to `https://s3.<region>.amazonaws.com` (not the MinIO URL)
- Verify `AWS_REGION` matches your bucket's region


