Build real-time Postgres replication applications in Rust
Documentation · Examples · Issues
ETL is a Rust framework by Supabase for building high‑performance, real‑time data replication apps on Postgres. It sits on top of Postgres logical replication and gives you a clean, Rust‑native API for streaming changes to your own destinations.
- Real‑time replication: stream changes in real time to your own destinations
- High performance: configurable batching and parallelism to maximize throughput
- Fault-tolerant: robust error handling and retry logic built-in
- Extensible: implement your own custom destinations and state/schema stores
- Production destinations: BigQuery and Apache Iceberg officially supported
- Type-safe: fully typed Rust API with compile-time guarantees
PostgreSQL Version: ETL officially supports and tests against PostgreSQL 14, 15, 16, and 17.
- PostgreSQL 15+ is recommended for access to advanced publication features including:
  - Column-level filtering
  - Row-level filtering with `WHERE` clauses
  - `FOR ALL TABLES IN SCHEMA` syntax
For detailed configuration instructions, see the Configure Postgres documentation.
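To make the PG15+ publication features above concrete, here is a small sketch that only assembles the corresponding DDL strings; the publication, table, and filter names are illustrative, and you would run the resulting statements against Postgres with psql or any client:

```rust
// Sketch only: builds PG15+ publication DDL strings for the features listed
// above. Publication, table, and filter names are made up for illustration.
fn row_filtered_publication(publication: &str, table: &str, filter: &str) -> String {
    // Row-level filtering (PostgreSQL 15+): replicate only rows matching `filter`.
    format!("CREATE PUBLICATION {publication} FOR TABLE {table} WHERE ({filter})")
}

fn schema_publication(publication: &str, schema: &str) -> String {
    // FOR ALL TABLES IN SCHEMA (PostgreSQL 15+): replicate every table in `schema`.
    format!("CREATE PUBLICATION {publication} FOR ALL TABLES IN SCHEMA {schema}")
}

fn main() {
    println!("{}", row_filtered_publication("my_publication", "orders", "status = 'active'"));
    println!("{}", schema_publication("my_publication", "public"));
}
```

On PostgreSQL 14 these statements fail, which is one reason 15+ is recommended above.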
Install via Git while we prepare for a crates.io release:
```toml
[dependencies]
etl = { git = "https://github.com/supabase/etl" }
```

Quick example using the in‑memory destination:
```rust
use etl::{
    config::{BatchConfig, PgConnectionConfig, PipelineConfig, TlsConfig},
    destination::memory::MemoryDestination,
    pipeline::Pipeline,
    store::both::memory::MemoryStore,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pg = PgConnectionConfig {
        host: "localhost".into(),
        port: 5432,
        name: "mydb".into(),
        username: "postgres".into(),
        password: Some("password".into()),
        tls: TlsConfig { enabled: false, trusted_root_certs: String::new() },
    };

    let store = MemoryStore::new();
    let destination = MemoryDestination::new();

    let config = PipelineConfig {
        id: 1,
        publication_name: "my_publication".into(),
        pg_connection: pg,
        batch: BatchConfig { max_size: 1000, max_fill_ms: 5000 },
        table_error_retry_delay_ms: 10_000,
        table_error_retry_max_attempts: 5,
        max_table_sync_workers: 4,
    };

    // Start the pipeline.
    let mut pipeline = Pipeline::new(config, store, destination);
    pipeline.start().await?;

    // Wait for the pipeline indefinitely.
    pipeline.wait().await?;

    Ok(())
}
```

For tutorials and deeper guidance, see the Documentation or jump into the examples.
ETL is designed to be extensible. You can implement your own destinations, and the project currently ships with the following maintained options:
- BigQuery – full CRUD-capable replication for analytics workloads
- Apache Iceberg – append-only log of operations (updates coming soon)
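To give a feel for the extension point, here is a self-contained sketch of the pattern a custom destination follows: receive batches of change events and persist them somewhere. The trait and event type below are simplified stand-ins invented for this example, not ETL's actual `Destination` API (the real trait is async and richer; see the Documentation):

```rust
// Simplified stand-ins for illustration only; ETL's real Destination trait
// is async and carries typed rows, schemas, and error types.
#[derive(Debug, Clone, PartialEq)]
enum Event {
    Insert { table: String, row: String },
    Update { table: String, row: String },
    Delete { table: String, row: String },
}

// The core pattern: one method that persists a batch of events.
trait Destination {
    fn write_batch(&mut self, events: Vec<Event>) -> Result<(), String>;
}

// A toy destination that buffers events in memory, in the spirit of
// MemoryDestination from the quick example above.
#[derive(Default)]
struct VecDestination {
    events: Vec<Event>,
}

impl Destination for VecDestination {
    fn write_batch(&mut self, events: Vec<Event>) -> Result<(), String> {
        self.events.extend(events);
        Ok(())
    }
}

fn main() {
    let mut dest = VecDestination::default();
    dest.write_batch(vec![Event::Insert {
        table: "orders".into(),
        row: "id=1".into(),
    }])
    .unwrap();
    println!("buffered {} event(s)", dest.events.len());
}
```

Because the pipeline hands destinations whole batches, a real implementation can translate each batch into one bulk write against its backing system.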
Enable the destinations you need through the etl-destinations crate:
```toml
[dependencies]
etl = { git = "https://github.com/supabase/etl" }
etl-destinations = { git = "https://github.com/supabase/etl", features = ["bigquery"] }
```

See DEVELOPMENT.md for setup instructions, migration workflows, and development guidelines.
We welcome pull requests and GitHub issues. We currently cannot accept new custom destinations unless there is significant community demand, as each destination carries a long-term maintenance cost; we are prioritizing core stability, observability, and ergonomics. If you need a destination that is not yet supported, please open a discussion or issue so we can gauge demand before committing to an implementation.
Apache‑2.0. See LICENSE for details.
Made with ❤️ by the Supabase team