⚠️ ALPHA VERSION - This is an early release for testing and feedback. Not recommended for production use without thorough testing. Please report issues and contribute improvements!
A progressive migration system for Cloudflare Workers that allows you to migrate from D1 to Postgres without downtime. Works with any existing D1 database - automatically discovers your schema and generates the appropriate Postgres migration scripts.
- Works with any D1 database - No need to modify your existing schema
- Automatic schema discovery - Infers your D1 table structure and generates Postgres equivalents
- Progressive migration path from D1 to Postgres
- Automatic write mirroring using Cloudflare Queues
- Configurable primary database switching
- Built-in conflict resolution with operation IDs
- Streaming data export for existing D1 data
- Generic SQL execution - Execute any SQL query through the API
You'll need the following resources:
- Cloudflare account with Workers enabled
- Existing D1 database with your data and schema
- Postgres database accessible from the internet
- Hyperdrive connection configured for your Postgres database
- Cloudflare Queue for async mirroring
- Node.js 18 or later
- Wrangler CLI (`npm install -g wrangler`)
Before deploying this alpha version, make sure you have:
- Existing D1 Database: Use your current D1 database ID in `wrangler.toml`
- Created Queue: Run `wrangler queues create mirror-writes`
- Setup Hyperdrive: Create a connection in the Cloudflare Dashboard for your Postgres database
- Updated wrangler.toml: Replace placeholder IDs with your actual resource IDs
- Generated Migration Script: Use the `/migration-script` endpoint to create the Postgres schema
- Setup Postgres Schema: Run the generated migration script in your Postgres database
- Tested Locally: Verify `npm run dev` works and the API endpoints respond
- Validated Configuration: Run `npm run validate` to check the D1 setup
Install dependencies and log in to Cloudflare:

cd d1-auto-mirror-extended
npm install
wrangler login

Create the queue (you should already have a D1 database):

wrangler queues create mirror-writes

Create a Hyperdrive connection:
- Navigate to Cloudflare Dashboard → Hyperdrive
- Create a new connection using your Postgres database details
- Copy the generated Hyperdrive ID
Edit `wrangler.toml` and replace the placeholder values with the IDs of your actual resources, starting with your existing D1 database:
[[d1_databases]]
binding = "DB"
database_name = "your-existing-db-name"
database_id = "YOUR_EXISTING_D1_DATABASE_ID" # Get from: wrangler d1 list
[[hyperdrive]]
binding = "PG"
id = "YOUR_HYPERDRIVE_ID" # From Cloudflare dashboard
[[queues.producers]]
binding = "MIRROR_QUEUE"
queue = "mirror-writes"Start the worker:
Start the worker:

npm run dev

In another terminal, generate the Postgres migration script from your D1 schema:
curl "http://localhost:8787/migration-script" > postgres-migration.sqlReview the generated script and run it in your Postgres database:
psql your_postgres_db < postgres-migration.sql

Export your existing D1 data to Postgres:
# Get list of tables
curl "http://localhost:8787/tables"
# Export each table
curl "http://localhost:8787/export?table=your_table_name" > your_table_export.sql
# Import to Postgres
psql your_postgres_db < your_table_export.sql

Execute a test query to verify both databases work:
curl -X POST http://localhost:8787/execute \
-H "Content-Type: application/json" \
-d '{"sql":"SELECT * FROM your_table_name LIMIT 5"}'Verify data appears in both D1 and Postgres databases.
The worker exposes a small HTTP API.

POST /execute: Execute any SQL query on your database.
Request:
{
"sql": "SELECT * FROM users WHERE active = ?",
"params": [true]
}

Response:
{
"success": true,
"results": [...],
"meta": {
"changes": 0,
"rows_read": 5,
"rows_written": 0
}
}

GET /schema: Get complete database schema information.
Response:
{
"tables": ["users", "posts", "comments"],
"schema": {
"users": [
{"name": "id", "type": "INTEGER", "pk": 1, "notnull": 1},
{"name": "email", "type": "TEXT", "pk": 0, "notnull": 1}
]
}
}

GET /tables: Get a list of all tables in your database.
Response:
{
"tables": ["users", "posts", "comments"]
}

GET /migration-script: Generate a Postgres migration script from your D1 schema.
Response: SQL file download with CREATE TABLE statements
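Conceptually, the script is produced by mapping SQLite column types onto Postgres equivalents, driven by the same PRAGMA table_info data shown in the /schema response above. The project's actual mapping may differ; the sketch below illustrates the general idea, and the helper name `sqliteTypeToPostgres` is hypothetical:

```typescript
// Illustrative only: one plausible SQLite -> Postgres type mapping.
function sqliteTypeToPostgres(sqliteType: string, isPk: boolean): string {
  const t = sqliteType.toUpperCase();
  // SQLite's INTEGER PRIMARY KEY is an auto-incrementing rowid alias
  if (isPk && t === "INTEGER") return "BIGSERIAL PRIMARY KEY";
  if (t.includes("INT")) return "BIGINT";
  if (t.includes("CHAR") || t.includes("TEXT") || t.includes("CLOB")) return "TEXT";
  if (t.includes("BLOB") || t === "") return "BYTEA";
  if (t.includes("REAL") || t.includes("FLOA") || t.includes("DOUB")) return "DOUBLE PRECISION";
  return "NUMERIC"; // SQLite's catch-all affinity
}
```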
GET /export: Export data from D1 for migration.
Parameters:
- `table` - Table name to export
- `format` - `sql` (default) or `json`
- `batchSize` - Rows per batch (default: 1000)
Examples:
# Export as SQL
curl "http://localhost:8787/export?table=users" > users_export.sql
# Export as JSON
curl "http://localhost:8787/export?table=users&format=json" > users_export.json
# List all tables
curl "http://localhost:8787/export"[vars]
PRIMARY_DB = "d1"- All reads from D1
- All writes to D1 + async mirror to Postgres via Queue
- Postgres builds up identical dataset
Once Postgres has a complete copy of the data, switch it to primary:

[vars]
PRIMARY_DB = "pg"

- All reads from Postgres
- All writes to Postgres + sync write to D1
- D1 becomes backup/fallback
Once you're confident in the Postgres setup, you can remove the D1 logic entirely.
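In code, the switch amounts to routing every statement on that variable. A minimal sketch of the idea (`queryPostgres` is a hypothetical helper; this is not the project's exact implementation):

```typescript
// Illustrative routing on PRIMARY_DB, matching the phases above.
const isWrite = (sql: string) =>
  /^\s*(INSERT|UPDATE|DELETE|REPLACE|CREATE|ALTER|DROP)\b/i.test(sql);

async function route(env: Env, sql: string, params: unknown[] = []) {
  if (env.PRIMARY_DB === "d1") {
    const result = await env.DB.prepare(sql).bind(...params).all();
    if (isWrite(sql)) {
      // async mirror: queue the statement for the Postgres consumer
      await env.MIRROR_QUEUE.send({ opId: crypto.randomUUID(), sql, params });
    }
    return result;
  }
  // PRIMARY_DB === "pg": Postgres serves reads and writes (via Hyperdrive),
  // with a synchronous write-through to D1 so it remains a usable fallback.
  const result = await queryPostgres(env, sql, params); // hypothetical helper
  if (isWrite(sql)) await env.DB.prepare(sql).bind(...params).run();
  return result;
}
```

The overall data flow in the default phase: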
┌─────────────────┐ ┌──────────────┐ ┌─────────────────┐
│ Your App │───▶│ Worker │───▶│ D1 Database │
│ (Any Schema) │ │ (Generic) │ │ (Your Tables) │
└─────────────────┘ └──────────────┘ └─────────────────┘
│
▼
┌──────────────┐ ┌─────────────────┐
│ SQL Mirror │───▶│ Postgres │
│ (Queue) │ │ (Mirrored Data) │
└──────────────┘ └─────────────────┘
- AutoMirrorDB: Patches the D1 client to automatically queue writes for mirroring
- Generic DB Router: Routes any SQL operation based on the PRIMARY_DB configuration
- Queue Consumer: Processes queued write operations to Postgres (sketched after this list)
- Schema Discovery: Automatically infers D1 schema and generates Postgres equivalents
- Export System: Streams existing D1 data for migration
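To illustrate the consumer and the operation IDs together, here is a hedged sketch of idempotent replay; the `mirror_ops` table and `pgExec` helper are assumptions made for the example, not the project's actual schema or API:

```typescript
// Illustrative queue consumer: applies mirrored writes to Postgres,
// using the operation ID to make at-least-once delivery safe to replay.
interface MirrorMessage { opId: string; sql: string; params: unknown[] }

export default {
  async queue(batch: MessageBatch<MirrorMessage>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const { opId, sql, params } = msg.body;
      // Claim the opId first; a conflict means this operation was already
      // applied, so a redelivered message is acknowledged and skipped.
      // (In production, the claim and the write belong in one transaction.)
      const claim = await pgExec(env,
        "INSERT INTO mirror_ops (op_id) VALUES ($1) ON CONFLICT DO NOTHING",
        [opId]); // hypothetical helper returning { rowCount }
      if (claim.rowCount === 0) { msg.ack(); continue; }
      await pgExec(env, sql, params);
      msg.ack();
    }
  },
};
```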
Two environment variables control runtime behavior:

- `PRIMARY_DB`: Set to `"d1"` or `"pg"` to determine which database handles reads
- `PG_DSN`: Direct Postgres connection string (can be used instead of Hyperdrive for local dev)
Deploy to Cloudflare with:

npm run deploy
⚠️ Known Issues & Limitations:
- No authentication/authorization on API endpoints
- Limited error handling and retry logic
- No rate limiting or abuse protection
- Basic schema conversion (may need manual adjustment for complex schemas)
- No monitoring/observability built-in
- Connection pooling not optimized for high traffic
Before using in production:
- Add proper authentication to your endpoints (a minimal sketch follows this list)
- Implement comprehensive error handling
- Add monitoring and alerting
- Test thoroughly with your specific use case
- Plan rollback procedures
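For the authentication item, even a shared-secret check in front of every route is a meaningful first step; a minimal sketch, assuming a hypothetical `API_TOKEN` secret created with `wrangler secret put API_TOKEN`:

```typescript
// Sketch: bearer-token gate; API_TOKEN is an assumed secret binding.
function authorize(request: Request, env: Env & { API_TOKEN: string }): Response | null {
  const header = request.headers.get("Authorization") ?? "";
  if (header !== `Bearer ${env.API_TOKEN}`) {
    return new Response("Unauthorized", { status: 401 });
  }
  return null; // authorized: fall through to the normal handler
}
```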
Database not found error:
Verify the database ID in wrangler.toml matches your actual D1 database. List your databases with wrangler d1 list to confirm.
Hyperdrive connection failures: Ensure your Postgres database is accessible from the internet, verify the connection string format, and test the connection through the Cloudflare Dashboard.
Queue not processing messages:
Check that the queue name in wrangler.toml matches your actual queue, and monitor worker logs with wrangler tail for errors.
Data inconsistency between databases: Look for failed queue messages, verify operation IDs are working, and review error logs for transaction failures.
Schema conversion issues: The auto-generated migration script may need manual adjustments for complex schemas. Review the generated SQL before running it.
General debugging steps:

- Check worker logs: `wrangler tail`
- Verify all IDs in `wrangler.toml` are correct
- Test each component individually
- Review the Cloudflare Dashboard for resource status
- Use the `/schema` endpoint to verify schema detection
This is an alpha release - contributions and feedback are very welcome!
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT License