kamal-previews keeps the secret surface area as small as possible. The only secrets the GitHub Actions workflow itself needs are:
- `SSH_PRIVATE_KEY` — registered with the deploy host. Used both for Kamal's SSH connection and for kamal-previews' database scripts that SSH to the host to run clone/drop operations.
- An admin database URL — only for `postgres`/`mysql`. By default the action sources your `base-secrets-file` and reads `DATABASE_URL`, meaning no extra secret is needed if your staging app role has `CREATEDB`. Override with `DATABASE_ADMIN_URL` only when you need a different role.
- `RAILS_MASTER_KEY` — only when your `base-secrets-file` shells out to `bin/rails credentials:fetch` (Rails encrypted credentials). The runner needs the master key to evaluate that file headlessly.
Everything else flows through Kamal's own `kamal secrets` mechanism, which is fully under your control.
Each entry in `databases:` is named after the env var the per-PR app should see. The action picks behavior based on the suffix:
```yaml
databases: |
  DATABASE_URL=myapp_staging:myapp_{db_slug}
  QUEUE_DATABASE_URL=myapp_staging_queue:myapp_queue_{db_slug}
```

For each entry, the action:
- Clones the source DB → per-PR target DB
- Sources `base-secrets-file` to read the original URL (e.g. `$DATABASE_URL`)
- Rewrites the URL's database-name path segment to the per-PR target — scheme, userinfo, host, port, and query string are preserved byte-for-byte
- Injects the rewritten URL via `env.secret` at deploy time, by appending an override line to the generated `.kamal/secrets.<dest>` file
Your `database.yml`, Rails credentials, and existing `.kamal/secrets.staging` stay unchanged. The container sees `DATABASE_URL=postgresql://user:pass@host:5432/myapp_<slug>?sslmode=...` — a fully-formed URL pointing at the per-PR clone, with the same credentials your staging app uses.
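To make the rewrite concrete, here's a sketch of that path-segment replacement (illustrative only; `rewrite_db_url` is a hypothetical helper, not the action's actual implementation):

```sh
# Swap the database-name path segment; scheme, userinfo, host, port, and
# query string pass through untouched.
rewrite_db_url() {
  local url="$1" new_db="$2"
  printf '%s\n' "$url" | sed -E "s|(^[a-z0-9+]+://[^/]*/)[^?]*|\1${new_db}|"
}

rewrite_db_url "postgresql://user:pass@host:5432/myapp_staging?sslmode=require" myapp_pr42
# => postgresql://user:pass@host:5432/myapp_pr42?sslmode=require
```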
Entries with a `_NAME` suffix expose just the resolved database name instead of a full URL:

```yaml
databases: |
  DATABASE_NAME=myapp_staging:myapp_{db_slug}
```

The resolved per-PR DB name (e.g. `myapp_<slug>`) is written to `env.clear[DATABASE_NAME]` and also exported to `$GITHUB_ENV`. Use this when your app reads `ENV["DATABASE_NAME"]` directly in `database.yml`, or when you want to build URLs yourself in your secrets file. Examples:
`database.yml` reads the name directly — no secrets-file changes:
```yaml
staging:
  primary:
    database: <%= ENV["DATABASE_NAME"] %>
    # …host/user/password from URL or other env
```

Build URLs in your secrets file — `$DATABASE_NAME` lives in the runner env:

```sh
DATABASE_URL=postgres://${PG_USER}:${PG_PASSWORD}@${PG_HOST}/${DATABASE_NAME}?sslmode=require
```

Look up secrets keyed by per-PR DB name (1Password / Doppler / etc.):
```sh
SECRETS=$(kamal secrets fetch --adapter doppler myapp "preview-${DATABASE_NAME}")
DATABASE_URL=$(kamal secrets extract DATABASE_URL "$SECRETS")
```

It helps to separate "secrets the workflow needs" from "secrets the deployed app needs":
| Layer | What | Where to put it |
|---|---|---|
| Workflow | SSH key, DB admin creds | GitHub Actions repository secrets |
| Application | `RAILS_MASTER_KEY`, API keys, DB user creds the app uses to connect | Kamal secrets (sourced by your `.kamal/secrets.staging` file) |
The workflow's SSH key is the only thing that has to live in GitHub
Actions secrets. The application secrets can live anywhere Kamal can fetch
them from — 1Password, AWS Secrets Manager, GCP Secret Manager, Doppler,
Bitwarden, Passbolt, or just a plaintext `.kamal/secrets.staging` file
(don't do that).
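For example, with the GitHub CLI (the key path and secret name here are placeholders for whatever your setup uses):

```sh
# One-time setup: store the deploy key as a repository secret.
gh secret set DEPLOY_SSH_KEY < ~/.ssh/preview_deploy_key
```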
A typical 1Password-backed `.kamal/secrets.staging`:
```sh
# .kamal/secrets.staging
SECRETS=$(kamal secrets fetch \
  --adapter 1password \
  --account yourorg.1password.com \
  --from "VAULT_ID/ITEM_ID" \
  KAMAL_REGISTRY_USERNAME \
  KAMAL_REGISTRY_PASSWORD \
  RAILS_MASTER_KEY \
  DB_PASSWORD \
)
KAMAL_REGISTRY_USERNAME=$(kamal secrets extract KAMAL_REGISTRY_USERNAME "$SECRETS")
KAMAL_REGISTRY_PASSWORD=$(kamal secrets extract KAMAL_REGISTRY_PASSWORD "$SECRETS")
RAILS_MASTER_KEY=$(kamal secrets extract RAILS_MASTER_KEY "$SECRETS")
DB_PASSWORD=$(kamal secrets extract DB_PASSWORD "$SECRETS")
```

The 1Password CLI needs to be authenticated. In CI, the cleanest path is a service account token:
```yaml
# In your repo's .github/workflows/preview.yml. Note that reusable workflows
# called via `uses:` do not inherit env vars set on the calling workflow, so
# pass the token through as a workflow secret.
jobs:
  preview:
    uses: web-ascender/github-actions-kamal-previews/.github/workflows/preview.yml@v1
    with:
      base-secrets-file: .kamal/secrets.staging
      ...
    secrets:
      SSH_PRIVATE_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
```

The 1Password CLI also has to be available before the reusable workflow runs. Either wrap the call in a small calling workflow that installs the CLI first, or, simpler, change `.kamal/secrets.staging` to use `OP_SERVICE_ACCOUNT_TOKEN` directly with `op` CLI calls and install the CLI as part of your Dockerfile.
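For the direct-`op` route, a sketch of what the secrets file shrinks to (vault/item/field names are placeholders):

```sh
# .kamal/secrets.staging
# Authenticated via OP_SERVICE_ACCOUNT_TOKEN in the runner environment.
RAILS_MASTER_KEY=$(op read "op://Staging/myapp/RAILS_MASTER_KEY")
KAMAL_REGISTRY_PASSWORD=$(op read "op://Staging/myapp/KAMAL_REGISTRY_PASSWORD")
```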
The same pattern works with AWS Secrets Manager:

```sh
# .kamal/secrets.staging
SECRETS=$(kamal secrets fetch --adapter aws_secrets_manager \
  --region us-east-1 \
  myapp/staging/web)
RAILS_MASTER_KEY=$(kamal secrets extract RAILS_MASTER_KEY "$SECRETS")
# …
```

The runner needs AWS credentials. Use OIDC (`id-token: write` in `permissions`, plus an `aws-actions/configure-aws-credentials@v4` step in your calling workflow) so you don't have to store long-lived AWS keys.
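A sketch of that step in the calling workflow, assuming you've already created an IAM role trusted for GitHub OIDC (the role ARN is a placeholder):

```yaml
permissions:
  id-token: write   # lets the job request an OIDC token
  contents: read

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/preview-deployer  # placeholder
          aws-region: us-east-1
      # ...subsequent steps (and anything they shell out to) now have AWS creds
```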
Doppler is similar:

```sh
# .kamal/secrets.staging
SECRETS=$(kamal secrets fetch --adapter doppler myapp staging)
RAILS_MASTER_KEY=$(kamal secrets extract RAILS_MASTER_KEY "$SECRETS")
```

`DOPPLER_TOKEN` is read from the runner environment.
The default behavior is to use the same secrets bag as staging for every preview environment. That's almost always what you want — preview apps talk to the same external services as staging.
If you want truly per-PR secrets (e.g. a separate `SECRET_KEY_BASE` so session cookies don't carry across previews):
- Set `env-secret-overrides` in the workflow inputs to add new entries to `env.secret`.
- Reference those entries from your `.kamal/secrets.staging` file with the `KAMAL_DESTINATION` env var, which Kamal sets automatically:

  ```sh
  PREVIEW_SECRET_KEY_BASE=$(kamal secrets fetch --adapter ... \
    "myapp/preview/${KAMAL_DESTINATION}/SECRET_KEY_BASE")
  ```

- Provision the per-PR entries up front, or generate them deterministically from a master seed: `SHA256(seed + slug)`.
A built-in "deterministic per-PR secret derivation" feature is on the roadmap.
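Until then, a sketch of that derivation in your secrets file, assuming a `MASTER_SEED` secret you manage yourself (both names here are placeholders):

```sh
# Deterministic per-PR secret: same seed + same destination => same value,
# so re-deploys of the same PR keep a stable SECRET_KEY_BASE.
SECRET_KEY_BASE=$(printf '%s%s' "$MASTER_SEED" "$KAMAL_DESTINATION" \
  | sha256sum | cut -d' ' -f1)
```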
Worth a moment of thought:
- Preview apps share the same database cluster as staging. A preview with a broken migration could in theory damage shared resources (e.g. drop a table from the `public` schema). The default `pg_dump | psql` flow gives each preview its own isolated database, so this is unlikely in practice — but if you let preview apps connect with cluster-superuser credentials, you've granted them more authority than they need. Use a dedicated per-app role with permissions only on its own database (see the sketch after this list).
- Preview apps share the same secrets as staging. They can call any third-party service staging can. If your staging API keys grant access to production-shaped data, that's the leakage surface. Mitigate by using sandbox API credentials in staging-equivalent secrets bags.
- Preview URLs are public DNS. Anyone who can guess the slug can visit. The cookie-domain footgun aside (see `docs/dns-and-tls.md`), put HTTP basic auth or an SSO gate in front of preview environments if your staging data is sensitive.
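One way to set up such a role in Postgres, sketched below; the role name and password are placeholders, and your grants may differ:

```sh
# Run once as an admin. CREATEDB lets the role create the per-PR clones
# without superuser rights; it can only drop databases it owns.
psql "$DATABASE_ADMIN_URL" <<'SQL'
CREATE ROLE myapp_preview LOGIN PASSWORD 'change-me' CREATEDB;
SQL
```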