Open source team deployment and release layer for Vibe Kanban.
Vibe Kanban itself already supports team collaboration through a shared remote server. This distribution is called Vibe Kanban Team because it goes beyond the standard "each developer runs their own frontend installation" model and packages a team-ready, multi-user frontend setup on top of the upstream system. This repo bundles the upstream app, the downstream patch stack, the public Helm chart, and the release automation needed to run that shared installation.
This repository provides a deployment and integration layer for Vibe Kanban with:
- Helm Chart: Deploys Vibe Kanban remote server, optional relay server, and ElectricSQL
- Multi-User Frontend Runtime: Adds a shared frontend/workspace model for simultaneous browser-based use
- Linear Patch Stack: One ordered downstream patch series applied to every build
- Environment-Agnostic Images: Build once, deploy anywhere
- External Database: Bring your own PostgreSQL (CloudNativePG, RDS, etc.)
The upstream architecture already allows many frontend installations to connect to one central remote server. Vibe Kanban Team keeps that central shared backend model and adds a different frontend operating model:
- a central installation that developers can open directly in the browser without installing the stack locally
- team-ready frontend workspaces that can be used by multiple developers at the same time
- shared environments that make pair AI engineering and collaborative debugging practical
In other words, this repo is not claiming that upstream Vibe Kanban is not collaborative. It is packaging a stronger shared-workspace mode for teams that want a centrally managed setup.
┌─────────────────────────────────────────────────────────────────┐
│ Kubernetes Cluster │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ │ │ │ │
│ │ Vibe Kanban │───▶│ ElectricSQL │──┐ │
│ │ Remote Server │ │ (Sync Layer) │ │ │
│ │ │ │ │ │ │
│ │ Port: 8081 │ │ Port: 3000 │ │ │
│ └────────┬────────┘ └─────────────────┘ │ │
│ │ │ │
│ ┌────────▼────────┐ │ │
│ │ Ingress │ │ │
│ └────────┬────────┘ │ │
│ │ │ │
└───────────┼───────────────────────────────────┼──────────────────┘
│ │
▼ ▼
Internet ┌──────────────────┐
│ PostgreSQL │
│ (External DB) │
│ CloudNativePG │
│ RDS / etc. │
└──────────────────┘
- Kubernetes cluster (1.24+)
- Helm 3.x
- PostgreSQL 14+ with wal_level=logical (CloudNativePG recommended)
- kubectl configured to access your cluster
- cert-manager installed via the upstream Helm chart (do not use the MicroK8s cert-manager addon)
TLS in this chart relies on cert-manager. Install cert-manager using the upstream Helm chart so you stay on a supported release line.
For MicroK8s users, enable core addons without cert-manager:
microk8s enable dns ingress hostpath-storage community cloudnative-pg
# Intentionally skip: microk8s enable cert-manager

Install cert-manager:
CERT_MANAGER_CHART_VERSION="1.19.4" # update to latest supported patch release
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl create namespace cert-manager --dry-run=client -o yaml | kubectl apply -f -
helm upgrade --install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version "${CERT_MANAGER_CHART_VERSION}" \
--set crds.enabled=true
kubectl -n cert-manager rollout status deploy/cert-manager --timeout=180s
kubectl -n cert-manager rollout status deploy/cert-manager-webhook --timeout=180s
kubectl -n cert-manager rollout status deploy/cert-manager-cainjector --timeout=180s

Create a ClusterIssuer (Cloudflare DNS-01 example; it supports wildcard code-server port proxy hosts):
kubectl -n cert-manager create secret generic cloudflare-dns-api-token \
--from-literal=API_TOKEN='<cloudflare-api-token>'
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cert-manager-global
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cert-manager-global-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-dns-api-token
              key: API_TOKEN
EOF

Cloudflare token minimum permissions: Zone:Read and DNS:Edit.
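Once the issuer is applied, a quick readiness check (resource names match the manifest above):

```bash
kubectl get clusterissuer cert-manager-global
kubectl wait --for=condition=Ready clusterissuer/cert-manager-global --timeout=120s
```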
Install from the published OCI Helm chart:
export CHART_REF="oci://ghcr.io/iamriajul/helm-charts/vibe-kanban-team"
export CHART_VERSION="<version>"

If you prefer GitOps from source, you can still reference the chart in this repository. For normal installs, the OCI chart is the primary path.
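To sanity-check the reference before installing, Helm can inspect the OCI chart directly:

```bash
helm show chart "${CHART_REF}" --version "${CHART_VERSION}"
```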
Your PostgreSQL must have logical replication enabled:
-- CloudNativePG has wal_level=logical by default
-- For other providers, ensure wal_level=logical in postgresql.conf
-- Create the electric_sync role for ElectricSQL
CREATE ROLE electric_sync WITH LOGIN PASSWORD 'your-electric-password' REPLICATION;
GRANT ALL PRIVILEGES ON DATABASE your_database TO electric_sync;

If you use the CNPG manifests in k8s/cnpg/, the electric_sync role is created and granted automatically via the init SQL secret. Keep the ElectricSQL password in sync with the value in k8s/cnpg/02-initdb-secret.yaml.
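A quick way to verify both requirements from any machine that can reach the database (substitute your real connection URL):

```bash
psql 'postgres://user:pass@your-db-host:5432/remote' -c 'SHOW wal_level;'   # should print: logical
psql 'postgres://user:pass@your-db-host:5432/remote' -c '\du electric_sync' # role should list the Replication attribute
```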
kubectl create namespace vibe-kanban-team
# Database connection URLs
kubectl create secret generic vibe-kanban-db \
--namespace vibe-kanban-team \
--from-literal=url='postgres://user:pass@your-db-host:5432/remote' \
--from-literal=electric-url='postgresql://electric_sync:pass@your-db-host:5432/remote?sslmode=disable'
# Application secrets
kubectl create secret generic vibe-kanban-secrets \
--namespace vibe-kanban-team \
--from-literal=jwt-secret="$(openssl rand -base64 32)" \
--from-literal=electric-role-password='your-electric-password'
# OAuth credentials
kubectl create secret generic vibe-kanban-oauth \
--namespace vibe-kanban-team \
--from-literal=github-client-id='your-github-client-id' \
--from-literal=github-client-secret='your-github-client-secret'

If your image registry is private, create a pull secret and reference it in imagePullSecrets:
kubectl create secret docker-registry registry-credentials \
--namespace vibe-kanban-team \
--docker-server='your-registry.example.com' \
--docker-username='your-registry-username' \
--docker-password='your-registry-token' \
--docker-email='your-email@example.com'

curl -fsSL \
  https://raw.githubusercontent.com/iamriajul/vibe-kanban-team/main/helm/vibe-kanban-team/values-example.yaml \
  -o values-production.yaml
# Edit values-production.yaml with your secret names and image repositories.

Set global.domain to the exact frontend hostname users should open, for example vk.example.com.
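As a rough orientation, the top of a trimmed values-production.yaml might look like the sketch below; only global.domain, global.ingressClassName, and imagePullSecrets are documented in this README, so treat the layout as illustrative and check values-example.yaml for the authoritative structure:

```yaml
global:
  domain: vk.example.com          # exact hostname users open in the browser
  ingressClassName: nginx
imagePullSecrets:
  - name: registry-credentials    # only needed for private registries
```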
For a full frontend install with remote, relay, and code-server port proxying, configure two DNS records pointing at your ingress controller:
vk.example.com -> ingress
*.vk.example.com -> ingress
The chart derives service hosts from that domain:
frontend: vk.example.com
remote API: remote.vk.example.com
relay: relay.vk.example.com
code-server: code.vk.example.com
port proxy: <port>-code.vk.example.com
The wildcard is for derived service subdomains and code-server port proxying. Relay uses path-based routing on relay.<domain> and does not need *.relay.<domain>.
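You can confirm both records resolve before installing; any label should match the wildcard:

```bash
dig +short vk.example.com
dig +short 8080-code.vk.example.com   # arbitrary label exercising *.vk.example.com
```

Both should return your ingress controller's address.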
code-server runs with its own auth disabled because ingress auth owns access control. If frontend.codeServerIngress.enabled=true, the chart now requires either:
- frontend.auth.enabled=true with a supported ingress configuration, or
- frontend.codeServerIngress.allowUnauthenticated=true when another layer already protects the ingress
For nginx, auth annotations are derived automatically when global.ingressClassName contains nginx. For Traefik, set frontend.auth.createTraefikMiddleware=true with a Traefik ingress class, or provide frontend.auth.protectedIngressAnnotations.
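Putting those options together, a Traefik-based sketch (the keys are the ones documented above; the surrounding structure is assumed):

```yaml
global:
  ingressClassName: traefik
frontend:
  auth:
    enabled: true
    createTraefikMiddleware: true   # chart-managed auth middleware for the protected hosts
  codeServerIngress:
    enabled: true
```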
The frontend app can be preconfigured through env vars, including Coder workspace injection:
- VIBE_KANBAN_EDITOR_TYPE=CODE_SERVER
- VIBE_KANBAN_CODE_SERVER_URL=https://code.vk.example.com/ (CODE_SERVER_URL also works)
- VIBE_KANBAN_BYPASS_ONBOARDING=true
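In chart values these would likely land in the frontend's environment; a sketch, assuming the frontend section accepts a Kubernetes-style env list like the remote and electric sections do:

```yaml
frontend:
  env:
    - name: VIBE_KANBAN_EDITOR_TYPE
      value: CODE_SERVER
    - name: VIBE_KANBAN_CODE_SERVER_URL
      value: https://code.vk.example.com/
    - name: VIBE_KANBAN_BYPASS_ONBOARDING
      value: "true"
```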
helm upgrade --install vibe-kanban "${CHART_REF}" \
--version "${CHART_VERSION}" \
--namespace vibe-kanban-team \
--create-namespace \
-f values-production.yaml

If you want the chart's complete raw defaults for reference, you can still inspect them with:

helm show values "${CHART_REF}" --version "${CHART_VERSION}"

If you want to pin a specific image tag, use:

scripts/deploy.sh <commit-sha>

This chart follows the same pattern as the Coder Helm chart: reference your own Kubernetes secrets via secretKeyRef.
env:
  # Database connection (REQUIRED)
  - name: SERVER_DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: vibe-kanban-db
        key: url
  # JWT secret (REQUIRED)
  - name: VIBEKANBAN_REMOTE_JWT_SECRET
    valueFrom:
      secretKeyRef:
        name: vibe-kanban-secrets
        key: jwt-secret
  # ElectricSQL role password (REQUIRED)
  - name: ELECTRIC_ROLE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: vibe-kanban-secrets
        key: electric-role-password
  # GitHub OAuth (REQUIRED - at least one OAuth provider)
  - name: GITHUB_OAUTH_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: vibe-kanban-oauth
        key: github-client-id
  - name: GITHUB_OAUTH_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: vibe-kanban-oauth
        key: github-client-secret

# ElectricSQL database connection
electric:
  enabled: true
  env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: vibe-kanban-db
          key: electric-url

| Variable | Description |
|---|---|
| SERVER_DATABASE_URL | PostgreSQL connection URL |
| VIBEKANBAN_REMOTE_JWT_SECRET | JWT secret (generate with openssl rand -base64 32) |
| ELECTRIC_ROLE_PASSWORD | Password for the electric_sync database role |
| GITHUB_OAUTH_CLIENT_ID | GitHub OAuth client ID |
| GITHUB_OAUTH_CLIENT_SECRET | GitHub OAuth client secret |
If you're using the CNPG manifests, set ELECTRIC_ROLE_PASSWORD to the same value as CHANGEME_ELECTRIC_PASSWORD in k8s/cnpg/02-initdb-secret.yaml.
To support tunnel/relay features, enable the relay section in values and configure:
- relay.enabled: true
- relay.env with SERVER_DATABASE_URL and VIBEKANBAN_REMOTE_JWT_SECRET (same DB/JWT as remote)
- relay.ingress or global.domain so the chart exposes one relay host (for example relay.example.com)
- keep relay.proxyUnderRemoteIngress.enabled: true so relay endpoints are available under the main remote API host (/v1/relay and /relay/h) for reusable frontend images
scripts/deploy.sh now sets both image.tag and relay.image.tag to the requested release tag.
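A minimal relay values sketch combining those options (relay.enabled, relay.env, and relay.proxyUnderRemoteIngress.enabled come from this README; the env entries reuse the secrets created earlier):

```yaml
relay:
  enabled: true
  env:
    - name: SERVER_DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: vibe-kanban-db
          key: url
    - name: VIBEKANBAN_REMOTE_JWT_SECRET
      valueFrom:
        secretKeyRef:
          name: vibe-kanban-secrets
          key: jwt-secret
  proxyUnderRemoteIngress:
    enabled: true
```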
Your PostgreSQL database must have:
- Logical replication enabled: wal_level=logical
  - CloudNativePG: enabled by default
  - Other providers: set it in postgresql.conf
- An ElectricSQL role: a user with the REPLICATION privilege:

  CREATE ROLE electric_sync WITH LOGIN PASSWORD 'xxx' REPLICATION;
If you use the CNPG manifests, the role is created by the init SQL secret.
1. Go to GitHub → Settings → Developer settings → OAuth Apps
2. Create a new OAuth App:
   - Homepage URL: https://your-domain.com
   - Callback URL: https://your-domain.com/v1/oauth/callback/github
3. Copy the Client ID and Client Secret
1. Go to Google Cloud Console → APIs & Services → Credentials
2. Create an OAuth 2.0 Client ID:
   - Application type: Web application
   - Authorized redirect URI: https://your-domain.com/v1/oauth/callback/google
3. Copy the Client ID and Client Secret
GitHub Actions now handle the checked-in release flows:
- remote-v* tags build Remote and Relay images, push them to GHCR, optionally mirror to Docker Hub, and publish the Helm chart to GHCR as an OCI artifact
- v* tags publish the vibe-kanban-team npm package through scripts/publish-npm.sh
For stable releases, the image workflow also updates the latest tag. Prereleases publish only their version tag.
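Cutting a stable image-and-chart release is therefore just a tag push (the version shown is only an example):

```bash
git tag remote-v1.4.0
git push origin remote-v1.4.0
```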
The upstream project is still Vibe Kanban. Vibe Kanban Team names this public distribution layer:
- it keeps the upstream shared remote server model, but adds a team-ready frontend/workspace deployment shape
- it lets developers use the platform without installing the full stack on their own machines
- it lets developers share live workspaces for pair AI engineering, review, and debugging
- it gives teams a reproducible central environment instead of many drifting local installs
- it shortens onboarding because a new developer can start from a browser session instead of a full local setup
- it makes support, upgrades, and operational policy easier because the environment is managed centrally
- it publishes the npm entrypoint, container images, and Helm chart under one public name
The goal is to make the “run Vibe Kanban for a team with shared workspaces” path obvious and easy to adopt.
We track upstream releases from the Vibe Kanban GitHub repo and bump the shared vibe-kanban/ submodule when we want a new feature or fix. Keep it simple:
- Watch for new upstream releases (GitHub Releases/notifications).
- Decide the version to adopt (e.g. v1.4.0).
- Update the shared submodule and patch stack.
- Push the release tag that matches the artifact flow you want.
- Deploy by pinning the image tag.
We keep downstream changes as a small patch stack in patches/ (similar to quilt). The local scripts and GitHub Actions both apply this same stack before building.
cd vibe-kanban
git checkout <upstream-tag>
# Make your change(s)
git add .
git commit -m "fix: <summary>"
git format-patch -1 -o ../patches
cd ..
ls patches

Rename the patch into the next NNNN-...patch slot and add it to patches/series in the order you want it applied, then re-apply the stack:

scripts/apply-patches.sh

Keep the patch stack minimal and prefer upstreaming when possible.
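For example, landing a new patch might look like this (file names and slot numbers are hypothetical):

```bash
# git format-patch wrote something like 0001-fix-summary.patch;
# rename it into the next free slot so the series stays ordered
mv patches/0001-fix-summary.patch patches/0007-fix-summary.patch
echo "0007-fix-summary.patch" >> patches/series
scripts/apply-patches.sh   # confirm the full stack still applies cleanly
```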
# 1) Update the shared submodule to a tag or commit
scripts/update-vibe-kanban.sh v1.4.0
# 2) Review and commit
git status
git commit -m "chore: bump vibe-kanban to v1.4.0"
git push

After merge, CI builds and pushes a new image tagged with the commit SHA.
# 3) Deploy the new build by pinning the image tag
scripts/deploy.sh <commit-sha>

If you want a versioned release tag for this repo (optional), create a tag like v1.4.0 and push it. CI will also publish a release image tag and a chart package.
kubectl get pods -n vibe-kanban-team
kubectl describe pod <pod-name> -n vibe-kanban-team

# Vibe Kanban server
kubectl logs -n vibe-kanban-team -l app.kubernetes.io/name=vibe-kanban-team -f

# ElectricSQL
kubectl logs -n vibe-kanban-team -l app.kubernetes.io/component=electric -f

kubectl port-forward -n vibe-kanban-team svc/<release>-electric 3000:3000
curl http://localhost:3000/v1/health

This deployment configuration is provided under the MIT License. Vibe Kanban itself is licensed under the Business Source License (BSL).