This guide demonstrates how the kagenti-operator webhook automatically injects AuthBridge sidecars into your deployments for transparent OAuth 2.0 token exchange.
Note: The webhook is deployed via kagenti/kagenti-operator. See the operator docs for installation.
The operator webhook watches for Deployments with the `kagenti.io/inject: enabled` label and automatically injects AuthBridge sidecars. There are two injection modes, controlled by the `combinedSidecar` feature gate:
**Separate mode (default)** injects these containers:

| Container | Purpose |
|---|---|
| `proxy-init` | Init container that sets up iptables to redirect inbound and outbound traffic |
| `spiffe-helper` | Fetches SPIFFE credentials from SPIRE (only with `kagenti.io/spire: enabled`) |
| `kagenti-client-registration` | Registers the workload with Keycloak (using SPIFFE ID or static client ID) |
| `envoy-proxy` | Intercepts inbound HTTP requests (JWT validation) and outbound requests (HTTP: token exchange; HTTPS: TLS passthrough) |
**Combined mode** (`combinedSidecar: true`) injects:

| Container | Purpose |
|---|---|
| `proxy-init` | Init container that sets up iptables (same as separate mode) |
| `authbridge` | Single sidecar combining Envoy, authbridge, spiffe-helper, and client-registration |
Combined mode reduces per-pod overhead from 3 long-running sidecars to 1, simplifies debugging, and speeds up pod startup. See Enabling Combined Sidecar Mode below.
┌────────────────────────────────────────────────────────────────────┐
│ Agent Pod │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────────────────────┐ │
│ │    agent    │ │spiffe-helper │ │kagenti-client-registration │ │
│ │ (your app) │ │ │ │ │ │
│ └──────┬──────┘ └──────────────┘ └────────────────────────────┘ │
│ │ │
│ │ HTTP Request with Token (aud: agent-spiffe-id) │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ envoy-proxy │ │
│ │ Inbound (port 15124): │ │
│ │ 1. Intercepts incoming traffic (via iptables PREROUTING) │ │
│ │ 2. Validates JWT (signature + issuer via JWKS) │ │
│ │ 3. Returns 401 if invalid, forwards if valid │ │
│ │ Outbound (port 15123): │ │
│ │ 1. Intercepts outbound traffic (via iptables OUTPUT) │ │
│ │ 2. Detects protocol via tls_inspector │ │
│ │ HTTP: Extracts Bearer token, exchanges via Keycloak, │ │
│ │ replaces token in request │ │
│ │ HTTPS: Passes through as-is (TLS passthrough) │ │
│ └─────────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────────┘
│
│ HTTP Request with Exchanged Token
▼
┌─────────────────┐
│ auth-target │
│ (validates aud: │
│ auth-target) │
└─────────────────┘
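The outbound exchange step in the diagram is an OAuth 2.0 token exchange (RFC 8693) against Keycloak's token endpoint. As a rough sketch, the form body such a request carries looks like the following; the exact parameter names the sidecar sends are an assumption here, not taken from the AuthBridge source:

```shell
# Sketch (assumed parameter names) of the RFC 8693 token-exchange form body
# that the outbound proxy would POST to Keycloak's token endpoint,
# together with the agent's client credentials.
build_exchange_body() {
  # $1 = inbound subject token, $2 = target audience
  printf 'grant_type=%s&subject_token=%s&subject_token_type=%s&audience=%s' \
    'urn:ietf:params:oauth:grant-type:token-exchange' \
    "$1" \
    'urn:ietf:params:oauth:token-type:access_token' \
    "$2"
}

build_exchange_body 'inbound.jwt.value' 'auth-target'
```

Keycloak responds with a new access token whose `aud` claim is the requested audience (`auth-target`), which the proxy then substitutes into the forwarded request.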
- Kubernetes cluster with the kagenti-operator installed
- Keycloak deployed in the `keycloak` namespace
- SPIRE deployed (optional, for SPIFFE-based identity)
- AuthBridge images available from GitHub Container Registry:
  - `ghcr.io/kagenti/kagenti-extensions/proxy-init:latest`
  - `ghcr.io/kagenti/kagenti-extensions/authbridge-unified:latest`
  - `ghcr.io/kagenti/kagenti-extensions/demo-app:latest`
  - `ghcr.io/kagenti/kagenti-extensions/client-registration:latest`
Deploy the operator webhook following the kagenti-operator installation docs.
Then create the namespace and apply the required ConfigMaps:
```shell
kubectl create namespace team1
kubectl apply -f k8s/configmaps-webhook.yaml -n team1
```
Note for custom deployments: `TOKEN_URL` and `ISSUER` are auto-derived from `KEYCLOAK_URL` + `KEYCLOAK_REALM`. Set `ISSUER` explicitly only when the internal `KEYCLOAK_URL` differs from the frontend URL that appears in token `iss` claims (split-horizon DNS). Audience validation is automatic using the agent's `CLIENT_ID` from `/shared/client-id.txt`.
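The auto-derivation follows Keycloak's standard endpoint layout (the same token URL used later in this guide). A quick shell illustration, using the demo's default values:

```shell
# How TOKEN_URL and ISSUER are derived from KEYCLOAK_URL + KEYCLOAK_REALM
# (standard Keycloak endpoint layout; values below are the demo's defaults)
KEYCLOAK_URL="http://keycloak-service.keycloak.svc:8080"
KEYCLOAK_REALM="kagenti"

ISSUER="${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}"
TOKEN_URL="${ISSUER}/protocol/openid-connect/token"

echo "$ISSUER"
echo "$TOKEN_URL"
```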
The ConfigMaps include:

- `authbridge-config` - Unified Keycloak configuration for both client-registration and envoy-proxy:
  - `KEYCLOAK_URL` - Keycloak server URL (used by client-registration and to derive TOKEN_URL/ISSUER)
  - `KEYCLOAK_REALM` - Keycloak realm name
  - `TOKEN_URL` - Keycloak token endpoint (optional, auto-derived from KEYCLOAK_URL + KEYCLOAK_REALM)
  - `ISSUER` - Expected JWT issuer for inbound validation (optional, auto-derived or set explicitly for split-horizon DNS)
  - Audience validation uses `CLIENT_ID` from `/shared/client-id.txt` automatically (no configuration needed)
  - Target audience and scopes for outbound token exchange are configured per-route in the `authproxy-routes` ConfigMap
- `spiffe-helper-config` - SPIFFE helper configuration (for SPIRE mode)
- `envoy-config` - Envoy proxy configuration
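For orientation, a minimal `authbridge-config` might look like the sketch below. The key names come from the list above; the values are illustrative, and the `k8s/configmaps-webhook.yaml` shipped with the demo is the source of truth:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: authbridge-config
  namespace: team1
data:
  KEYCLOAK_URL: "http://keycloak-service.keycloak.svc:8080"
  KEYCLOAK_REALM: "kagenti"
  # TOKEN_URL and ISSUER are optional (auto-derived).
  # Set ISSUER explicitly only for split-horizon DNS, e.g.:
  # ISSUER: "http://keycloak.localtest.me:8080/realms/kagenti"
```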
| Label | Value | Description |
|---|---|---|
| `kagenti.io/type` | `agent` | **Required**: Identifies workload as an agent |
| `kagenti.io/inject` | `enabled` | Enable AuthBridge sidecar injection |
| `kagenti.io/inject` | `disabled` | Disable injection (for target services) |
| `kagenti.io/spire` | `enabled` | Enable SPIFFE-based identity with SPIRE |
| `kagenti.io/spire` | `disabled` | Use static client ID (no SPIRE) |
Note: All labels must be on the Pod template (`spec.template.metadata.labels`), not the Deployment metadata.
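For example, a Deployment whose pod template carries the injection labels might look like this sketch (only the `kagenti.io/*` labels are significant to the webhook; the names and image are taken from this demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agent
  namespace: team1
spec:
  selector:
    matchLabels:
      app: agent
  template:
    metadata:
      labels:
        app: agent
        kagenti.io/type: agent      # required
        kagenti.io/inject: enabled  # opt in to sidecar injection
        kagenti.io/spire: enabled   # optional: SPIFFE-based identity
    spec:
      serviceAccountName: agent
      containers:
        - name: agent
          image: ghcr.io/kagenti/kagenti-extensions/demo-app:latest
```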
To use the combined authbridge container instead of separate sidecars, enable the combinedSidecar feature gate:
See the kagenti-operator docs for Helm-based configuration.
```shell
# Edit the feature gates ConfigMap directly
kubectl edit configmap kagenti-webhook-feature-gates -n kagenti-system
```

Add `combinedSidecar: true` to the `feature-gates.yaml` data key:
```yaml
data:
  feature-gates.yaml: |
    globalEnabled: true
    envoyProxy: true
    spiffeHelper: true
    clientRegistration: true
    combinedSidecar: true
```

The webhook watches this ConfigMap for changes and reloads automatically. New pods created after the change will use combined mode. Existing pods are not affected — delete and recreate them to switch.
| Aspect | Separate mode | Combined mode |
|---|---|---|
| Sidecar containers | 3 (`envoy-proxy`, `spiffe-helper`, `kagenti-client-registration`) | 1 (`authbridge`) |
| Init containers | 1 (`proxy-init`) | 1 (`proxy-init`) |
| Container to read credentials | `-c envoy-proxy` | `-c authbridge` |
| Container for Envoy logs | `-c envoy-proxy` | `-c authbridge` |
| Per-sidecar opt-out labels | Each sidecar can be independently disabled | `spiffeHelper` and `clientRegistration` are passed as flags to the entrypoint; `envoy-proxy` disabled = no combined container |
| Images | `authbridge-unified` + `spiffe-helper` + `client-registration` | `authbridge` (single image) |
When `combinedSidecar: true`, the per-sidecar feature gates and workload labels still work:

- `spiffeHelper: false` or `kagenti.io/spiffe-helper-inject: "false"`: The combined container starts with `SPIRE_ENABLED=false` — spiffe-helper is not launched, and a static client ID is used instead.
- `clientRegistration: false`: The combined container starts with `CLIENT_REGISTRATION_ENABLED=false` — client registration is skipped.
- Default (label not `true`): operator-managed registration; the combined authbridge uses `CLIENT_REGISTRATION_ENABLED=false` unless `kagenti.io/client-registration-inject: "true"` opts in to the legacy registration slice.
- `envoyProxy: false` or `kagenti.io/envoy-proxy-inject: "false"`: No combined container is injected at all (the proxy is the core component).
Then continue with:
- Step 1: Setup Keycloak - Configure Keycloak clients and scopes
- Step 2: Deploy Auth Target and Agent - Deploy the demo workloads
- Step 3: Test Token Exchange - Verify the flow works
Run the Keycloak setup script to configure the realm, clients, and scopes:
```shell
cd authbridge

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

cd demos/webhook

# Run setup for webhook deployment (default: team1 namespace, agent service account)
python setup_keycloak.py
```

Or specify a custom namespace/service account:

```shell
python setup_keycloak.py --namespace myapp --service-account mysa
```

This creates:

- `auth-target` client (target audience for token exchange)
- `agent-<namespace>-<sa>-aud` scope (adds the agent's SPIFFE ID to the token audience)
- `auth-target-aud` scope (adds "auth-target" to exchanged tokens)
- `alice` demo user (for testing subject preservation)
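The scopes above control what lands in a token's `aud` claim. When eyeballing tokens during the tests later, a small payload decoder is handy; this is plain shell plus `base64`, not part of the kagenti tooling, and it does not verify signatures:

```shell
# Decode a JWT's payload (middle segment, base64url) for inspection only.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # re-pad to a multiple of 4 for base64 -d
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Demo with a toy unsigned token whose payload is {"aud":"auth-target"}
TOY="eyJhbGciOiJub25lIn0.$(printf '{"aud":"auth-target"}' | base64 | tr '+/' '-_' | tr -d '=').sig"
jwt_payload "$TOY"
```

In the real tests, `jwt_payload "$TOKEN"` shows the agent's audience before exchange; the exchanged token (visible in auth-target's logs) carries `aud: auth-target`.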
Deploy the target service and agent workload:
```shell
# Deploy auth-target (validates exchanged tokens)
# Note: auth-target has kagenti.io/inject: disabled to prevent sidecar injection
kubectl apply -f k8s/auth-target-deployment-webhook.yaml

# Deploy agent - choose ONE of the following:

# Option A: With SPIFFE (requires SPIRE)
kubectl apply -f k8s/agent-deployment-webhook.yaml

# Option B: Without SPIFFE (uses static client ID)
kubectl apply -f k8s/agent-deployment-webhook-no-spiffe.yaml

# Wait for the pods to be ready
kubectl wait --for=condition=available --timeout=180s deployment/auth-target -n team1
kubectl wait --for=condition=available --timeout=180s deployment/agent -n team1
```

Verify the injected containers:

```shell
kubectl get pod -n team1 -l app=agent -o jsonpath='{.items[0].spec.containers[*].name}'
# Expected (separate mode, with SPIFFE): agent spiffe-helper kagenti-client-registration envoy-proxy
# Expected (separate mode, without SPIFFE): agent kagenti-client-registration envoy-proxy
# Expected (combined mode): agent authbridge
```

These tests verify both inbound JWT validation and outbound token exchange end-to-end. By sending requests from outside the agent pod, each request exercises the full pipeline:
- Inbound: Envoy intercepts the incoming request, ext-proc validates the JWT (signature + issuer)
- Outbound: auth-proxy forwards to auth-target, Envoy intercepts the outgoing request, ext-proc exchanges the token
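Since the sidecar container name differs by mode, a tiny helper can pick the right `-c` flag from the container list printed by the jsonpath command above. This is plain shell over that output, not a kagenti tool:

```shell
# Pick the sidecar container that holds the /shared credentials,
# given the pod's space-separated container list.
pick_sidecar() {
  case " $1 " in
    *" authbridge "*) echo authbridge ;;   # combined mode
    *) echo envoy-proxy ;;                 # separate mode
  esac
}

pick_sidecar "agent authbridge"
pick_sidecar "agent spiffe-helper kagenti-client-registration envoy-proxy"
```

The commands below use `-c envoy-proxy` (separate mode); substitute `-c authbridge` in combined mode.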
```shell
# Start a test client pod (sends requests from outside the agent pod)
kubectl run test-client --image=nicolaka/netshoot -n team1 --restart=Never -- sleep 3600
kubectl wait --for=condition=ready pod/test-client -n team1 --timeout=30s

# Get the agent's client credentials from the sidecar container with the shared volume.
# Use -c authbridge in combined mode, or -c envoy-proxy in separate mode.
CLIENT_ID=$(kubectl exec deployment/agent -n team1 -c envoy-proxy -- cat /shared/client-id.txt)
CLIENT_SECRET=$(kubectl exec deployment/agent -n team1 -c envoy-proxy -- cat /shared/client-secret.txt)
echo "Client ID: $CLIENT_ID"

# Get a service account token (using test-client which has curl)
TOKEN=$(kubectl exec test-client -n team1 -- curl -s -X POST \
  "http://keycloak-service.keycloak.svc:8080/realms/kagenti/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" | jq -r '.access_token')

# Get a user token for alice (for subject preservation test)
USER_TOKEN=$(kubectl exec test-client -n team1 -- curl -s -X POST \
  "http://keycloak-service.keycloak.svc:8080/realms/kagenti/protocol/openid-connect/token" \
  -d "grant_type=password" \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" \
  -d "username=alice" \
  -d "password=alice123" | jq -r '.access_token')
```

**5a. No token (expect 401):**

```shell
kubectl exec test-client -n team1 -- curl -s http://agent-service:8080/test
# Expected: {"error":"unauthorized","message":"missing Authorization header"}
```

**5b. Invalid token (expect 401):**

```shell
kubectl exec test-client -n team1 -- curl -s -H "Authorization: Bearer invalid-token" http://agent-service:8080/test
# Expected: {"error":"unauthorized","message":"token validation failed: ..."}
```

**5c. Service account token (expect authorized).** Inbound validation passes, and outbound token exchange converts `aud: <agent SPIFFE ID>` → `aud: auth-target`:

```shell
kubectl exec test-client -n team1 -- curl -s -H "Authorization: Bearer $TOKEN" http://agent-service:8080/test
# Expected: "authorized"
```

**5d. User token (expect authorized).** Same as 5c, but using alice's user token. The `sub` and `preferred_username` claims are preserved through token exchange:

```shell
kubectl exec test-client -n team1 -- curl -s -H "Authorization: Bearer $USER_TOKEN" http://agent-service:8080/test
# Expected: "authorized"
```

Clean up the test client:

```shell
kubectl delete pod test-client -n team1 --ignore-not-found
```

Run all tests as a single script:
```shell
kubectl run test-client --image=nicolaka/netshoot -n team1 --restart=Never -- sleep 3600 2>/dev/null
kubectl wait --for=condition=ready pod/test-client -n team1 --timeout=30s

CLIENT_ID=$(kubectl exec deployment/agent -n team1 -c envoy-proxy -- cat /shared/client-id.txt)
CLIENT_SECRET=$(kubectl exec deployment/agent -n team1 -c envoy-proxy -- cat /shared/client-secret.txt)

TOKEN=$(kubectl exec test-client -n team1 -- curl -s -X POST \
  "http://keycloak-service.keycloak.svc:8080/realms/kagenti/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" -d "client_id=$CLIENT_ID" -d "client_secret=$CLIENT_SECRET" | jq -r '.access_token')

USER_TOKEN=$(kubectl exec test-client -n team1 -- curl -s -X POST \
  "http://keycloak-service.keycloak.svc:8080/realms/kagenti/protocol/openid-connect/token" \
  -d "grant_type=password" -d "client_id=$CLIENT_ID" -d "client_secret=$CLIENT_SECRET" \
  -d "username=alice" -d "password=alice123" | jq -r '.access_token')

echo "=== 5a. No Token (expect 401) ==="
kubectl exec test-client -n team1 -- curl -s http://agent-service:8080/test
echo ""

echo "=== 5b. Invalid Token (expect 401) ==="
kubectl exec test-client -n team1 -- curl -s -H "Authorization: Bearer invalid-token" http://agent-service:8080/test
echo ""

echo "=== 5c. Service Account Token (expect authorized) ==="
kubectl exec test-client -n team1 -- curl -s -H "Authorization: Bearer $TOKEN" http://agent-service:8080/test
echo ""

echo "=== 5d. User Token - alice (expect authorized) ==="
kubectl exec test-client -n team1 -- curl -s -H "Authorization: Bearer $USER_TOKEN" http://agent-service:8080/test
echo ""

kubectl delete pod test-client -n team1 --ignore-not-found
```

Check pod status and events:

```shell
kubectl get pods -n team1
kubectl describe pod -l app=agent -n team1
```

Separate mode (default):
```shell
kubectl logs deployment/agent -n team1 -c kagenti-client-registration
kubectl logs deployment/agent -n team1 -c envoy-proxy | grep -E "(Token Exchange|error)"
kubectl logs deployment/agent -n team1 -c spiffe-helper
```

Combined mode (`combinedSidecar: true`):

```shell
# All sidecar logs are in one container
kubectl logs deployment/agent -n team1 -c authbridge

# Filter by component
kubectl logs deployment/agent -n team1 -c authbridge | grep "\[AuthBridge\]"
kubectl logs deployment/agent -n team1 -c authbridge | grep "Token Exchange"
```
- **"Requested audience not available: auth-target"**
  - Ensure the route entry in `authproxy-routes` includes `auth-target-aud` in `token_scopes`
  - Run `setup_keycloak.py` again to create the required scopes

- **ConfigMap not found errors**
  - Apply `k8s/configmaps-webhook.yaml` to the target namespace

- **Image pull errors**
  - Images are automatically pulled from `ghcr.io/kagenti/kagenti-extensions/`
  - If you need to build locally for development:

    ```shell
    cd authbridge/authproxy
    make build
    # Load into Kind cluster
    kind load docker-image --name <cluster> localhost/proxy-init:latest
    kind load docker-image --name <cluster> localhost/authbridge-unified:latest
    ```

  - See the kagenti-operator docs for image configuration

- **SPIFFE credentials not ready**
  - Ensure SPIRE is deployed and the workload is registered
  - Check spiffe-helper logs for connection issues
To remove all resources created during this demo:
```shell
# Delete agent and auth-target deployments
kubectl delete deployment agent -n team1
kubectl delete deployment auth-target -n team1
kubectl delete service auth-target-service -n team1
kubectl delete serviceaccount agent -n team1
```

Delete the ConfigMaps:

```shell
kubectl delete configmap authbridge-config -n team1
kubectl delete configmap envoy-config -n team1
kubectl delete configmap spiffe-helper-config -n team1
```

If you want to clean up Keycloak clients and scopes:
```shell
# Get admin token
ADMIN_TOKEN=$(curl -s http://keycloak.localtest.me:8080/realms/master/protocol/openid-connect/token \
  -d "grant_type=password" \
  -d "client_id=admin-cli" \
  -d "username=admin" \
  -d "password=admin" | jq -r ".access_token")

# Delete the dynamically registered agent client
CLIENT_ID="spiffe://localtest.me/ns/team1/sa/agent"
INTERNAL_ID=$(curl -s -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://keycloak.localtest.me:8080/admin/realms/kagenti/clients?clientId=$CLIENT_ID" | jq -r ".[0].id")
curl -s -X DELETE -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://keycloak.localtest.me:8080/admin/realms/kagenti/clients/$INTERNAL_ID"

# Delete auth-target client
AUTH_TARGET_ID=$(curl -s -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://keycloak.localtest.me:8080/admin/realms/kagenti/clients?clientId=auth-target" | jq -r ".[0].id")
curl -s -X DELETE -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://keycloak.localtest.me:8080/admin/realms/kagenti/clients/$AUTH_TARGET_ID"

# Delete demo user alice
ALICE_ID=$(curl -s -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://keycloak.localtest.me:8080/admin/realms/kagenti/users?username=alice" | jq -r ".[0].id")
curl -s -X DELETE -H "Authorization: Bearer $ADMIN_TOKEN" \
  "http://keycloak.localtest.me:8080/admin/realms/kagenti/users/$ALICE_ID"

echo "Keycloak resources cleaned up"
```

If you created a dedicated namespace for this demo:
```shell
# This will delete everything in the namespace
kubectl delete namespace team1
```

To remove the webhook, see the kagenti-operator uninstall instructions.