Deities is a Go application that monitors Docker registries for image updates and automatically triggers rollouts of Kubernetes deployments when new images are pushed. It uses image digests (not tags) to detect updates, ensuring accurate change detection even when tags are reused.
- 🔍 Digest-based monitoring - Uses image digests instead of tags for accurate update detection
- 🔄 Automatic rollouts - Triggers Kubernetes deployment rollouts when new images are detected
- 🐳 Multi-registry support - Works with Docker Hub, GCR, ECR, and other Docker registries
- 🔐 Authentication support - Supports private registries with username/password authentication
- ⏱️ Configurable intervals - Set custom check intervals for monitoring
- 📊 Colorful logging - Beautiful, structured logging with pterm integration
- 🎨 Pretty output - Colorful ASCII logo and well-formatted logs
- 🏗️ Clean architecture - Built with Uber's fx dependency injection framework
- 🔧 Environment variables - Override configuration via environment variables
- Go 1.25 or later
- Access to Kubernetes cluster (in-cluster or via kubeconfig)
- Deployments must have `imagePullPolicy: Always`
```shell
git clone https://github.com/parham/deities.git
cd deities
go mod download
go build -o deities .
```
Create a `config.toml` file with your repositories and deployments:
```toml
# Logger configuration
[logger]
level = "info" # Options: debug, info, warn, error

# Kubernetes configuration
[k8s]
kubeconfig = "$HOME/.kube/config" # Path to kubeconfig (empty for in-cluster config)

# Controller configuration
[controller]
check_interval = "5m" # How often to check for image updates

# Registries configuration (define registry addresses and authentication)
[[controller.registries]]
name = "https://registry-1.docker.io" # Docker Hub

[[controller.registries]]
name = "https://gcr.io"

[controller.registries.auth]
username = "_json_key"
password = "${GCR_JSON_KEY}"

# Images to monitor (each image references a registry)
[[controller.images]]
name = "nginx"
registry = "https://registry-1.docker.io"
tag = "latest"

[[controller.images]]
name = "nginx"
registry = "https://registry-1.docker.io"
tag = "stable"

[[controller.images]]
name = "myorg/myapp"
registry = "https://registry-1.docker.io"
tag = "stable"

# Deployments to manage
[[controller.deployments]]
name = "nginx-deployment"
namespace = "default"
container = "nginx"
image = "nginx"

[[controller.deployments]]
name = "myapp-deployment"
namespace = "production"
container = "app"
image = "myorg/myapp"
```
- `level`: Log level (options: "debug", "info", "warn", "error")
- `kubeconfig`: Path to the kubeconfig file (leave empty for in-cluster config)
- `check_interval`: How often to check for updates (e.g., "5m", "1h", "30s")

Registry options:

- `name`: Registry address (e.g., "https://registry-1.docker.io" for Docker Hub)
- `auth`: Optional authentication credentials
  - `username`: Registry username
  - `password`: Registry password

Image options:

- `name`: Image name (for Docker Hub, omit the "library/" prefix for official images)
- `registry`: Reference to a registry defined in `[[controller.registries]]`
- `tag`: Image tag to monitor (e.g., "latest", "stable")

Deployment options:

- `name`: Deployment name in Kubernetes
- `namespace`: Kubernetes namespace
- `container`: Container name within the deployment
- `image`: Image prefix to match against monitored images
Configuration can also be set via environment variables with the `deities_` prefix. Use double underscores (`__`) to represent nested fields:
```shell
# Examples:
export deities_logger__level=debug
export deities_k8s__kubeconfig=/path/to/kubeconfig
export deities_controller__check_interval=10m
```
Environment variables override values from `config.toml`.
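The prefix-and-underscore convention can be sketched in a few lines of Go (a minimal illustration of the mapping, not the application's actual configuration loader):

```go
package main

import (
	"fmt"
	"strings"
)

// envToPath converts an environment variable name such as
// "deities_logger__level" into the nested config path "logger.level".
func envToPath(env string) string {
	// Strip the "deities_" prefix.
	key := strings.TrimPrefix(env, "deities_")
	// Double underscores separate nesting levels.
	return strings.ReplaceAll(key, "__", ".")
}

func main() {
	fmt.Println(envToPath("deities_logger__level"))              // logger.level
	fmt.Println(envToPath("deities_controller__check_interval")) // controller.check_interval
}
```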
```shell
./deities
```
Deploy Deities using the provided Helm chart:
```shell
helm install deities ./charts/deities -n deities-system --create-namespace
```
The chart is fully customizable via `values.yaml`. Key configuration options:
RBAC Configuration - Choose between cluster-wide or namespace-scoped permissions:
```yaml
rbac:
  create: true
  clusterWide: true # Set to false for namespace-only access
```
- `clusterWide: true`: Creates a ClusterRole/ClusterRoleBinding (manages deployments across all namespaces)
- `clusterWide: false`: Creates a Role/RoleBinding (manages deployments only in the release namespace)
Application Configuration - The `config` section is automatically converted to TOML:
```yaml
config:
  logger:
    level: info
  k8s:
    kubeconfig: "" # Empty for in-cluster config
  controller:
    check_interval: "5m"
    registries:
      - name: "https://registry-1.docker.io"
      - name: "https://gcr.io"
        auth:
          username: "_json_key"
          password: "YOUR_GCR_JSON_KEY"
    images:
      - name: "nginx"
        registry: "https://registry-1.docker.io"
        tag: "latest"
    deployments:
      - name: "nginx-deployment"
        namespace: "default"
        container: "nginx"
        image: "nginx"
```
Using Secrets for Credentials:
```shell
kubectl create secret generic registry-creds --from-literal=gcr-key='YOUR_JSON_KEY'
```
```yaml
extraEnv:
  - name: deities_controller__registries__1__auth__password
    valueFrom:
      secretKeyRef:
        name: registry-creds
        key: gcr-key
```
The Helm chart supports two methods for providing registry authentication credentials:
Method 1: Direct Values (Simple)
Provide credentials directly in your `values.yaml`:
```yaml
config:
  controller:
    registries:
      - name: "https://my-registry.example.com"
        auth:
          username: "myuser"
          password: "mypassword"
```
- Pros: Simple and straightforward
- Cons: Credentials are stored in plain text in values files
Method 2: Kubernetes Secrets (Recommended)
Reference credentials from a Kubernetes secret:
Step 1: Create a Secret
```shell
# Using kubectl
kubectl create secret generic my-registry-credentials \
  --from-literal=username="myuser" \
  --from-literal=password="mypassword"

# Or for GCR with a JSON key
kubectl create secret generic gcr-credentials \
  --from-literal=username="_json_key" \
  --from-file=password=./gcr-key.json
```
Step 2: Reference in `values.yaml`

```yaml
config:
  controller:
    registries:
      - name: "https://gcr.io"
        auth:
          secretRef:
            name: gcr-credentials
            # usernameKey: username # optional, defaults to "username"
            # passwordKey: password # optional, defaults to "password"
```
- Pros: Secure; credentials are not exposed in config files
- Cons: Requires an additional step to create secrets
Custom Secret Keys
If your secret uses different key names:
```yaml
config:
  controller:
    registries:
      - name: "https://my-registry.example.com"
        auth:
          secretRef:
            name: custom-credentials
            usernameKey: registry-user
            passwordKey: registry-pass
```
Mixed Approach
You can use both methods in the same configuration:
```yaml
config:
  controller:
    registries:
      # Public registry (no auth)
      - name: "https://registry-1.docker.io"
      # Private registry with direct auth (dev/test)
      - name: "https://dev-registry.example.com"
        auth:
          username: "devuser"
          password: "devpassword"
      # Production registry with secret
      - name: "https://gcr.io"
        auth:
          secretRef:
            name: gcr-credentials
```
How It Works
When you use `secretRef`:
- The chart automatically injects environment variables from the specified secret
- The config file uses environment variable placeholders (e.g., `${DEITIES_REGISTRY_0_USERNAME}`)
- The application expands these variables at runtime
- Credentials are never stored in ConfigMaps
Complete Example
See `examples/registry-auth-secret.yaml` for a complete working example with:
- Secret manifests for different registry types
- Multiple registry configurations
- Custom secret key names
Troubleshooting Registry Authentication
Secret not found error
Make sure the secret exists in the same namespace as the Deities deployment:
```shell
kubectl get secret <secret-name> -n <namespace>
```
Authentication still failing
Verify the secret contains the correct keys:
```shell
kubectl get secret <secret-name> -o jsonpath='{.data}' | jq
```
Check environment variables
Verify the environment variables are correctly injected:
```shell
kubectl describe pod <deities-pod-name>
```
Look for the `DEITIES_REGISTRY_*` environment variables in the output.
See `charts/deities/values.yaml` for all available configuration options.
All deployments managed by Deities must have `imagePullPolicy: Always`. This ensures that when the deployment is updated with a new digest, Kubernetes pulls the latest image from the registry.
```yaml
spec:
  containers:
    - name: myapp
      image: myapp:latest
      imagePullPolicy: Always # REQUIRED
```
Without `imagePullPolicy: Always`, the deployment will fail to update, and Deities will get stuck in a retry loop.
Deities uses image digests (SHA256 hashes) rather than tags to detect updates. This means:
- Even if a tag like `latest` is reused, Deities will detect the new image
- Updates are based on actual image content changes, not tag changes
- Deployments are updated with digest references (e.g., `nginx@sha256:abc123...`)
- Initial Scan: On startup, Deities fetches the current digest for each configured repository
- Periodic Checks: At the configured interval, it queries each registry for the latest digest
- Change Detection: If a digest has changed, it indicates a new image was pushed
- Deployment Update: Deities updates matching Kubernetes deployments with the new image digest
- Automatic Rollout: Kubernetes automatically rolls out the updated deployment
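The check cycle above can be sketched as a small loop (a simplified model: `fetch` stands in for the registry client, and the rollout trigger is just a print; names here are illustrative, not the project's actual API):

```go
package main

import (
	"fmt"
	"time"
)

// checkOnce compares the current digest of each image against the last
// one seen and returns the images whose digest changed.
func checkOnce(images []string, last map[string]string, fetch func(string) string) []string {
	var changed []string
	for _, img := range images {
		digest := fetch(img)
		if prev, ok := last[img]; ok && prev != digest {
			changed = append(changed, img)
		}
		last[img] = digest
	}
	return changed
}

func main() {
	last := map[string]string{}
	images := []string{"nginx:latest"}

	// Initial scan: record the starting digests, trigger nothing.
	checkOnce(images, last, func(string) string { return "sha256:old" })

	// Periodic check: a new digest means a new image was pushed.
	ticker := time.NewTicker(10 * time.Millisecond) // stands in for check_interval
	defer ticker.Stop()
	<-ticker.C
	for _, img := range checkOnce(images, last, func(string) string { return "sha256:new" }) {
		fmt.Printf("digest changed for %s: triggering rollout\n", img)
	}
}
```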
If you see authentication errors:
- Verify your username and password are correct
- For Docker Hub, you may need to use an access token instead of your password
- For private registries, ensure you're using the correct authentication method
If deployments aren't updating:
- Check that `imagePullPolicy: Always` is set
- Verify the image prefix in the deployment config matches the repository
- Check the logs for errors
If you see Kubernetes permission errors:
- Ensure the ServiceAccount has appropriate RBAC permissions
- Verify the ClusterRole includes `get`, `list`, `update`, and `patch` on deployments
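A minimal ClusterRole granting those verbs might look like this (an illustrative manifest; the Helm chart generates the real one, and names here are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deities
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "update", "patch"]
```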
```
deities/
├── main.go                  # Application entry point with fx setup
├── internal/
│   ├── config/
│   │   ├── config.go        # Centralized configuration with fx.Out
│   │   └── default.go       # Default configuration values
│   ├── logger/
│   │   └── logger.go        # Logger module with pterm integration
│   ├── logo/
│   │   └── logo.go          # ASCII logo with pterm
│   ├── registry/
│   │   └── client.go        # Docker registry client
│   ├── k8s/
│   │   └── client.go        # Kubernetes client
│   └── controller/
│       └── controller.go    # Main controller logic
├── config.toml              # Example configuration
└── README.md
```
Each module follows the dependency injection pattern using Uber's fx framework:
- Each module provides a `Provide()` function for fx dependency injection
- Configuration is modular: each module defines its own `Config` struct
- All modules are wired together in `main.go` using fx
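In spirit, the pattern looks like the sketch below (a dependency-free illustration of constructor injection; the real code registers such constructors with fx and lets fx build the graph, and these type names are hypothetical, not the project's actual ones):

```go
package main

import "fmt"

// Each module defines its own Config struct.
type LoggerConfig struct{ Level string }
type ControllerConfig struct{ CheckInterval string }

type Logger struct{ level string }

// NewLogger is a constructor of the kind a module's Provide() returns:
// it declares its dependencies (here, LoggerConfig) as parameters.
func NewLogger(cfg LoggerConfig) *Logger { return &Logger{level: cfg.Level} }

type Controller struct {
	log *Logger
	cfg ControllerConfig
}

// NewController depends on the Logger and its own config.
func NewController(log *Logger, cfg ControllerConfig) *Controller {
	return &Controller{log: log, cfg: cfg}
}

func main() {
	// What fx does automatically, shown by hand: build each value from
	// its constructor and pass it to whatever declares it as a parameter.
	logger := NewLogger(LoggerConfig{Level: "info"})
	ctrl := NewController(logger, ControllerConfig{CheckInterval: "5m"})
	fmt.Println(ctrl.log.level, ctrl.cfg.CheckInterval) // info 5m
}
```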