diff --git a/CLI.md b/CLI.md new file mode 100644 index 0000000..87d8218 --- /dev/null +++ b/CLI.md @@ -0,0 +1,145 @@ +# CLI Reference + +This document describes all available command-line options for Dispenser. + +## Usage + +```sh +dispenser [OPTIONS] +``` + +## Options + +### `-c, --config ` + +Specify the path to the configuration file. + +**Default:** `dispenser.toml` + +**Example:** +```sh +dispenser --config /etc/dispenser/my-config.toml +``` + +### `-t, --test` + +Test the configuration file and exit. This validates your configuration files (including variable substitution) to ensure there are no syntax errors or missing variables. + +**Example:** +```sh +dispenser --test +``` + +**Output on success:** +``` +Dispenser config is ok. +``` + +**Output on error:** +``` +---------------------------------- ----------------------------------- + 2 | + 3 | [service] + 4 | name = "nginx" + 5 > image = "${missing}/nginx:latest" + i ^^^^^^^^^^ undefined value +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +No referenced variables +------------------------------------------------------------------------------- +``` + +### `-p, --pid-file ` + +Specify the path to the PID file. This file is used to track the running Dispenser process and is required for sending signals with the `--signal` flag. + +**Default:** `dispenser.pid` + +**Example:** +```sh +dispenser --pid-file /var/run/dispenser.pid +``` + +### `-s, --signal ` + +Send a signal to the running Dispenser instance. This command relies on the PID file, so you should run it from the same directory where Dispenser is running (typically `/opt/dispenser` for the default installation). 
+ +**Valid signals:** +- `reload` - Reload the `dispenser.toml` configuration without restarting the process +- `stop` - Gracefully stop the Dispenser daemon + +**Examples:** +```sh +# Reload configuration +dispenser --signal reload + +# Stop the daemon +dispenser --signal stop +``` + +### `-h, --help` + +Display help information about available options. + +**Example:** +```sh +dispenser --help +``` + +### `-V, --version` + +Display the current version of Dispenser. + +**Example:** +```sh +dispenser --version +``` + +## Common Usage Patterns + +### Running in Foreground (for testing) + +```sh +dispenser --config ./dispenser.toml +``` + +### Validating Configuration Before Deployment + +```sh +dispenser --test && echo "Configuration is valid!" +``` + +### Reloading Configuration After Changes + +```sh +# After editing dispenser.toml +dispenser --signal reload +``` + +### Using Custom Paths + +```sh +dispenser --config /etc/dispenser/production.toml --pid-file /var/run/dispenser-prod.pid +``` + +## Systemd Integration + +When Dispenser is installed via the `.deb` or `.rpm` package, it runs as a systemd service. You can manage it using standard systemd commands: + +```sh +# Start the service +sudo systemctl start dispenser + +# Stop the service +sudo systemctl stop dispenser + +# Restart the service +sudo systemctl restart dispenser + +# Check service status +sudo systemctl status dispenser + +# View logs +sudo journalctl -u dispenser -f +``` + +The systemd service automatically uses the configuration at `/opt/dispenser/dispenser.toml` and runs as the `dispenser` user. diff --git a/CRON.md b/CRON.md index 121f31a..63ca65d 100644 --- a/CRON.md +++ b/CRON.md @@ -1,77 +1,110 @@ # Using Cron for Scheduled Deployments -Dispenser provides a `cron` feature to schedule deployments or restarts of your services at specific intervals. This is useful for tasks that need to run periodically, such as batch jobs, or for ensuring services are restarted regularly for maintenance. 
+Dispenser supports cron scheduling to deploy or restart services at specific intervals. This is useful for batch jobs, backups, ETL processes, or periodic maintenance restarts. -## How it Works +## Configuration -You can add a `cron` attribute to any `[[instance]]` block in your `dispenser.toml` configuration file. The value of this attribute is a cron expression that defines the schedule for the deployment. +Add a `cron` field to the `[dispenser]` section in your `service.toml`: -When a `cron` schedule is defined for an instance, Dispenser will trigger a redeployment of the corresponding Docker Compose service according to the schedule. This is equivalent to running `docker-compose up -d --force-recreate` for the service. +```toml +[dispenser] +watch = false +initialize = "on-trigger" +cron = "0 0 2 * * *" # Every day at 2 AM +``` -The cron scheduler uses a 6-field format that includes seconds: +### Cron Expression Format + +Dispenser uses a 6-field format (with seconds): ``` ┌───────────── second (0 - 59) │ ┌───────────── minute (0 - 59) │ │ ┌───────────── hour (0 - 23) -│ │ │ ┌───────────── day of the month (1 - 31) +│ │ │ ┌───────────── day of month (1 - 31) │ │ │ │ ┌───────────── month (1 - 12) -│ │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday) -│ │ │ │ │ │ +│ │ │ │ │ ┌───────────── day of week (0 - 6, Sunday = 0) │ │ │ │ │ │ * * * * * * ``` -You can use online tools like [crontab.guru](https://crontab.guru/) to help generate the correct cron expression. Note that many online tools generate 5-field expressions, so you may need to add the seconds field (`*` or `0`) at the beginning. +**Common expressions:** +- `0 0 2 * * *` - Daily at 2 AM +- `0 0 */6 * * *` - Every 6 hours +- `0 30 9 * * 1-5` - Weekdays at 9:30 AM +- `0 0 0 1 * *` - First day of each month +- `*/10 * * * * *` - Every 10 seconds + +Use [crontab.guru](https://crontab.guru/) for help (add `0` for seconds field). 
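The field ranges above can be sanity-checked before an expression goes into `service.toml`. The sketch below is a minimal, hypothetical Python validator for the 6-field format — it only illustrates the ranges shown in the diagram and is not Dispenser's actual parser (Dispenser uses the Rust `cron` crate):

```python
# Hypothetical checker for the 6-field cron format (seconds first).
# Illustrates the field ranges only; the real parser supports more syntax.

FIELD_RANGES = [
    (0, 59),  # second
    (0, 59),  # minute
    (0, 23),  # hour
    (1, 31),  # day of month
    (1, 12),  # month
    (0, 6),   # day of week (Sunday = 0)
]

def is_valid_cron(expr: str) -> bool:
    """Return True if expr has 6 fields and every value is within range."""
    fields = expr.split()
    if len(fields) != len(FIELD_RANGES):
        return False  # catches 5-field expressions from online generators
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        for part in field.split(","):
            part, _, step = part.partition("/")  # strip "*/10"-style steps
            if step and not step.isdigit():
                return False
            if part == "*":
                continue
            bounds = part.split("-")  # "1-5" ranges
            if not all(b.isdigit() and lo <= int(b) <= hi for b in bounds):
                return False
    return True

print(is_valid_cron("0 0 2 * * *"))   # daily at 2 AM
print(is_valid_cron("0 30 9 * * 8"))  # invalid: day of week out of range
```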
+ +## Examples + +### Scheduled Backup Job + +```toml +[service] +name = "backup-job" +image = "my-backup:latest" -## Use Cases +[[volume]] +source = "./backups" +target = "/backups" -### Scheduled-Only Deployments +restart = "no" -You can use `cron` without an `images` attribute. This is ideal for services that run on a schedule such as ETLs or batch processing tasks, and do not have a corresponding image to monitor for updates. +[dispenser] +watch = false +initialize = "on-trigger" +cron = "0 0 2 * * *" # Daily at 2 AM +``` -**Example:** -The following configuration will run the `hello-world` service every 10 seconds. Since there is no image to watch, the deployment is only triggered by the cron schedule. +### ETL Job Every Hour ```toml -# dispenser.toml +[service] +name = "etl-processor" +image = "my-etl:latest" +command = ["python", "process.py"] -[[instance]] -path = "hello-world" -cron = "*/10 * * * * *" +restart = "no" + +[dispenser] +watch = false +initialize = "on-trigger" +cron = "0 0 * * * *" # Every hour ``` -The `docker-compose.yaml` for this service might look like this. It is important to set `restart: no` to prevent the container from restarting automatically after its task is complete. It will wait for the next scheduled run from Dispenser. +### Periodic Restart with Image Watching + +```toml +[service] +name = "worker" +image = "my-worker:latest" -```yaml -# hello-world/docker-compose.yaml +restart = "always" -version: "3.8" -services: - hello-world: - image: hello-world - restart: no +[dispenser] +watch = true +initialize = "immediately" +cron = "0 0 4 * * *" # Restart daily at 4 AM ``` -### Scheduled Restarts with Image Monitoring +This configuration will: +- Deploy when a new image is detected +- Also restart daily at 4 AM (even if no new image) -You can use `cron` in combination with image monitoring. In this case, Dispenser will deploy a new version of your service under two conditions: -1. A new Docker image is detected in the registry. -2. 
The `cron` schedule is met. +## Options -This is useful for services that should be restarted periodically, even if no new image is available. +### `initialize` -**Example:** -The following configuration watches the `nginx:latest` image and also restarts the service every minute. +- `immediately` (default) - Start when Dispenser starts +- `on-trigger` - Only start when cron fires or image updates -```toml -# dispenser.toml +### `watch` -[[instance]] -path = "nginx" -# Will restart the service every minute or when the nginx image gets updated -cron = "0 */1 * * * *" -images = [{ registry = "docker.io", name = "nginx", tag = "latest" }] -``` +- `true` - Monitor registry for image updates +- `false` - Only run on cron schedule + +### `restart` -By using the `cron` feature, you can extend Dispenser's capabilities beyond continuous deployment to include scheduled task orchestration. You can find more examples in the `example` directory of the project. +Use `restart = "no"` for one-time jobs to prevent automatic restarts between scheduled runs. 
\ No newline at end of file diff --git a/Cargo.lock b/Cargo.lock index 51cb304..eb868ba 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -291,13 +291,14 @@ dependencies = [ [[package]] name = "dispenser" -version = "0.6.0" +version = "0.7.0" dependencies = [ "base64", "chrono", "clap", "cron", "env_logger", + "futures", "futures-util", "google-cloud-secretmanager-v1", "log", @@ -388,6 +389,7 @@ checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876" dependencies = [ "futures-channel", "futures-core", + "futures-executor", "futures-io", "futures-sink", "futures-task", @@ -410,6 +412,17 @@ version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e" +[[package]] +name = "futures-executor" +version = "0.3.31" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f" +dependencies = [ + "futures-core", + "futures-task", + "futures-util", +] + [[package]] name = "futures-io" version = "0.3.31" @@ -445,10 +458,13 @@ version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81" dependencies = [ + "futures-channel", "futures-core", + "futures-io", "futures-macro", "futures-sink", "futures-task", + "memchr", "pin-project-lite", "pin-utils", "slab", diff --git a/Cargo.toml b/Cargo.toml index a970e3a..55e8daa 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "dispenser" -version = "0.6.0" +version = "0.7.0" edition = "2021" license = "MIT" @@ -10,6 +10,7 @@ chrono = "0.4.42" clap = { version = "4.5.18", features = ["derive"] } cron = { version = "0.15.0", features = ["serde"] } env_logger = "0.11.5" +futures = "0.3.31" futures-util = "0.3.31" google-cloud-secretmanager-v1 = "1.2.0" log = "0.4.22" diff --git a/GCP.md b/GCP.md index 18cacde..f2bcf4c 
100644 --- a/GCP.md +++ b/GCP.md @@ -1,17 +1,30 @@ # Using Google Secret Manager -Dispenser allows you to securely retrieve sensitive values, such as API keys or passwords, directly from Google Cloud Secret Manager. These secrets are accessed at runtime and injected into your configuration variables. +Dispenser can retrieve secrets from Google Cloud Secret Manager and use them in your configuration files. ## Prerequisites -To use this feature, the environment where Dispenser is running (e.g., a Google Compute Engine VM) must be authenticated with Google Cloud and have permission to access the secrets. +The environment where Dispenser runs must be authenticated with Google Cloud and have permission to access secrets: -1. **Service Account**: Ensure the Virtual Machine (VM) is running with a Service Account that has the **Secret Manager Secret Accessor** role (`roles/secretmanager.secretAccessor`). -2. **Authentication**: If running outside of GCP, you may need to set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable pointing to a service account key file. +1. **Service Account**: The VM must use a Service Account with the `roles/secretmanager.secretAccessor` role +2. **Authentication**: If running outside GCP, set `GOOGLE_APPLICATION_CREDENTIALS` to a service account key file ## Configuration -You can define secrets in your `dispenser.vars` file (or any `*.dispenser.vars` file). Instead of a plain string value, use a table to specify the secret source and details. 
+Define secrets in your `dispenser.vars` (or `*.dispenser.vars`) file: + +```toml +# dispenser.vars + +# Regular variables +registry = "gcr.io" +project = "my-project" + +# Secrets from Google Secret Manager +db_password = { source = "google", name = "projects/123456789012/secrets/DB_PASSWORD" } +api_key = { source = "google", name = "projects/123456789012/secrets/API_KEY" } +oauth_client = { source = "google", name = "projects/123456789012/secrets/OAUTH_CLIENT_ID", version = "2" } +``` ### Syntax @@ -19,48 +32,93 @@ You can define secrets in your `dispenser.vars` file (or any `*.dispenser.vars` variable_name = { source = "google", name = "projects/PROJECT_ID/secrets/SECRET_NAME" } ``` -- `source`: Must be set to `"google"`. -- `name`: The full resource name of the secret. This typically follows the format `projects//secrets/`. -- `version` (Optional): The version of the secret to retrieve. Defaults to `"latest"` if not specified. +- `source`: Must be `"google"` +- `name`: Full resource name of the secret +- `version`: (Optional) Secret version, defaults to `"latest"` -## Example +## Usage -Suppose you have a secret stored in Google Secret Manager that contains an OAuth Client ID. - -**1. Define the secret in `dispenser.vars` (or `*.dispenser.vars`):** +Use secrets like any other variable in your configuration files: ```toml -# dispenser.vars +# my-app/service.toml -# Regular variable -docker_registry = "docker.io" +[service] +name = "my-app" +image = "${registry}/${project}/my-app:latest" -# Secret variable from Google Secret Manager -oauth_client_id = { source = "google", name = "projects/123456789012/secrets/MY_OAUTH_CLIENT_ID" } +[env] +DATABASE_URL = "postgres://user:${db_password}@postgres:5432/mydb" +API_KEY = "${api_key}" +OAUTH_CLIENT_ID = "${oauth_client}" -# Secret variable with a specific version -db_password = { source = "google", name = "projects/123456789012/secrets/DB_PASSWORD", version = "2" } +[dispenser] +watch = true ``` -**2. 
Use the variable in `dispenser.toml` or `docker-compose.yaml`:** +## Setting Up Secrets -Once defined, these variables can be used just like any other variable in Dispenser. +### Enable API -In `dispenser.toml`: -```toml -[[instance]] -path = "my-service" -# ... +```sh +gcloud services enable secretmanager.googleapis.com --project=PROJECT_ID +``` + +### Create Secret + +```sh +# Create secret +gcloud secrets create DB_PASSWORD --project=PROJECT_ID + +# Add value +echo -n "my-secure-password" | gcloud secrets versions add DB_PASSWORD --data-file=- --project=PROJECT_ID +``` + +### Grant Access + +```sh +gcloud secrets add-iam-policy-binding DB_PASSWORD \ + --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \ + --role="roles/secretmanager.secretAccessor" \ + --project=PROJECT_ID ``` -In your service's `docker-compose.yaml`: -```yaml -services: - app: - image: my-app:latest - environment: - - CLIENT_ID=${oauth_client_id} - - DB_PASS=${db_password} +## Validation + +Test your configuration: + +```sh +dispenser --test ``` -When Dispenser runs, it will fetch the actual values from Google Secret Manager and make them available to your Docker Compose configuration. +This verifies: +- Connectivity to Secret Manager +- All secrets exist +- Proper permissions + +## Troubleshooting + +### Permission Denied + +Check service account has the correct role: + +```sh +gcloud projects add-iam-policy-binding PROJECT_ID \ + --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \ + --role="roles/secretmanager.secretAccessor" +``` + +### Secret Not Found + +Verify the secret exists: + +```sh +gcloud secrets list --project=PROJECT_ID +gcloud secrets describe SECRET_NAME --project=PROJECT_ID +``` + +### Test Access + +```sh +gcloud secrets versions access latest --secret="SECRET_NAME" --project="PROJECT_ID" +``` diff --git a/INSTALL.deb.md b/INSTALL.deb.md index fec3e6c..fd57ac6 100644 --- a/INSTALL.deb.md +++ b/INSTALL.deb.md @@ -21,7 +21,7 @@ wget ... 
```sh -sudo apt install ./dispenser-0.2-0.x86_64.deb +sudo apt install ./dispenser-0.7-0.x86_64.deb ``` You can validate that it was successfully installed by switching to the diff --git a/INSTALL.redhat.md b/INSTALL.redhat.md index 7c015c9..3247dce 100644 --- a/INSTALL.redhat.md +++ b/INSTALL.redhat.md @@ -23,7 +23,7 @@ wget ... ```sh -sudo dnf install ./dispenser-0.2-0.x86_64.rpm +sudo dnf install ./dispenser-0.7-0.x86_64.rpm ``` You can validate that it was successfully installed by switching to the diff --git a/MIGRATION_GUIDE.md b/MIGRATION_GUIDE.md new file mode 100644 index 0000000..31abb14 --- /dev/null +++ b/MIGRATION_GUIDE.md @@ -0,0 +1,628 @@ +# Migration Guide: Docker Compose to service.toml + +This guide helps you migrate from the older Dispenser repository structure using `docker-compose.yaml` files to the new structure using `service.toml` files. + +## Overview + +The new structure replaces Docker Compose YAML files with TOML-based service configuration files. The key changes are: + +1. **Per-service configuration**: Each service now has its own `service.toml` file instead of `docker-compose.yaml` +2. **Network declarations**: Networks are now declared in `dispenser.toml` instead of in each `docker-compose.yaml` +3. **Simplified dispenser.toml**: The main configuration file is simplified - it only lists services by path and defines shared networks +4. **Service-level settings**: Image tracking, cron schedules, and initialization behavior are now defined in each `service.toml` +5. 
**Same interpolation syntax**: Variable interpolation using `${variable_name}` works exactly the same way + +## File Structure Comparison + +### Old Structure +``` +project/ +├── dispenser.toml # Contains service paths, images, cron, initialize settings +├── dispenser.vars # Variable definitions +└── service-name/ + └── docker-compose.yaml # Docker Compose service definition (with networks defined here) +``` + +### New Structure +``` +project/ +├── dispenser.toml # Contains service paths, polling delay, and network declarations +├── dispenser.vars # Variable definitions (unchanged) +└── service-name/ + └── service.toml # Complete service configuration (references networks) +``` + +## Main Configuration File Migration + +### dispenser.toml + +**Old format:** +```toml +delay = 60 + +[[instance]] +path = "nginx" +images = [{ registry = "${docker_io}", name = "nginx", tag = "latest" }] + +[[instance]] +path = "hello-world" +cron = "*/10 * * * * *" +initialize = "on-trigger" +``` + +**New format:** +```toml +# Delay in seconds between polling for new images (default: 60) +delay = 60 + +# Network declarations (optional) +[[network]] +name = "dispenser-net" +driver = "bridge" + +[[service]] +path = "nginx" + +[[service]] +path = "hello-world" +``` + +**Key changes:** +- `[[instance]]` → `[[service]]` +- Remove `images`, `cron`, and `initialize` fields (they move to `service.toml`) +- Keep only `path` to indicate service location +- Add `[[network]]` declarations at the top level (moved from docker-compose.yaml) + +### dispenser.vars + +**No changes required** - the variable file format remains the same: + +```toml +docker_io="docker.io" +nginx_port="8080" +``` + +Variable interpolation using `${variable_name}` syntax works identically in both formats. + +## Service Configuration Migration + +Each service directory needs a `service.toml` file to replace its `docker-compose.yaml`. 
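Much of this translation is mechanical. The following Python sketch (a hypothetical helper, not a Dispenser tool) shows how Compose-style port, volume, and environment entries map onto the shapes `service.toml` uses:

```python
# Hypothetical migration helper: convert Compose-style entries into the
# structures that service.toml expects. Illustrative only.

def convert_port(spec: str) -> dict:
    """'8080:80' -> {'host': 8080, 'container': 80}; interpolations stay strings."""
    host, container = spec.split(":")
    return {
        "host": int(host) if host.isdigit() else host,
        "container": int(container) if container.isdigit() else container,
    }

def convert_volume(spec: str) -> dict:
    """'./config:/app/config:ro' -> source/target plus a readonly flag."""
    parts = spec.split(":")
    volume = {"source": parts[0], "target": parts[1]}
    if len(parts) > 2 and parts[2] == "ro":
        volume["readonly"] = True
    return volume

def convert_environment(entries: list) -> dict:
    """['NODE_ENV=production'] -> {'NODE_ENV': 'production'} for the [env] map."""
    return dict(entry.split("=", 1) for entry in entries)

print(convert_port("8080:80"))
print(convert_volume("./config:/app/config:ro"))
print(convert_environment(["NODE_ENV=production", "API_KEY=${api_key}"]))
```

Each returned dictionary corresponds to one `[[port]]` section, one `[[volume]]` section, or the `[env]` map in the new format.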
+
+### Example 1: Basic Web Service (nginx)
+
+**Old (docker-compose.yaml):**
+```yaml
+version: "3.8"
+services:
+  nginx:
+    image: ${docker_io}/nginx:latest
+    ports:
+      - "8080:80"
+```
+
+**New (service.toml):**
+```toml
+# Service metadata (required)
+[service]
+name = "nginx-service"
+image = "${docker_io}/nginx:latest"
+
+# Port mappings (optional)
+[[port]]
+host = 8080
+container = 80
+
+# Network references (optional)
+[[network]]
+name = "dispenser-net"
+
+# Restart policy (optional, defaults to "no")
+restart = "always"
+
+# Dispenser-specific configuration (required)
+[dispenser]
+# Watch for image updates
+watch = true
+
+# Initialize immediately on startup (default behavior)
+initialize = "immediately"
+```
+
+### Example 2: Scheduled Job (hello-world)
+
+**Old (docker-compose.yaml):**
+```yaml
+version: "3.8"
+services:
+  hello-world:
+    image: hello-world
+    restart: no
+```
+
+**Old (dispenser.toml entry):**
+```toml
+[[instance]]
+path = "hello-world"
+cron = "*/10 * * * * *"
+initialize = "on-trigger"
+```
+
+**New (service.toml):**
+```toml
+# Service metadata (required)
+[service]
+name = "hello-world-job"
+image = "hello-world"
+
+# Network references (optional)
+[[network]]
+name = "dispenser-net"
+
+# Restart policy (optional, defaults to "no")
+restart = "no"
+
+# Dispenser-specific configuration (required)
+[dispenser]
+# Don't watch for image updates
+watch = false
+
+# Initialize only when triggered (by cron in this case)
+initialize = "on-trigger"
+
+# Run every 10 seconds
+cron = "*/10 * * * * *"
+```
+
+## Field Mapping Reference
+
+### Service-Level Fields
+
+| Docker Compose | service.toml | Notes |
+|----------------|--------------|-------|
+| `services.<name>.image` | `[service] image` | Same interpolation syntax |
+| `services.<name>.ports` | `[[port]]` sections | One `[[port]]` per mapping |
+| `services.<name>.volumes` | `[[volume]]` sections | One `[[volume]]` per mount |
+| `services.<name>.environment` | `[env]` map | Key-value pairs in `[env]` section 
| `services.<name>.restart` | `restart` | Values: "no", "always", "on-failure", "unless-stopped" |
+| `services.<name>.command` | `command` | String or array of strings |
+| `services.<name>.entrypoint` | `entrypoint` | String or array of strings |
+| `services.<name>.working_dir` | `working_dir` | String path |
+| `services.<name>.user` | `user` | String (UID or UID:GID) |
+| `services.<name>.hostname` | `hostname` | String |
+| `services.<name>.networks` | `[[network]]` sections | One `[[network]]` per network reference |
+| N/A | `memory` | New: Resource limits (e.g., "256m", "1g") |
+| N/A | `cpus` | New: CPU limits (e.g., "0.5", "1.0") |
+
+### Dispenser-Specific Fields
+
+| Old Location | New Location | Notes |
+|--------------|--------------|-------|
+| `dispenser.toml: [[instance]].images` | `service.toml: [dispenser].watch` | `images` list → `watch = true/false` |
+| `dispenser.toml: [[instance]].cron` | `service.toml: [dispenser].cron` | Same cron syntax |
+| `dispenser.toml: [[instance]].initialize` | `service.toml: [dispenser].initialize` | Values: "immediately" or "on-trigger" |
+
+### Network Configuration
+
+| Old Location | New Location | Notes |
+|--------------|--------------|-------|
+| `docker-compose.yaml: networks` (top-level) | `dispenser.toml: [[network]]` | Networks declared centrally in main config |
+| `docker-compose.yaml: services.<name>.networks` | `service.toml: [[network]]` sections | Services reference networks by name |
+
+## Complete Migration Examples
+
+### Example 3: Service with Volumes and Environment Variables
+
+**Old (docker-compose.yaml):**
+```yaml
+version: "3.8"
+services:
+  webapp:
+    image: ${registry}/myapp:${version}
+    ports:
+      - "${app_port}:3000"
+    environment:
+      - NODE_ENV=production
+      - API_KEY=${api_key}
+    volumes:
+      - ./data:/app/data
+      - ./config:/app/config:ro
+    restart: unless-stopped
+```
+
+**New (service.toml):**
+```toml
+[service]
+name = "webapp"
+image = "${registry}/myapp:${version}"
+memory = "512m"
+cpus = "1.0"
+
+[[port]]
+host = "${app_port}"
+container = 3000
+
+[env]
+NODE_ENV = "production"
+API_KEY = "${api_key}"
+
+[[volume]]
+source = "./data"
+target = "/app/data"
+
+[[volume]]
+source = "./config"
+target = "/app/config"
+readonly = true
+
+[[network]]
+name = "app-network"
+
+restart = "unless-stopped"
+
+[dispenser]
+watch = true
+initialize = "immediately"
+```
+
+**dispenser.toml entry:**
+```toml
+[[network]]
+name = "app-network"
+driver = "bridge"
+
+[[service]]
+path = "webapp"
+```
+
+### Example 4: Database Service with Networks
+
+**Old (docker-compose.yaml):**
+```yaml
+version: "3.8"
+services:
+  postgres:
+    image: postgres:15
+    ports:
+      - "5432:5432"
+    environment:
+      - POSTGRES_PASSWORD=${db_password}
+      - POSTGRES_USER=${db_user}
+      - POSTGRES_DB=${db_name}
+    volumes:
+      - pgdata:/var/lib/postgresql/data
+    networks:
+      - backend
+    restart: always
+
+volumes:
+  pgdata:
+
+networks:
+  backend:
+    driver: bridge
+```
+
+**New (service.toml):**
+```toml
+[service]
+name = "postgres-db"
+image = "postgres:15"
+memory = "1g"
+cpus = "2.0"
+
+[[port]]
+host = 5432
+container = 5432
+
+[env]
+POSTGRES_PASSWORD = "${db_password}"
+POSTGRES_USER = "${db_user}"
+POSTGRES_DB = "${db_name}"
+
+[[volume]]
+source = "pgdata"
+target = "/var/lib/postgresql/data"
+
+[[network]]
+name = "backend"
+
+restart = "always"
+
+[dispenser]
+watch = true
+initialize = "immediately"
+```
+
+**dispenser.toml entry:**
+```toml
+# Network declaration (moved from docker-compose.yaml)
+[[network]]
+name = "backend"
+driver = "bridge"
+
+[[service]]
+path = "postgres"
+```
+
+### Example 5: Custom Command and Entrypoint
+
+**Old (docker-compose.yaml):**
+```yaml
+version: "3.8"
+services:
+  worker:
+    image: ${docker_io}/python:3.11
+    command: ["python", "worker.py", "--verbose"]
+    working_dir: /app
+    volumes:
+      - ./src:/app
+    restart: on-failure
+```
+
+**New (service.toml):**
+```toml
+[service]
+name = "worker"
+image = "${docker_io}/python:3.11"
+command = ["python", "worker.py", "--verbose"]
+working_dir = "/app" +memory = "256m" +cpus = "0.5" + +[[volume]] +source = "./src" +target = "/app" + +restart = "on-failure" + +[dispenser] +watch = true +initialize = "immediately" +``` + +### Example 6: One-Shot Task with Cron + +**Old (docker-compose.yaml):** +```yaml +version: "3.8" +services: + backup: + image: backup-tool:latest + volumes: + - ./backups:/backups + - ./data:/data:ro + restart: no +``` + +**Old (dispenser.toml entry):** +```toml +[[instance]] +path = "backup" +cron = "0 0 2 * * *" # Daily at 2 AM +initialize = "on-trigger" +images = [{ registry = "docker.io", name = "backup-tool", tag = "latest" }] +``` + +**New (service.toml):** +```toml +[service] +name = "backup-job" +image = "backup-tool:latest" +memory = "128m" +cpus = "0.5" + +[[volume]] +source = "./backups" +target = "/backups" + +[[volume]] +source = "./data" +target = "/data" +readonly = true + +restart = "no" + +[dispenser] +watch = true +initialize = "on-trigger" +cron = "0 0 2 * * *" # Daily at 2 AM +``` + +## Network Migration + +Networks are handled differently in the new structure. Instead of defining networks in each `docker-compose.yaml` file, they are now declared centrally in `dispenser.toml` and referenced by services. 
+ +### Network Declaration Migration + +**Old approach** - Networks defined in docker-compose.yaml: +```yaml +version: "3.8" +services: + web: + image: nginx + networks: + - frontend + - backend + + db: + image: postgres + networks: + - backend + +networks: + frontend: + driver: bridge + backend: + driver: bridge +``` + +**New approach** - Networks declared in dispenser.toml: + +```toml +# dispenser.toml +delay = 60 + +# Declare all networks used by services +[[network]] +name = "frontend" +driver = "bridge" + +[[network]] +name = "backend" +driver = "bridge" + +[[service]] +path = "web" + +[[service]] +path = "db" +``` + +Then services reference these networks in their `service.toml`: + +```toml +# web/service.toml +[service] +name = "web" +image = "nginx" + +[[network]] +name = "frontend" + +[[network]] +name = "backend" + +[dispenser] +watch = true +initialize = "immediately" +``` + +```toml +# db/service.toml +[service] +name = "db" +image = "postgres" + +[[network]] +name = "backend" + +[dispenser] +watch = true +initialize = "immediately" +``` + +### Key Points + +1. **Central declaration**: All networks must be declared in `dispenser.toml` using `[[network]]` sections +2. **Service references**: Services reference networks using `[[network]]` sections (not an array) +3. **Multiple networks**: A service can reference multiple networks by having multiple `[[network]]` sections +4. **Network attributes**: Currently supported attributes in `dispenser.toml`: + - `name` (required): The network name + - `driver` (optional): Network driver (e.g., "bridge", "host", "overlay") +5. 
**Default network**: If no networks are specified, Docker uses a default network
+
+### Network Array Syntax vs Section Syntax
+
+Note the syntax difference for network references in service configuration:
+
+**Old docker-compose.yaml (array syntax):**
+```yaml
+services:
+  app:
+    networks:
+      - backend
+      - frontend
+```
+
+**New service.toml (section syntax):**
+```toml
+[[network]]
+name = "backend"
+
+[[network]]
+name = "frontend"
+```
+
+Each network reference requires its own `[[network]]` section with a `name` field.
+
+## Migration Checklist
+
+For each service in your project:
+
+- [ ] Create a new `service.toml` file in the service directory
+- [ ] Copy the `[service]` section fields from `docker-compose.yaml`:
+  - [ ] `image` (with interpolation if used)
+  - [ ] `ports` → `[[port]]` sections
+  - [ ] `volumes` → `[[volume]]` sections
+  - [ ] `environment` → `[env]` map
+  - [ ] `restart` policy
+  - [ ] Other fields (`command`, `entrypoint`, `working_dir`, etc.)
+- [ ] Add `[dispenser]` section with:
+  - [ ] `watch = true/false` (was `images` list present?)
+  - [ ] `initialize` (was it in `dispenser.toml`?)
+  - [ ] `cron` (if present in `dispenser.toml`)
+- [ ] Optional: Add `memory` and `cpus` limits
+- [ ] Update `dispenser.toml`:
+  - [ ] Change `[[instance]]` to `[[service]]`
+  - [ ] Remove all fields except `path`
+  - [ ] Add `[[network]]` declarations for any networks used (moved from docker-compose.yaml)
+- [ ] Update service network references:
+  - [ ] Change `networks = ["name"]` array to `[[network]]` sections with `name` field
+- [ ] Delete the old `docker-compose.yaml` file
+
+## Important Notes
+
+1. **Variable interpolation is identical**: Both formats use `${variable_name}` syntax
+2. **Cron syntax unchanged**: The cron expression format remains the same
+3. **Initialize values**: Use `"immediately"` or `"on-trigger"` (case-insensitive, can use underscores or hyphens)
+4. **Watch behavior**:
+   - Old: Presence of `images` array meant watching for updates
+   - New: Explicit `watch = true/false` field
+5. **Port syntax**:
+   - Old: `"8080:80"` in YAML
+   - New: `host = 8080` and `container = 80` in separate fields
+6. **Volume readonly**:
+   - Old: `./config:/app/config:ro`
+   - New: `readonly = true` field in volume section
+7. **Environment variables**:
+   - Old: `environment:` array in YAML
+   - New: `[env]` map with key-value pairs where keys are variable names
+8. **Networks**:
+   - Old: Networks defined in each `docker-compose.yaml` file
+   - New: Networks declared centrally in `dispenser.toml` with `[[network]]`, services reference them with `[[network]]` sections
+9. **Network references**:
+   - Old: `networks: ["backend"]` array in YAML
+   - New: `[[network]]` sections with `name` field in service.toml
+10. 
**Resource limits**: New format supports `memory` and `cpus` fields that weren't available in the old format + +## Troubleshooting + +### Common Issues + +**Issue**: Service not starting after migration +- **Check**: Verify all required fields are present in `[service]` section +- **Check**: Ensure `[dispenser]` section exists with `initialize` field + +**Issue**: Variables not interpolating +- **Check**: Variable names in `dispenser.vars` match those in `service.toml` +- **Check**: Syntax is `${variable_name}` not `$variable_name` or `{variable_name}` + +**Issue**: Cron jobs not triggering +- **Check**: `initialize = "on-trigger"` is set in `[dispenser]` section +- **Check**: `cron` field has valid cron expression + +**Issue**: Image updates not detected +- **Check**: `watch = true` in `[dispenser]` section +- **Check**: `delay` value in main `dispenser.toml` is reasonable + +## Additional Resources + +For more examples, compare the provided example directories: +- `example-old/` - Shows the old docker-compose structure +- `example-new/` - Shows the new service.toml structure + +Both directories contain functionally equivalent configurations that can serve as reference implementations. \ No newline at end of file diff --git a/NETWORKS.md b/NETWORKS.md new file mode 100644 index 0000000..8b4777e --- /dev/null +++ b/NETWORKS.md @@ -0,0 +1,407 @@ +# Network Configuration Reference + +This document describes how to configure Docker networks in Dispenser. + +## Overview + +Dispenser supports Docker networks to enable communication between services. Networks are declared in `dispenser.toml` and referenced in individual service configurations. + +## Network Declaration + +Networks must be declared in your `dispenser.toml` file before they can be referenced by services. 
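Because a service may only reference networks that have been declared here, a missing or misspelled declaration is easy to check for. The sketch below is a hypothetical Python illustration of that rule (not part of Dispenser):

```python
# Hypothetical sanity check: every network referenced by a service must be
# declared in dispenser.toml. Illustrative only.

def undeclared_networks(declared, service_refs):
    """Map each service to any referenced networks that were never declared."""
    known = {network["name"] for network in declared}
    result = {}
    for service, refs in service_refs.items():
        missing = [name for name in refs if name not in known]
        if missing:
            result[service] = missing
    return result

declared = [
    {"name": "frontend", "driver": "bridge"},
    {"name": "backend", "driver": "bridge"},
]
refs = {"nginx": ["frontend"], "api": ["frontend", "backend"], "worker": ["jobs"]}
print(undeclared_networks(declared, refs))  # flags "jobs" for the worker service
```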
+ +### Basic Network Declaration + +```toml +[[network]] +name = "app-network" +driver = "bridge" +``` + +### Complete Network Configuration + +```toml +[[network]] +name = "app-network" +driver = "bridge" +external = false +internal = false +attachable = true + +[network.labels] +app = "myapp" +environment = "production" +``` + +## Configuration Fields + +### `name` (required) + +The name of the network. This is used to reference the network in service configurations. + +**Example:** +```toml +[[network]] +name = "backend-network" +``` + +### `driver` (optional) + +The network driver to use. + +**Default:** `bridge` + +**Valid values:** +- `bridge` - Standard bridge network (default) +- `host` - Use the host's networking directly +- `overlay` - Multi-host network for Swarm +- `macvlan` - Assign a MAC address to containers +- `none` - Disable networking + +**Example:** +```toml +[[network]] +name = "my-network" +driver = "overlay" +``` + +### `external` (optional) + +If `true`, Dispenser will not create the network but expects it to already exist. This is useful for networks created outside of Dispenser. + +**Default:** `false` + +**Example:** +```toml +[[network]] +name = "existing-network" +external = true +``` + +### `internal` (optional) + +If `true`, restricts external access to the network. Containers on the network can communicate with each other but cannot access external networks or the internet. + +**Default:** `false` + +**Example:** +```toml +[[network]] +name = "isolated-backend" +driver = "bridge" +internal = true +``` + +### `attachable` (optional) + +If `true`, allows standalone containers to attach to the network. This is particularly useful for overlay networks in Swarm mode. + +**Default:** `true` + +**Example:** +```toml +[[network]] +name = "swarm-network" +driver = "overlay" +attachable = true +``` + +### `labels` (optional) + +Key-value pairs to add as metadata labels to the network. These labels can be used for organization and filtering. 
+ +**Default:** Empty + +**Example:** +```toml +[[network]] +name = "app-network" + +[network.labels] +app = "myapp" +environment = "production" +team = "backend" +version = "2.0" +``` + +## Using Networks in Services + +After declaring networks in `dispenser.toml`, reference them in your service configurations. + +### Single Network + +```toml +# my-app/service.toml +[service] +name = "my-app" +image = "ghcr.io/my-org/my-app:latest" + +[[network]] +name = "app-network" + +[dispenser] +watch = true +``` + +### Multiple Networks + +A service can connect to multiple networks: + +```toml +# api/service.toml +[service] +name = "api" +image = "my-api:latest" + +[[network]] +name = "frontend-network" + +[[network]] +name = "backend-network" + +[dispenser] +watch = true +``` + +## Complete Example + +### dispenser.toml + +```toml +delay = 60 + +# Public-facing network +[[network]] +name = "frontend" +driver = "bridge" + +[network.labels] +tier = "frontend" + +# Internal backend network +[[network]] +name = "backend" +driver = "bridge" +internal = true + +[network.labels] +tier = "backend" + +# Database network (isolated) +[[network]] +name = "database" +driver = "bridge" +internal = true + +[network.labels] +tier = "database" + +[[service]] +path = "nginx" + +[[service]] +path = "api" + +[[service]] +path = "worker" + +[[service]] +path = "postgres" +``` + +### nginx/service.toml + +```toml +[service] +name = "nginx" +image = "nginx:latest" + +[[port]] +host = 80 +container = 80 + +[[port]] +host = 443 +container = 443 + +# Only connected to frontend network +[[network]] +name = "frontend" + +[dispenser] +watch = true +initialize = "immediately" +``` + +### api/service.toml + +```toml +[service] +name = "api" +image = "my-api:latest" + +# Connected to both frontend and backend +[[network]] +name = "frontend" + +[[network]] +name = "backend" + +[env] +DATABASE_URL = "postgres://postgres:5432/mydb" + +[dispenser] +watch = true +initialize = "immediately" +``` + +### 
worker/service.toml + +```toml +[service] +name = "worker" +image = "my-worker:latest" + +# Connected to backend and database +[[network]] +name = "backend" + +[[network]] +name = "database" + +[env] +DATABASE_URL = "postgres://postgres:5432/mydb" + +restart = "always" + +[dispenser] +watch = true +initialize = "immediately" +``` + +### postgres/service.toml + +```toml +[service] +name = "postgres" +image = "postgres:15" + +# Only connected to database network (most isolated) +[[network]] +name = "database" + +[env] +POSTGRES_PASSWORD = "secretpassword" +POSTGRES_DB = "mydb" + +[[volume]] +source = "./data" +target = "/var/lib/postgresql/data" + +restart = "unless-stopped" + +[dispenser] +watch = false +initialize = "immediately" +``` + +## Network Communication + +### Service Discovery + +Services on the same network can communicate using their service names as hostnames. Docker provides built-in DNS resolution. + +**Example:** +```toml +# api/service.toml +[service] +name = "api" +image = "my-api:latest" + +[[network]] +name = "app-network" + +[env] +# Reference the database by service name +DATABASE_URL = "postgres://postgres:5432/mydb" +``` + +```toml +# postgres/service.toml +[service] +name = "postgres" # This becomes the hostname +image = "postgres:15" + +[[network]] +name = "app-network" + +[env] +POSTGRES_DB = "mydb" +``` + +### Network Isolation + +Use internal networks to isolate sensitive services: + +```toml +# dispenser.toml +[[network]] +name = "public" +driver = "bridge" +internal = false # Can access internet + +[[network]] +name = "private" +driver = "bridge" +internal = true # Cannot access internet +``` + +## External Networks + +To use a network created outside of Dispenser (e.g., manually created with `docker network create`): + +```toml +[[network]] +name = "existing-network" +external = true +``` + +When `external = true`, Dispenser will not attempt to create or delete the network. 
It must already exist before starting services that reference it.
+
+## Troubleshooting
+
+### Network Already Exists
+
+If you see an error that a network already exists, either:
+1. Mark it as `external = true` in your configuration
+2. Remove the existing network with `docker network rm <network-name>`
+
+### Services Cannot Communicate
+
+Ensure that:
+1. Both services are connected to the same network
+2. You're using the correct service name as the hostname
+3. The network is not marked as `internal` if internet access is needed
+4. Firewall rules are not blocking traffic
+
+### Viewing Networks
+
+```sh
+# List all networks
+docker network ls
+
+# Inspect a specific network
+docker network inspect app-network
+
+# See which containers are connected
+docker network inspect app-network --format '{{range .Containers}}{{.Name}} {{end}}'
+```
+
+## Best Practices
+
+1. **Use multiple networks** for security isolation (frontend, backend, database tiers)
+2. **Mark sensitive networks as internal** to prevent external access
+3. **Use descriptive network names** that indicate their purpose
+4. **Add labels** to networks for better organization and documentation
+5. **Use the bridge driver** for single-host deployments (most common)
+6. **Test connectivity** between services after configuration changes
diff --git a/README.md b/README.md
index 30d6dfd..56b566d 100644
--- a/README.md
+++ b/README.md
@@ -1,19 +1,22 @@
 # Dispenser
 
-This tool manages applications defined in Docker Compose by continuously
-monitoring your artifact registry for new versions of Docker images. When
-updates are detected, dispenser automatically deploys the new versions of your
-services with zero downtime, updating the running containers on the host
-machine.
+This tool manages containerized applications by continuously monitoring your artifact registry for new versions of Docker images. When updates are detected, dispenser automatically deploys the new versions of your services with zero downtime, updating the running containers on the host machine.
 
-dispenser operates as a daemon that runs in the background on the host server
-that watches your artifact registry, detecting when new versions of your
-container images are published.
+dispenser operates as a background daemon on the host server, watching your artifact registry and detecting when new versions of your container images are published.
+
+## Documentation
+
+- **[CLI Reference](CLI.md)** - Complete command-line options and usage
+- **[Service Configuration](SERVICE_CONFIG.md)** - Detailed `service.toml` reference
+- **[Network Configuration](NETWORKS.md)** - Docker network setup guide
+- **[Cron Scheduling](CRON.md)** - Scheduled deployments
+- **[GCP Secrets](GCP.md)** - Google Secret Manager integration
+- **[Migration Guide](MIGRATION_GUIDE.md)** - Migrating from Docker Compose
 
 ## Prerequisites
 
 Before installing Dispenser, ensure the following are installed on your server:
 
-- **Docker Engine and Docker Compose**: Dispenser orchestrates Docker Compose deployments.
+- **Docker Engine**: Dispenser orchestrates Docker container deployments.
- [Install Docker Engine on Debian](https://docs.docker.com/engine/install/debian/) - [Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/) - [Install Docker Engine on RHEL/CentOS](https://docs.docker.com/engine/install/rhel/) @@ -27,9 +30,9 @@ Download the latest `.deb` or `.rpm` package from the [releases page](https://gi ```sh # Download the .deb package -# wget https://github.com/ixpantia/dispenser/releases/download/v0.6.0/dispenser-0.6-0.x86_64.deb +# wget https://github.com/ixpantia/dispenser/releases/download/v0.7.0/dispenser-0.7-0.x86_64.deb -sudo apt install ./dispenser-0.6-0.x86_64.deb +sudo apt install ./dispenser-0.7-0.x86_64.deb ``` ### RHEL / CentOS / Fedora @@ -38,7 +41,7 @@ sudo apt install ./dispenser-0.6-0.x86_64.deb # Download the .rpm package # wget ... -sudo dnf install ./dispenser-0.6-0.x86_64.rpm +sudo dnf install ./dispenser-0.7-0.x86_64.rpm ``` The installation process will: @@ -77,7 +80,7 @@ Docker will securely store the credentials in the `dispenser` user's home direct ### Step 3: Prepare Your Application Directory -Dispenser deploys applications based on a `docker-compose.yaml` file. +Dispenser deploys applications based on a `service.toml` file. 1. Create a directory for your application inside `/opt/dispenser`. Let's call it `my-app`. @@ -86,36 +89,45 @@ Dispenser deploys applications based on a `docker-compose.yaml` file. cd my-app ``` -2. Create a `docker-compose.yaml` file that defines your service. +2. Create a `service.toml` file that defines your service. ```sh - vim docker-compose.yaml + vim service.toml ``` - Paste your service definition. Note that the image points to the `:latest` tag, which Dispenser will monitor. - - ```yaml - services: - my-app: - image: ghcr.io/my-org/my-app:latest - ports: - - "8080:80" - env_file: .env - ``` + Paste your service definition. Here's a basic example: -3. (Optional) Create an `.env` file for your application's environment variables. 
- - ```sh - vim .env - ``` - ``` - DATABASE_URL=postgres://user:password@host:port/db - API_KEY=your_secret_api_key + ```toml + # Service metadata (required) + [service] + name = "my-app" + image = "ghcr.io/my-org/my-app:latest" + + # Port mappings (optional) + [[port]] + host = 8080 + container = 80 + + # Environment variables (optional) + [env] + DATABASE_URL = "postgres://user:password@host:port/db" + API_KEY = "your_secret_api_key" + + # Restart policy (optional, defaults to "no") + restart = "always" + + # Dispenser-specific configuration (required) + [dispenser] + # Watch for image updates + watch = true + + # Initialize immediately on startup + initialize = "immediately" ``` -### Step 4: Configure Dispenser to Watch Your Image +### Step 4: Configure Dispenser to Monitor Your Service -Now, tell Dispenser to monitor your image for updates. +Now, tell Dispenser about your service so it can monitor it for updates. 1. Return to the `dispenser` home directory and edit the configuration file. @@ -124,23 +136,22 @@ Now, tell Dispenser to monitor your image for updates. vim dispenser.toml ``` -2. Add an `[[instance]]` block to the file. This tells Dispenser where your application is and which image to watch. +2. Add a `[[service]]` block to the file. This tells Dispenser where your application is located. ```toml # How often to check for new images, in seconds. delay = 60 - [[instance]] + [[service]] # Path is relative to /opt/dispenser path = "my-app" - images = [{ registry = "ghcr.io", name = "my-org/my-app", tag = "latest" }] ``` Dispenser also supports scheduled deployments using `cron` expressions. For more details on configuring periodic restarts, see the [cron documentation](CRON.md). ### Step 5: Service Initialization (Optional) -By default, Dispenser starts services as soon as the application launches. However, you can control this behavior using the `initialize` option in your `dispenser.toml` file. 
This is particularly useful for services that should only run on a specific schedule. +By default, Dispenser starts services as soon as the application launches. However, you can control this behavior using the `initialize` option in your service's `service.toml` file. This is particularly useful for services that should only run on a specific schedule. The `initialize` option can be set to one of two values: @@ -149,34 +160,52 @@ The `initialize` option can be set to one of two values: #### Example: Immediate Initialization -This is the default behavior. The following configuration will start the `my-app` service immediately. +This is the default behavior. The following configuration will start the service immediately. ```toml -[[instance]] -path = "my-app" -images = [{ registry = "ghcr.io", name = "my-org/my-app", tag = "latest" }] -# initialize = "immediately" # This line is optional +# my-app/service.toml +[service] +name = "my-app" +image = "ghcr.io/my-org/my-app:latest" + +[[port]] +host = 8080 +container = 80 + +[dispenser] +watch = true +initialize = "immediately" # This is the default ``` #### Example: Initialization on Trigger -This configuration is useful for scheduled tasks. The `backup-service` will not start immediately. Instead, it will be triggered to run based on the cron schedule. +This configuration is useful for scheduled tasks. The service will not start immediately. Instead, it will be triggered to run based on the cron schedule. ```toml -[[instance]] -path = "backup-service" -cron = "0 3 * * *" # Run every day at 3 AM +# backup-service/service.toml +[service] +name = "backup-job" +image = "ghcr.io/my-org/backup:latest" + +[[volume]] +source = "./backups" +target = "/backups" + +[dispenser] +watch = false initialize = "on-trigger" +cron = "0 3 * * *" # Run every day at 3 AM ``` + In this example, the service defined in the `backup-service` directory will only be started when the cron schedule is met. 
After its first run, it will continue to be managed by its cron schedule. ### Step 6: Using Variables (Optional) -Dispenser supports using variables in your configuration file via `dispenser.vars` or any file ending in `.dispenser.vars`. These files allow you to define values that can be reused inside `dispenser.toml` using `${VARIABLE}` syntax. +Dispenser supports using variables in your configuration files via `dispenser.vars` or any file ending in `.dispenser.vars`. These files allow you to define values that can be reused inside `dispenser.toml` and `service.toml` files using `${VARIABLE}` syntax. -**Note:** While Dispenser uses the `${}` syntax similar to Docker Compose, it does not support all [Docker Compose interpolation features](https://docs.docker.com/compose/how-tos/environment-variables/variable-interpolation/) (such as default values `:-` or error messages `:?`) within `dispenser.toml`. +**Note:** While Dispenser uses the `${}` syntax similar to Docker Compose, it does not support all [Docker Compose interpolation features](https://docs.docker.com/compose/how-tos/environment-variables/variable-interpolation/) (such as default values `:-` or error messages `:?`). -However, variables defined in these variable files are passed as environment variables to the underlying `docker compose` commands. This allows you to use them in your `docker-compose.yaml` files, where full Docker Compose interpolation is supported. +Variables defined in these files are substituted directly into your configuration files during loading. This is useful for reusing the same configuration in multiple deployments. @@ -191,6 +220,7 @@ This is useful for reusing the same configuration in multiple deployments. ```toml registry_url = "ghcr.io" app_version = "latest" + org_name = "my-org" ``` Dispenser also supports fetching secrets from Google Secret Manager. For more details on configuring secrets, see the [GCP secrets documentation](GCP.md). 
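+
+   Since any file ending in `.dispenser.vars` is also loaded, you can keep some values in a separate file if that helps organize a deployment — for example (both the filename and the variable below are illustrative):
+
+   ```toml
+   # secrets.dispenser.vars
+   db_password = "example-password"
+   ```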
@@ -198,23 +228,138 @@ This is useful for reusing the same configuration in multiple deployments. 3. Use these variables in your `dispenser.toml`. ```toml - [[instance]] + delay = 60 + + [[service]] path = "my-app" - images = [{ registry = "${registry_url}", name = "my-org/my-app", tag = "${app_version}" }] ``` -4. Use these variables in your `docker-compose.yaml`. +4. Use these variables in your `service.toml`. + + ```toml + [service] + name = "my-app" + image = "${registry_url}/${org_name}/my-app:${app_version}" + + [[port]] + host = 8080 + container = 80 + + [dispenser] + watch = true + initialize = "immediately" + ``` + +### Step 7: Working with Networks (Optional) + +Dispenser supports Docker networks to enable communication between services. Networks are declared in `dispenser.toml` and referenced in individual service configurations. + +1. Declare networks in your `dispenser.toml`. + + ```toml + delay = 60 + + # Network declarations + [[network]] + name = "app-network" + driver = "bridge" + + [[service]] + path = "my-app" + + [[service]] + path = "my-database" + ``` + +2. Reference networks in your service configurations. + + ```toml + # my-app/service.toml + [service] + name = "my-app" + image = "ghcr.io/my-org/my-app:latest" + + [[port]] + host = 8080 + container = 80 + + [[network]] + name = "app-network" + + [dispenser] + watch = true + initialize = "immediately" + ``` - ```yaml - services: - my-app: - # You can use the variables defined in dispenser.vars here - image: ${registry_url}/my-org/my-app:${app_version} - ports: - - "8080:80" + ```toml + # my-database/service.toml + [service] + name = "postgres-db" + image = "postgres:15" + + [env] + POSTGRES_PASSWORD = "secretpassword" + + [[network]] + name = "app-network" + + [dispenser] + watch = false + initialize = "immediately" ``` -### Step 7: Validating Configuration +Now both services can communicate with each other using their service names as hostnames. 
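+
+For instance, `my-app` can point at the database by using the `postgres-db` service name as the host (an illustrative excerpt; the database name is a placeholder):
+
+```toml
+# my-app/service.toml (excerpt)
+[env]
+# "postgres-db" resolves to the database container on app-network
+DATABASE_URL = "postgres://postgres-db:5432/mydb"
+```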
+ +For advanced network configuration including external networks, internal networks, labels, and different drivers, see the [Network Configuration Guide](NETWORKS.md). + +### Step 8: Advanced Service Configuration + +The `service.toml` format supports many advanced features. For a complete reference of all available configuration options, see the [Service Configuration Reference](SERVICE_CONFIG.md). + +#### Volume Mounts + +```toml +[[volume]] +source = "./data" +target = "/app/data" + +[[volume]] +source = "./config" +target = "/app/config" +readonly = true +``` + +#### Custom Commands and Working Directory + +```toml +[service] +name = "worker" +image = "python:3.11" +command = ["python", "worker.py", "--verbose"] +working_dir = "/app" +``` + +#### Resource Limits + +```toml +[service] +name = "my-app" +image = "my-app:latest" +memory = "512m" +cpus = "1.0" +``` + +#### User and Hostname + +```toml +[service] +name = "my-app" +image = "my-app:latest" +user = "1000:1000" +hostname = "myapp-container" +``` + +### Step 9: Validating Configuration Before applying changes, you can validate your configuration files (including variable substitution) to ensure there are no syntax errors or missing variables. @@ -229,21 +374,23 @@ If the configuration is valid, it will output: Dispenser config is ok. ``` -If there's an error `dispenser` will show you a detailed error message. +For more command-line options, see the [CLI Reference](CLI.md). + +If there's an error, `dispenser` will show you a detailed error message. 
``` ---------------------------------- ----------------------------------- 2 | - 3 | [[instance]] - 4 | path = "nginx" - 5 > images = [{ registry = "${missing}", name = "nginx", tag = "latest" }] - i ^^^^^^^^^^ undefined value + 3 | [service] + 4 | name = "nginx" + 5 > image = "${missing}/nginx:latest" + i ^^^^^^^^^^ undefined value ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ No referenced variables ------------------------------------------------------------------------------- ``` -### Step 8: Start and Verify the Deployment +### Step 10: Start and Verify the Deployment 1. Exit the `dispenser` user session to return to your regular user. ```sh @@ -267,7 +414,7 @@ No referenced variables ``` You should see a container running with the `ghcr.io/my-org/my-app:latest` image. -From now on, whenever you push a new image to your registry with the `latest` tag, Dispenser will automatically detect it, pull the new version, and redeploy your service with zero downtime. +From now on, whenever you push a new image to your registry with the `latest` tag (and `watch = true` is set in the service configuration), Dispenser will automatically detect it, pull the new version, and redeploy your service with zero downtime. ### Managing the Service with CLI Signals @@ -275,6 +422,8 @@ Dispenser includes a built-in mechanism to send signals to the running daemon us **Note:** This command relies on the `dispenser.pid` file, so you should run it from the same directory where Dispenser is running (typically `/opt/dispenser` for the default installation). +For complete CLI documentation including all available flags, see the [CLI Reference](CLI.md). 
+ **Reload Configuration:** To reload the `dispenser.toml` configuration without restarting the process: @@ -283,7 +432,7 @@ To reload the `dispenser.toml` configuration without restarting the process: dispenser -s reload ``` -This is useful for adding new instances or changing configuration parameters without interrupting currently monitored services. +This is useful for adding new services or changing configuration parameters without interrupting currently monitored services. **Stop Service:** @@ -293,6 +442,15 @@ To gracefully stop the Dispenser daemon: dispenser -s stop ``` +## Additional Resources + +- **[CLI Reference](CLI.md)** - All command-line flags and options +- **[Service Configuration Reference](SERVICE_CONFIG.md)** - Complete field documentation +- **[Network Configuration Guide](NETWORKS.md)** - Advanced networking setup +- **[Cron Documentation](CRON.md)** - Scheduled deployments +- **[GCP Secrets Integration](GCP.md)** - Using Google Secret Manager +- **[Migration Guide](MIGRATION_GUIDE.md)** - Migrating from Docker Compose format + ## Building from Source ### RPM (RHEL) diff --git a/SERVICE_CONFIG.md b/SERVICE_CONFIG.md new file mode 100644 index 0000000..c360f41 --- /dev/null +++ b/SERVICE_CONFIG.md @@ -0,0 +1,591 @@ +# Service Configuration Reference + +This document describes all available configuration options for service definitions in `service.toml` files. + +## Overview + +Each service in Dispenser is configured using a `service.toml` file located in its own directory. This file defines how the Docker container should be created, what resources it should use, and how Dispenser should manage it. 
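+
+As a quick orientation before the full reference, a minimal `service.toml` needs only the required pieces — the service's `name` and `image`, plus a `[dispenser]` section (the values here are placeholders):
+
+```toml
+[service]
+name = "my-app"
+image = "nginx:latest"
+
+[dispenser]
+watch = false
+```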
+ +## File Structure + +A `service.toml` file has the following main sections: + +```toml +[service] +# Core service configuration + +[[port]] +# Port mapping (can have multiple) + +[[volume]] +# Volume mount (can have multiple) + +[env] +# Environment variables + +[[network]] +# Network connection (can have multiple) + +restart = "policy" + +[dispenser] +# Dispenser-specific settings + +[depends_on] +# Service dependencies +``` + +## Service Section + +The `[service]` section defines the core container configuration. + +### `name` (required) + +The name of the container. Must be unique across all services. + +```toml +[service] +name = "my-app" +``` + +### `image` (required) + +The Docker image to use. Supports variable interpolation. + +```toml +[service] +image = "nginx:latest" + +# With registry and variables +image = "${registry_url}/my-org/my-app:${version}" +``` + +### `command` (optional) + +Override the default command. Can be a string or array of strings. + +```toml +[service] +name = "worker" +image = "python:3.11" + +# Array format (recommended) +command = ["python", "worker.py", "--verbose"] + +# String format +# command = "python worker.py --verbose" +``` + +### `entrypoint` (optional) + +Override the default entrypoint. Can be a string or array of strings. + +```toml +[service] +name = "custom-app" +image = "my-app:latest" + +# Array format (recommended) +entrypoint = ["/bin/sh", "-c"] +command = ["echo hello && sleep 10"] + +# String format +# entrypoint = "/bin/sh -c" +``` + +### `working_dir` (optional) + +Set the working directory inside the container. + +```toml +[service] +name = "app" +image = "node:18" +working_dir = "/app" +command = ["npm", "start"] +``` + +### `user` (optional) + +Run the container as a specific user. Can be a username, UID, or UID:GID. 
+ +```toml +[service] +name = "app" +image = "my-app:latest" + +# Run as specific UID +user = "1000" + +# Run as specific UID:GID +user = "1000:1000" + +# Run as named user +user = "appuser" +``` + +### `hostname` (optional) + +Set the container's hostname. + +```toml +[service] +name = "api" +image = "my-api:latest" +hostname = "api-server" +``` + +### `memory` (optional) + +Set a memory limit for the container. Supports suffixes: `b`, `k`/`kb`, `m`/`mb`, `g`/`gb`. + +```toml +[service] +name = "app" +image = "my-app:latest" + +# 512 megabytes +memory = "512m" + +# 2 gigabytes +memory = "2g" + +# 256 megabytes +memory = "256mb" +``` + +### `cpus` (optional) + +Set CPU limit for the container. Decimal values allowed. + +```toml +[service] +name = "app" +image = "my-app:latest" + +# Half a CPU +cpus = "0.5" + +# Two CPUs +cpus = "2" + +# One and a half CPUs +cpus = "1.5" +``` + +## Port Mappings + +Map ports from the host to the container. Use `[[port]]` for each mapping. + +```toml +[[port]] +host = 8080 +container = 80 + +[[port]] +host = 8443 +container = 443 +``` + +### `host` (required) + +The port on the host machine. + +### `container` (required) + +The port inside the container. + +## Volume Mounts + +Mount directories or files into the container. Use `[[volume]]` for each mount. + +```toml +[[volume]] +source = "./data" +target = "/app/data" + +[[volume]] +source = "./config" +target = "/app/config" +readonly = true +``` + +### `source` (required) + +The source path on the host. Can be: +- Relative path (relative to the service directory) +- Absolute path +- Named volume + +### `target` (required) + +The target path inside the container. Must be an absolute path. + +### `readonly` (optional) + +If `true`, the volume is mounted as read-only. + +**Default:** `false` + +```toml +[[volume]] +source = "./config" +target = "/etc/app/config" +readonly = true +``` + +## Environment Variables + +Define environment variables for the container. 
+ +```toml +[env] +NODE_ENV = "production" +DATABASE_URL = "postgres://user:pass@host:5432/db" +API_KEY = "${api_key}" +LOG_LEVEL = "info" +``` + +Variables support interpolation using `${variable}` syntax with values from `dispenser.vars` or `*.dispenser.vars` files. + +## Network Connections + +Connect the service to Docker networks. Networks must be declared in `dispenser.toml` first. + +```toml +[[network]] +name = "app-network" + +[[network]] +name = "database-network" +``` + +### `name` (required) + +The name of the network to connect to. Must match a network declared in `dispenser.toml`. + +## Restart Policy + +Define when Docker should restart the container. + +```toml +restart = "always" +``` + +**Valid values:** +- `no` or `never` - Never restart (default) +- `always` - Always restart if stopped +- `on-failure` - Restart only if container exits with non-zero status +- `unless-stopped` - Always restart unless explicitly stopped + +**Default:** `no` + +**Examples:** + +```toml +# Never restart +restart = "no" + +# Always restart (for long-running services) +restart = "always" + +# Restart on failure (for critical services) +restart = "on-failure" + +# Restart unless stopped (for persistent services) +restart = "unless-stopped" +``` + +## Dispenser Section + +The `[dispenser]` section controls how Dispenser manages the service. + +### `watch` (required) + +Whether to watch the image registry for updates. When `true`, Dispenser will poll the registry and automatically redeploy when a new version is detected. + +```toml +[dispenser] +watch = true +``` + +### `initialize` (optional) + +Controls when the service should be started. 
+ +**Valid values:** +- `immediately` - Start as soon as Dispenser starts (default) +- `on-trigger` - Start only when triggered (by cron or image update) + +**Default:** `immediately` + +```toml +[dispenser] +watch = true +initialize = "immediately" +``` + +```toml +# For scheduled tasks +[dispenser] +watch = false +initialize = "on-trigger" +cron = "0 3 * * *" +``` + +### `cron` (optional) + +A cron expression for scheduled deployments. When specified, the service will be redeployed according to the schedule. + +```toml +[dispenser] +watch = false +initialize = "on-trigger" +cron = "0 3 * * *" # Every day at 3 AM +``` + +See [CRON.md](CRON.md) for more details on cron scheduling. + +## Service Dependencies + +The `[depends_on]` section defines dependencies between services. + +```toml +[depends_on] +postgres = "service-started" +redis = "service-started" +migration = "service-completed" +``` + +**Valid conditions:** +- `service-started` or `started` - Wait for service to start +- `service-completed` or `completed` - Wait for service to complete + +## Complete Examples + +### Basic Web Application + +```toml +[service] +name = "nginx" +image = "nginx:latest" + +[[port]] +host = 80 +container = 80 + +[[port]] +host = 443 +container = 443 + +[[volume]] +source = "./html" +target = "/usr/share/nginx/html" +readonly = true + +[[volume]] +source = "./nginx.conf" +target = "/etc/nginx/nginx.conf" +readonly = true + +[[network]] +name = "web" + +restart = "unless-stopped" + +[dispenser] +watch = true +initialize = "immediately" +``` + +### API Service with Database + +```toml +[service] +name = "api" +image = "ghcr.io/my-org/api:latest" +memory = "1g" +cpus = "1.0" + +[[port]] +host = 3000 +container = 3000 + +[env] +NODE_ENV = "production" +DATABASE_URL = "postgres://postgres:5432/mydb" +REDIS_URL = "redis://redis:6379" +LOG_LEVEL = "info" + +[[network]] +name = "frontend" + +[[network]] +name = "backend" + +restart = "always" + +[dispenser] +watch = true +initialize = 
"immediately" + +[depends_on] +postgres = "service-started" +redis = "service-started" +``` + +### Background Worker + +```toml +[service] +name = "worker" +image = "python:3.11" +command = ["python", "worker.py"] +working_dir = "/app" +user = "1000:1000" +memory = "512m" +cpus = "0.5" + +[[volume]] +source = "./src" +target = "/app" + +[[volume]] +source = "./logs" +target = "/app/logs" + +[env] +PYTHONUNBUFFERED = "1" +DATABASE_URL = "postgres://postgres:5432/mydb" +QUEUE_URL = "redis://redis:6379" + +[[network]] +name = "backend" + +restart = "always" + +[dispenser] +watch = true +initialize = "immediately" + +[depends_on] +postgres = "service-started" +redis = "service-started" +``` + +### Scheduled Backup Job + +```toml +[service] +name = "backup-job" +image = "my-backup:latest" +command = ["/backup.sh"] +working_dir = "/backups" + +[[volume]] +source = "./backups" +target = "/backups" + +[[volume]] +source = "/var/lib/docker/volumes" +target = "/source" +readonly = true + +[env] +BACKUP_RETENTION_DAYS = "30" +BACKUP_DESTINATION = "s3://my-bucket/backups" + +restart = "no" + +[dispenser] +watch = false +initialize = "on-trigger" +cron = "0 2 * * *" # Every day at 2 AM +``` + +### Database Service + +```toml +[service] +name = "postgres" +image = "postgres:15" +hostname = "postgres-db" +memory = "2g" + +[env] +POSTGRES_PASSWORD = "secretpassword" +POSTGRES_USER = "myapp" +POSTGRES_DB = "myapp" +PGDATA = "/var/lib/postgresql/data/pgdata" + +[[volume]] +source = "./data" +target = "/var/lib/postgresql/data" + +[[network]] +name = "database" + +restart = "unless-stopped" + +[dispenser] +watch = false +initialize = "immediately" +``` + +### Custom Entrypoint Example + +```toml +[service] +name = "init-service" +image = "alpine:latest" +entrypoint = ["/bin/sh", "-c"] +command = ["apk add --no-cache curl && curl https://example.com/setup.sh | sh"] +working_dir = "/workspace" + +[[volume]] +source = "./workspace" +target = "/workspace" + +restart = "no" + +[dispenser] 
+watch = false
+initialize = "immediately"
+```
+
+## Validation
+
+Before applying your configuration, validate it with:
+
+```sh
+dispenser --test
+```
+
+This will check for:
+- Syntax errors
+- Missing required fields
+- Undefined variables
+- Network references to non-existent networks
+
+## Best Practices
+
+1. **Use meaningful service names** that describe the service's purpose
+2. **Pin image versions** in production instead of using `latest`
+3. **Set resource limits** (`memory`, `cpus`) to prevent resource exhaustion
+4. **Use readonly volumes** for configuration files
+5. **Use restart policies** appropriate for the service type:
+   - `always` for critical services
+   - `on-failure` for services that should recover from crashes
+   - `no` for one-time jobs
+6. **Use environment variables** for configuration instead of hardcoding
+7. **Connect to appropriate networks** based on security requirements
+8. **Define dependencies** when services rely on each other
+9. **Use `initialize = "on-trigger"`** for scheduled or batch jobs
+10. **Test configuration changes** with `dispenser --test` before deployment
+
+## See Also
+
+- [CLI Reference](CLI.md) - Command-line options
+- [Network Configuration](NETWORKS.md) - Detailed network setup
+- [CRON Documentation](CRON.md) - Scheduling reference
+- [Migration Guide](MIGRATION_GUIDE.md) - Migrating from Docker Compose
diff --git a/deb/DEBIAN/control b/deb/DEBIAN/control
index 2ed8d32..86027ef 100644
--- a/deb/DEBIAN/control
+++ b/deb/DEBIAN/control
@@ -1,5 +1,5 @@
 Package: dispenser
-Version: 0.6
+Version: 0.7
 Maintainer: ixpantia S.A.
 Architecture: amd64
 Description: Continuously Deploy services with Docker Compose
diff --git a/example/.gitignore b/example-new/.gitignore
similarity index 100%
rename from example/.gitignore
rename to example-new/.gitignore
diff --git a/example-new/dispenser.toml b/example-new/dispenser.toml
new file mode 100644
index 0000000..c9286e0
--- /dev/null
+++ b/example-new/dispenser.toml
@@ -0,0 +1,12 @@
+# Delay in seconds between polling for new images (default: 60)
+delay = 60
+
+[[network]]
+name = "dispenser-net"
+driver = "bridge"
+
+[[service]]
+path = "nginx"
+
+[[service]]
+path = "hello-world"
diff --git a/example-new/dispenser.vars b/example-new/dispenser.vars
new file mode 100644
index 0000000..7efb9ac
--- /dev/null
+++ b/example-new/dispenser.vars
@@ -0,0 +1,2 @@
+docker_io="docker.io"
+nginx_port="8080"
diff --git a/example-new/hello-world/service.toml b/example-new/hello-world/service.toml
new file mode 100644
index 0000000..c27e61f
--- /dev/null
+++ b/example-new/hello-world/service.toml
@@ -0,0 +1,26 @@
+# Service configuration for hello-world
+
+# Restart policy (optional, defaults to "No"; top-level keys must come before the first table)
+restart = "no"
+
+# Service metadata (required)
+[service]
+name = "hello-world-job"
+image = "hello-world"
+# Optional: Resource limits
+memory = "128m" # Memory limit (e.g., "128m", "256m", "1g")
+cpus = "0.5"    # CPU limit (e.g., "0.5", "1.0", "2.0")
+
+[[network]]
+name = "dispenser-net"
+
+# Dispenser-specific configuration (required)
+[dispenser]
+# Don't watch for image updates
+watch = false
+
+# Initialize only when triggered (by cron in this case)
+initialize = "on-trigger"
+
+# Run every 10 seconds
+cron = "*/10 * * * * *"
diff --git a/example-new/nginx/html/index.html b/example-new/nginx/html/index.html
new file mode 100644
index 0000000..2bf5624
--- /dev/null
+++ b/example-new/nginx/html/index.html
@@ -0,0 +1 @@
+

Welcome to Dispenser

diff --git a/example-new/nginx/service.toml b/example-new/nginx/service.toml
new file mode 100644
index 0000000..f0b761d
--- /dev/null
+++ b/example-new/nginx/service.toml
@@ -0,0 +1,33 @@
+# Service configuration for nginx
+
+# Restart policy (optional, defaults to "No"; top-level keys must come before the first table)
+restart = "always"
+
+# Service metadata (required)
+[service]
+name = "nginx-service"
+image = "${docker_io}/nginx:latest"
+# Optional: Resource limits
+memory = "256m" # Memory limit (e.g., "512m", "1g", "2g")
+cpus = "1.0"    # CPU limit (e.g., "0.5", "1.0", "2.0")
+
+# Port mappings (optional)
+[[port]]
+host = 8080
+container = 80
+
+[[volume]]
+source = "./html"
+target = "/usr/share/nginx/html"
+
+[[network]]
+name = "dispenser-net"
+
+# Dispenser-specific configuration (required)
+[dispenser]
+# Watch for image updates
+watch = true
+
+# Initialize immediately on startup (default behavior)
+initialize = "immediately"
diff --git a/example-old/.gitignore b/example-old/.gitignore
new file mode 100644
index 0000000..6702b47
--- /dev/null
+++ b/example-old/.gitignore
@@ -0,0 +1 @@
+dispenser.pid
diff --git a/example/dispenser.toml b/example-old/dispenser.toml
similarity index 100%
rename from example/dispenser.toml
rename to example-old/dispenser.toml
diff --git a/example/dispenser.vars b/example-old/dispenser.vars
similarity index 100%
rename from example/dispenser.vars
rename to example-old/dispenser.vars
diff --git a/example/hello-world/docker-compose.yaml b/example-old/hello-world/docker-compose.yaml
similarity index 100%
rename from example/hello-world/docker-compose.yaml
rename to example-old/hello-world/docker-compose.yaml
diff --git a/example/nginx/docker-compose.yaml b/example-old/nginx/docker-compose.yaml
similarity index 100%
rename from example/nginx/docker-compose.yaml
rename to example-old/nginx/docker-compose.yaml
diff --git a/justfile b/justfile
index cd80ace..e697a45 100644
--- a/justfile
+++ b/justfile
@@ -1,6 +1,6 @@
 # justfile for dispenser project
 
-DISPENSER_VERSION := 
"0.6" +DISPENSER_VERSION := "0.7" TARGET_BIN := "target/x86_64-unknown-linux-musl/release/dispenser" USR_BIN_DEB := "deb/usr/local/bin/dispenser" USR_BIN_RPM := "rpm/usr/local/bin/dispenser" diff --git a/rpm/dispenser.spec b/rpm/dispenser.spec index 9217861..0bdc587 100644 --- a/rpm/dispenser.spec +++ b/rpm/dispenser.spec @@ -1,5 +1,5 @@ Name: dispenser -Version: 0.6 +Version: 0.7 Release: 0 Summary: Continously Deploy services with Docker Compose License: see /usr/share/doc/dispenser/copyright diff --git a/src/config.rs b/src/config.rs deleted file mode 100644 index 94fab8a..0000000 --- a/src/config.rs +++ /dev/null @@ -1,92 +0,0 @@ -use futures_util::future; -use serde::Serialize; - -use std::{collections::HashMap, num::NonZeroU64, path::PathBuf, sync::Arc}; -use tokio::sync::Mutex; - -use cron::Schedule; - -use crate::{ - instance::{Instance, Instances}, - manifests::DockerWatcher, -}; - -#[derive(Debug, Default, PartialEq, Eq)] -pub struct DispenserVars { - pub inner: Arc>, -} - -impl Clone for DispenserVars { - fn clone(&self) -> Self { - let inner = Arc::clone(&self.inner); - Self { inner } - } -} - -impl Serialize for DispenserVars { - fn serialize(&self, serializer: S) -> Result - where - S: serde::Serializer, - { - self.inner.serialize(serializer) - } -} - -pub struct ContposeConfig { - pub delay: NonZeroU64, - pub instances: Vec, -} - -impl ContposeConfig { - pub async fn get_instances(&self) -> Instances { - let inner_futures = self - .instances - .iter() - .cloned() - .map(|instance| async { Arc::new(Mutex::new(Instance::new(instance).await)) }); - - let inner = future::join_all(inner_futures).await; - - let delay = std::time::Duration::from_secs(self.delay.get()); - Instances { inner, delay } - } -} - -/// Defines when a service should be initialized. -#[derive(Debug, Clone, Copy, PartialEq, Eq)] -pub enum Initialize { - /// The service is started as soon as the application starts. 
- Immediately, - /// The service is started only when a trigger occurs (e.g., a cron schedule or a detected image update). - OnTrigger, -} - -#[derive(Clone)] -pub struct ContposeInstanceConfig { - pub path: PathBuf, - pub images: Vec, - pub cron: Option, - /// Defines when the service should be initialized. - /// - /// - `Immediately` (default): The service is started as soon as the application starts. - /// - `OnTrigger`: The service is started only when a trigger occurs (e.g., a cron schedule or a detected image update). - pub initialize: Initialize, - pub vars: DispenserVars, -} - -#[derive(Clone)] -pub(crate) struct Image { - pub(crate) registry: String, - pub(crate) name: String, - pub(crate) tag: String, -} - -impl ContposeInstanceConfig { - pub async fn get_watchers(&self) -> Vec { - let initialize_futures = self - .images - .iter() - .map(|image| DockerWatcher::initialize(&image.registry, &image.name, &image.tag)); - future::join_all(initialize_futures).await - } -} diff --git a/src/config_file.rs b/src/config_file.rs deleted file mode 100644 index 917aeea..0000000 --- a/src/config_file.rs +++ /dev/null @@ -1,413 +0,0 @@ -use minijinja::Environment; -use serde::{Deserialize, Serialize}; - -use std::{ - collections::HashMap, - num::NonZeroU64, - path::{Path, PathBuf}, - sync::Arc, -}; - -use cron::Schedule; - -use crate::secrets; - -fn default_gcp_secret_version() -> String { - "latest".to_string() -} - -#[derive(Debug, serde::Deserialize, Clone)] -#[serde(tag = "source", rename_all = "snake_case")] -enum Secret { - Google { - name: String, - #[serde(default = "default_gcp_secret_version")] - version: String, - }, -} - -#[derive(Debug, serde::Deserialize, Clone)] -#[serde(untagged)] -enum DispenserVarEntry { - Raw(String), - Secret(Secret), -} - -#[derive(Debug, Default, Clone)] -pub struct DispenserVars { - inner: HashMap, -} - -impl<'de> Deserialize<'de> for DispenserVars { - fn deserialize(deserializer: D) -> Result - where - D: serde::Deserializer<'de>, 
- { - let inner = HashMap::deserialize(deserializer)?; - Ok(Self { inner }) - } -} - -impl DispenserVars { - async fn materialize(self) -> DispenserVarsMaterialized { - let mut inner = HashMap::new(); - for (key, entry) in self.inner { - let value = match entry { - DispenserVarEntry::Raw(s) => s, - DispenserVarEntry::Secret(secret) => match secret { - Secret::Google { name, version } => { - secrets::gcp::fetch_secret(&name, &version).await - } - }, - }; - inner.insert(key, value); - } - DispenserVarsMaterialized { inner } - } -} - -#[derive(Debug, Default, Clone)] -struct DispenserVarsMaterialized { - inner: HashMap, -} - -impl DispenserVarsMaterialized { - async fn try_init() -> Result { - let vars_raw = DispenserVars::try_init()?; - Ok(vars_raw.materialize().await) - } -} - -impl Serialize for DispenserVarsMaterialized { - fn serialize(&self, serializer: S) -> Result - where - S: serde::Serializer, - { - self.inner.serialize(serializer) - } -} - -/// Files that match dispenser.vars | *.dispenser.vars -/// Sorted -fn list_vars_files() -> Vec { - let mut files = Vec::new(); - let cli_args = crate::cli::get_cli_args(); - - let search_dir = cli_args.config.parent().map_or(Path::new("."), |p| { - if p.as_os_str().is_empty() { - Path::new(".") - } else { - p - } - }); - if let Ok(entries) = std::fs::read_dir(search_dir) { - for entry in entries.filter_map(|e| e.ok()) { - let path = entry.path(); - if path.is_file() { - if let Some(file_name) = path.file_name().and_then(|s| s.to_str()) { - if file_name == "dispenser.vars" || file_name.ends_with(".dispenser.vars") { - files.push(path); - } - } - } - } - } - - files.sort(); // Sort the paths alphabetically - files -} - -impl DispenserVars { - fn try_init_from_string(val: &str) -> Result { - Ok(toml::from_str(val)?) 
- } - fn combine(vars: Vec) -> Self { - let mut combined_inner = HashMap::new(); - vars.into_iter().for_each(|var_set| { - combined_inner.extend(var_set.inner); - }); - Self { - inner: combined_inner, - } - } - fn try_init() -> Result { - use std::io::Read; - let mut vars = Vec::new(); - let vars_files = list_vars_files(); - for vars_file in vars_files { - match std::fs::File::open(vars_file) { - Ok(mut file) => { - let mut this_vars = String::new(); - file.read_to_string(&mut this_vars)?; - match Self::try_init_from_string(&this_vars) { - Ok(this_vars) => vars.push(this_vars), - Err(e) => log::error!("Error parsing vars file: {e}"), - } - } - Err(e) => log::error!("Error reading vars file: {e}"), - } - } - - Ok(Self::combine(vars)) - } -} - -#[derive(Debug, serde::Deserialize)] -pub struct DispenserConfigFileSerde { - pub delay: NonZeroU64, - #[serde(default)] - pub instance: Vec, -} - -#[derive(Debug)] -pub struct DispenserConfigFile { - delay: NonZeroU64, - instance: Vec, - vars: DispenserVarsMaterialized, -} - -#[derive(Debug, thiserror::Error)] -pub enum DispenserConfigError { - #[error("IO error: {0}")] - Io(#[from] std::io::Error), - #[error("TOML error: {0}")] - Toml(#[from] toml::de::Error), - #[error("Templating error: {0:?}")] - Template(#[from] minijinja::Error), -} - -impl DispenserConfigFile { - fn try_init_from_string( - mut config: String, - vars: DispenserVarsMaterialized, - ) -> Result { - let mut env = Environment::new(); - - let syntax = minijinja::syntax::SyntaxConfig::builder() - .variable_delimiters("${", "}") - .build() - .expect("This really should not fail. 
If this fail something has gone horribly wrong."); - - env.set_syntax(syntax); - - env.set_undefined_behavior(minijinja::UndefinedBehavior::Strict); - let template = env.template_from_str(&config)?; - config = template.render(&vars)?; - - let config_toml: DispenserConfigFileSerde = toml::from_str(&config)?; - - Ok(DispenserConfigFile { - delay: config_toml.delay, - instance: config_toml.instance, - vars, - }) - } - pub async fn try_init() -> Result { - use std::io::Read; - let mut config = String::new(); - std::fs::File::open(&crate::cli::get_cli_args().config)?.read_to_string(&mut config)?; - // Use handle vars to replace strings with handlevars - let vars = DispenserVarsMaterialized::try_init().await?; - - Self::try_init_from_string(config, vars) - } -} - -/// Defines when a service should be initialized. -#[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Deserialize, Default)] -pub enum Initialize { - /// The service is started as soon as the application starts. - #[serde(alias = "immediately", alias = "Immediately")] - #[default] - Immediately, - /// The service is started only when a trigger occurs (e.g., a cron schedule or a detected image update). - #[serde( - alias = "on-trigger", - alias = "OnTrigger", - alias = "on_trigger", - alias = "on trigger" - )] - OnTrigger, -} - -impl From for crate::config::Initialize { - fn from(value: Initialize) -> Self { - match value { - Initialize::Immediately => crate::config::Initialize::Immediately, - Initialize::OnTrigger => crate::config::Initialize::OnTrigger, - } - } -} - -#[derive(Debug, serde::Deserialize, Clone)] -pub struct DispenserInstanceConfigEntry { - pub path: PathBuf, - #[serde(default)] - images: Vec, - #[serde(default)] - pub cron: Option, - /// Defines when the service should be initialized. - /// - /// - `Immediately` (default): The service is started as soon as the application starts. 
- /// - `OnTrigger`: The service is started only when a trigger occurs (e.g., a cron schedule or a detected image update). - #[serde(default)] - pub initialize: Initialize, -} - -#[derive(Debug, serde::Deserialize, Clone)] -struct Image { - registry: String, - name: String, - tag: String, -} - -impl DispenserConfigFile { - pub async fn into_config(self) -> crate::config::ContposeConfig { - let vars = crate::config::DispenserVars { - inner: Arc::new(self.vars.inner), - }; - let instances = self - .instance - .into_iter() - .map(|instance| crate::config::ContposeInstanceConfig { - path: instance.path, - images: instance - .images - .into_iter() - .map(|image| crate::config::Image { - registry: image.registry, - name: image.name, - tag: image.tag, - }) - .collect(), - cron: instance.cron, - initialize: instance.initialize.into(), - vars: vars.clone(), - }) - .collect(); - - crate::config::ContposeConfig { - delay: self.delay, - instances, - } - } -} - -#[cfg(test)] -mod tests { - use super::*; - use std::collections::HashMap; - - #[test] - fn test_vars_parsing() { - let input = r#" - var1 = "value1" - var2 = "value2" - "#; - let vars = DispenserVars::try_init_from_string(input).expect("Failed to parse vars"); - let get_val = |k| match vars.inner.get(k) { - Some(DispenserVarEntry::Raw(s)) => Some(s.as_str()), - _ => None, - }; - assert_eq!(get_val("var1"), Some("value1")); - assert_eq!(get_val("var2"), Some("value2")); - } - - #[tokio::test] - async fn test_config_loading_with_templates() { - let vars_input = r#" - delay_ms = "500" - base_path = "/app" - img_version = "1.2.3" - "#; - let vars = DispenserVars::try_init_from_string(vars_input) - .unwrap() - .materialize() - .await; - - let config_input = r#" - delay = ${ delay_ms } - [[instance]] - path = "${ base_path }/service" - initialize = "on-trigger" - - [[instance.images]] - registry = "hub" - name = "service" - tag = "${ img_version }" - "#; - - let config = 
DispenserConfigFile::try_init_from_string(config_input.to_string(), vars) - .expect("Failed to parse config"); - - assert_eq!(config.delay.get(), 500); - assert_eq!(config.instance.len(), 1); - - let instance = &config.instance[0]; - assert_eq!(instance.path.to_str(), Some("/app/service")); - assert_eq!(instance.initialize, Initialize::OnTrigger); - - assert_eq!(instance.images.len(), 1); - assert_eq!(instance.images[0].tag, "1.2.3"); - } - - #[test] - fn test_initialization_modes() { - let vars = DispenserVarsMaterialized { - inner: HashMap::new(), - }; - - // Test default - let default_config = r#" - delay = 1 - [[instance]] - path = "." - "#; - let cfg = - DispenserConfigFile::try_init_from_string(default_config.to_string(), vars.clone()) - .unwrap(); - assert_eq!(cfg.instance[0].initialize, Initialize::Immediately); - - // Test aliases - let aliases = vec![ - ("immediately", Initialize::Immediately), - ("Immediately", Initialize::Immediately), - ("on-trigger", Initialize::OnTrigger), - ("OnTrigger", Initialize::OnTrigger), - ("on_trigger", Initialize::OnTrigger), - ("on trigger", Initialize::OnTrigger), - ]; - - for (alias, expected) in aliases { - let toml = format!( - r#" - delay = 1 - [[instance]] - path = "." 
- initialize = "{}" - "#, - alias - ); - let cfg = DispenserConfigFile::try_init_from_string(toml, vars.clone()).unwrap(); - assert_eq!(cfg.instance[0].initialize, expected); - } - } - - #[test] - fn test_template_failure() { - let vars = DispenserVarsMaterialized { - inner: HashMap::new(), - }; - let config = r#" - delay = 1 - [[instance]] - path = "${ non_existent }" - "#; - let res = DispenserConfigFile::try_init_from_string(config.to_string(), vars.clone()); - assert!( - matches!(res, Err(DispenserConfigError::Template(_))), - "{:?}", - res - ); - } -} diff --git a/src/instance.rs b/src/instance.rs deleted file mode 100644 index cb5c5cc..0000000 --- a/src/instance.rs +++ /dev/null @@ -1,97 +0,0 @@ -use chrono::{DateTime, Local}; -use cron::Schedule; -use futures_util::future; - -use crate::config::ContposeInstanceConfig; -use crate::manifests::{DockerWatcher, DockerWatcherStatus}; -use crate::master::{Action, DockerComposeMaster, MasterMsg}; -use std::sync::Arc; -use tokio::sync::Mutex; - -#[derive(Clone, Default)] -pub struct Instances { - pub inner: Vec>>, - pub delay: std::time::Duration, -} - -struct CronWatcher { - schedule: Schedule, - next: Option>, -} - -impl CronWatcher { - fn new(schedule: &Schedule) -> Self { - let schedule = schedule.clone(); - let next = schedule.upcoming(Local).next(); - Self { schedule, next } - } - fn is_ready(&mut self) -> bool { - match self.next { - Some(next) if chrono::Local::now() >= next => { - self.next = self.schedule.upcoming(Local).next(); - true - } - Some(_) | None => false, - } - } -} - -pub struct Instance { - pub master: Arc, - watchers: Vec, - pub config: ContposeInstanceConfig, - cron_watcher: Option, -} - -impl Instance { - pub async fn new(config: ContposeInstanceConfig) -> Self { - // Create a docker-compose master. 
- // This represents a process that manages - // when docker compose is lifted or destroyed - let cron_watcher = config.cron.as_ref().map(CronWatcher::new); - let master = Arc::new(DockerComposeMaster::initialize( - &config.path, - config.initialize, - config.vars.clone(), - )); - let watchers = config.get_watchers().await; - Self { - master, - config, - watchers, - cron_watcher, - } - } - pub async fn poll(&mut self, poll_images: bool) { - // If uses cron - if let Some(cron_watcher) = &mut self.cron_watcher { - if cron_watcher.is_ready() { - self.master.send_msg(MasterMsg::Update(Action::Recreate)); - log::info!( - "Triggering {:?}! Next scheduled trigger at {:?}", - self.config.path, - cron_watcher.next - ); - // If the cron matches we can short cirtcuit the function - return; - } - } - - // If its ready to poll images - if poll_images { - // try to update the watchers and check - // if any of them were updated - let update_futures = self.watchers.iter().map(|img| img.update()); - let updates = future::join_all(update_futures).await; - let any_updated = updates - .into_iter() - .any(|status| matches!(status, DockerWatcherStatus::Updated)); - - // If any of the watchers were updated then we - // send a message to the master to update - if any_updated { - self.master.send_msg(MasterMsg::Update(Action::Update)); - } - } - } -} diff --git a/src/main.rs b/src/main.rs index 1a79bba..9ac4400 100644 --- a/src/main.rs +++ b/src/main.rs @@ -1,37 +1,36 @@ -use config_file::DispenserConfigFile; use std::{process::ExitCode, sync::Arc}; -use tokio::sync::Mutex; -use crate::instance::Instances; +use crate::service::{ + file::EntrypointFile, + manager::{ServiceMangerConfig, ServicesManager}, + vars::ServiceConfigError, +}; +use tokio::sync::Mutex; mod cli; -mod config; -mod config_file; -mod instance; -mod manifests; -mod master; mod secrets; +mod service; mod signals; -const LOOP_INTERVAL: std::time::Duration = std::time::Duration::from_millis(500); - #[tokio::main] async fn 
main() -> ExitCode {
     if let Some(signal) = &cli::get_cli_args().signal {
         return signals::send_signal(signal.clone());
     }
-
-    let config_file = match DispenserConfigFile::try_init().await {
-        Ok(config_file) => config_file,
+    let service_manager_config = match ServiceMangerConfig::try_init().await {
+        Ok(conf) => conf,
         Err(e) => {
-            eprintln!("{e:?}");
-            // Early return
+            match e {
+                ServiceConfigError::Template((path, template_err)) => {
+                    eprintln!("Could not render {path:#?}: {:#}", template_err);
+                }
+                _ => {
+                    eprintln!("Error initializing service manager: {}", e);
+                }
+            }
             return ExitCode::FAILURE;
         }
     };
-    // Initialize the loggr
-    env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
-
     // If the user set the test flag it
     // will just validate the config
     if cli::get_cli_args().test {
@@ -39,6 +38,9 @@ async fn main() -> ExitCode {
         return ExitCode::SUCCESS;
     }
 
+    // Initialize the logger
+    env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("info")).init();
+
     log::info!("Dispenser running with PID: {}", std::process::id());
 
     if let Err(err) = std::fs::write(
@@ -49,28 +51,54 @@
         return ExitCode::FAILURE;
     }
 
-    let config = config_file.into_config().await;
+    let manager = match ServicesManager::from_config(service_manager_config).await {
+        Ok(manager) => Arc::new(manager),
+        Err(e) => {
+            log::error!("Failed to create services manager: {e}");
+            return ExitCode::FAILURE;
+        }
+    };
+
+    // Wrap the manager in a Mutex so we can replace it on reload
+    let manager_holder = Arc::new(Mutex::new(manager));
 
-    let instances = Arc::new(Mutex::new(Instances::default()));
-    // We need to initialize the reload and interrupt handlers
-    signals::handle_reload(instances.clone(), tokio::runtime::Handle::current());
-    signals::handle_sigint(instances.clone());
-    // Override the instances
-    *instances.lock().await = config.get_instances().await;
-    let mut last_image_poll = std::time::Instant::now();
+    //
Create a notification channel for reload signals
+    let reload_signal = Arc::new(tokio::sync::Notify::new());
+    let shutdown_signal = Arc::new(tokio::sync::Notify::new());
+    // Initialize signal handlers for the new system
+    signals::handle_reload(reload_signal.clone());
+    signals::handle_sigint(shutdown_signal.clone());
+
+    let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
+
+    // Main loop: start polling and wait for reload signals
     loop {
-        let instances = instances.lock().await.clone();
-        // Check if enough time has passed to re poll the images
-        let poll_images = last_image_poll.elapsed() >= instances.delay;
-        if poll_images {
-            last_image_poll = std::time::Instant::now();
-        }
-        tokio::time::sleep(LOOP_INTERVAL).await;
-        for instance in instances.inner.into_iter() {
-            tokio::spawn(async move {
-                instance.lock().await.poll(poll_images).await;
-            });
+        let current_manager = manager_holder.lock().await.clone();
+
+        tokio::select! {
+            _ = current_manager.start_polling() => {
+                // Polling ended normally (shouldn't happen unless cancelled)
+                log::info!("Polling ended");
+            }
+            _ = reload_signal.notified() => {
+                // Reload signal received
+                if let Err(e) = signals::reload_manager(manager_holder.clone()).await {
+                    log::error!("Reload failed: {e}");
+                    // Continue with the old manager
+                } else {
+                    log::info!("Starting new manager...");
+                    // Continue the loop with the new manager
+                }
+            }
+            _ = shutdown_signal.notified() => {
+                // Shutdown signal received
+                if let Err(e) = signals::sigint_manager(manager_holder.clone()).await {
+                    log::error!("Shutdown failed: {e}");
+                    // Exit regardless
+                }
+                std::process::exit(0);
+            }
+        }
     }
 }
diff --git a/src/manifests.rs b/src/manifests.rs
deleted file mode 100644
index 4377ed9..0000000
--- a/src/manifests.rs
+++ /dev/null
@@ -1,157 +0,0 @@
-use std::sync::Arc;
-use tokio::{process::Command, sync::Mutex};
-
-use thiserror::Error;
-
-pub type Result<T> = std::result::Result<T, DockerWatcherError>;
-
-#[derive(Error, Debug)]
-pub enum
DockerWatcherError { - #[error("Digest string '{0}' does not start with 'sha256:'")] - InvalidDigestPrefix(String), - #[error("JSON deserialization error: {0}")] - SerdeJsonError(#[from] serde_json::Error), - #[error("IO error: {0}")] - IoError(#[from] std::io::Error), - #[error("No digest found for architecture '{architecture}' and OS '{os}'")] - NoMatchingManifest { - architecture: Box, - os: Box, - }, -} - -#[derive(serde::Deserialize)] -pub struct DockerManifestsResponse { - config: Option, - manifests: Option>, -} - -#[derive(serde::Deserialize)] -struct Config { - digest: String, -} - -impl DockerManifestsResponse { - pub fn get_digest(&self, architecture: &str, os: &str) -> Result { - if let Some(config) = self.config.as_ref() { - let mut inner = [0u8; 64]; - inner.copy_from_slice( - config - .digest - .strip_prefix("sha256:") - .ok_or_else(|| DockerWatcherError::InvalidDigestPrefix(config.digest.clone()))? - .as_bytes(), - ); - return Ok(Sha256 { inner }); - } - if let Some(manifests) = self.manifests.as_ref() { - for man in manifests { - if man.platform.architecture == architecture && man.platform.os == os { - let mut inner = [0u8; 64]; - inner.copy_from_slice( - man.digest - .strip_prefix("sha256:") - .ok_or_else(|| { - DockerWatcherError::InvalidDigestPrefix(man.digest.clone()) - })? 
- .as_bytes(), - ); - return Ok(Sha256 { inner }); - } - } - } - Err(DockerWatcherError::NoMatchingManifest { - architecture: architecture.into(), - os: os.into(), - }) - } -} - -#[derive(serde::Deserialize)] -struct Platform { - architecture: String, - os: String, -} - -#[derive(serde::Deserialize)] -struct Manifest { - digest: String, - platform: Platform, -} - -#[derive(Copy, Clone, PartialEq, Eq)] -pub struct Sha256 { - /// 256 bits of data in base64 - pub inner: [u8; 64], -} - -#[derive(Clone)] -pub struct DockerWatcher { - registry: Box, - image: Box, - tag: Box, - last_digest: Arc>>, -} - -#[derive(Debug, Copy, Clone)] -pub enum DockerWatcherStatus { - NotUpdated, - Updated, - Deleted, -} - -impl DockerWatcher { - pub async fn initialize(registry: &str, image: &str, tag: &str) -> Self { - log::info!("Initializing watch for {registry}/{image}:{tag}"); - let last_digest = Arc::new(Mutex::new( - match get_latest_digest(registry, image, tag).await { - Ok(digest) => Some(digest), - Err(e) => { - log::warn!("{e}"); - None - } - }, - )); - - let registry = registry.into(); - let image = image.into(); - let tag = tag.into(); - DockerWatcher { - registry, - image, - last_digest, - tag, - } - } - pub async fn update(&self) -> DockerWatcherStatus { - let last_digest = *self.last_digest.lock().await; - let new_sha256 = get_latest_digest(&self.registry, &self.image, &self.tag).await; - match new_sha256 { - Err(e) => { - log::warn!("{e}"); - DockerWatcherStatus::Deleted - } - Ok(new_sha256) if last_digest == Some(new_sha256) => DockerWatcherStatus::NotUpdated, - Ok(new_sha256) => { - let mut last_digest = self.last_digest.lock().await; - *last_digest = Some(new_sha256); - log::info!( - "Found a new version for {}:{}, update will start soon...", - self.image, - self.tag - ); - DockerWatcherStatus::Updated - } - } - } -} - -async fn get_latest_digest(registry: &str, image: &str, tag: &str) -> Result { - let output_result = Command::new("docker") - .args(["manifest", 
"inspect"]) - .arg(format!("{registry}/{image}:{tag}")) - .output() - .await?; - let val: DockerManifestsResponse = serde_json::from_slice(&output_result.stdout)?; - val.get_digest("amd64", "linux") -} diff --git a/src/master.rs b/src/master.rs deleted file mode 100644 index fb56792..0000000 --- a/src/master.rs +++ /dev/null @@ -1,168 +0,0 @@ -use std::{ - path::Path, - process::{Command, Stdio}, - sync::{ - atomic::{AtomicU32, Ordering}, - mpsc::Sender, - Arc, - }, - thread::JoinHandle, -}; - -use crate::config::{DispenserVars, Initialize}; - -#[derive(Clone, Copy, Eq, PartialEq)] -#[repr(u32)] -enum MasterStatus { - Stopped = 0, - Reloading = 1, - Started = 2, -} - -impl MasterStatus { - #[inline] - fn from_u32(val: u32) -> MasterStatus { - match val { - 0 => MasterStatus::Stopped, - 1 => MasterStatus::Reloading, - 2 => MasterStatus::Started, - _ => panic!("Impossible"), - } - } - #[inline] - fn into_u32(self) -> u32 { - self as u32 - } -} - -struct AtomicMasterStatus(AtomicU32); - -impl AtomicMasterStatus { - fn new(val: MasterStatus) -> Self { - AtomicMasterStatus(AtomicU32::new(val as u32)) - } - fn load(&self, ordering: Ordering) -> MasterStatus { - MasterStatus::from_u32(self.0.load(ordering)) - } - fn store(&self, value: MasterStatus, ordering: Ordering) { - self.0.store(value.into_u32(), ordering) - } -} - -pub struct DockerComposeMaster { - update_msg: Sender, - watcher_thread: Option>, - status: Arc, -} - -impl Drop for DockerComposeMaster { - fn drop(&mut self) { - // Wait for thread to stop - self.watcher_thread.take().map(|thread| thread.join()); - } -} - -pub enum MasterMsg { - Detach, - Update(Action), - Stop, -} - -#[derive(Debug, Clone, Copy, PartialEq, Eq)] -pub enum Action { - Update, - Recreate, -} - -impl Action { - fn flags(self) -> &'static [&'static str] { - match self { - Action::Update => &[], - Action::Recreate => &["--force-recreate"], - } - } -} - -impl DockerComposeMaster { - pub fn is_stopped(&self) -> bool { - 
self.status.load(Ordering::SeqCst) == MasterStatus::Stopped - } - pub fn send_msg(&self, msg: MasterMsg) { - let _ = self.update_msg.send(msg); - } - pub fn initialize(path: impl AsRef, initialize: Initialize, vars: DispenserVars) -> Self { - let status_shared = Arc::new(AtomicMasterStatus::new(MasterStatus::Stopped)); - let status = Arc::clone(&status_shared); - let (update_msg, update_recv) = std::sync::mpsc::channel::(); - - if matches!(initialize, Initialize::Immediately) { - let _ = update_msg.send(MasterMsg::Update(Action::Recreate)); - } - - let path: Box = path.as_ref().into(); - let watch_fn = { - let path = path.clone(); - move || loop { - // Wait for an update msg before restarting the loop - match update_recv.recv().expect("Broken pipe") { - MasterMsg::Update(action) => { - match action { - Action::Update => log::info!("Received update directive. Composing the updated services at {path:?}..."), - Action::Recreate => log::info!("Received run/restart directive. Recreating the updated services at {path:?}..."), - }; - let exit_status = Command::new("docker") - .arg("compose") - .arg("up") - .args(["--pull", "always"]) - .args(action.flags()) - .arg("-d") - .current_dir(&path) - .envs(vars.inner.iter()) - .stdin(Stdio::null()) - .stdout(Stdio::null()) - .stderr(Stdio::null()) - .status(); - match exit_status { - Ok(es) if es.success() => { - log::info!("Services for {path:?} are up and running!"); - status_shared.store(MasterStatus::Started, Ordering::SeqCst); - } - Ok(es) => log::warn!( - "Docker compose up at {path:?} not successful exit with code {:?}", - es.code() - ), - Err(e) => { - log::error!("Failed to invoce docker compose at {path:?}: {}", e); - } - } - } - MasterMsg::Stop => { - log::warn!("Received stop signal for instace {path:?}"); - let _ = Command::new("docker") - .arg("compose") - .arg("stop") - .current_dir(&path) - .stdin(Stdio::null()) - .stdout(Stdio::null()) - .stderr(Stdio::null()) - .status(); - log::warn!("Stopped the compose 
service at {path:?}");
-                        status_shared.store(MasterStatus::Stopped, Ordering::SeqCst);
-                        break;
-                    }
-                    MasterMsg::Detach => {
-                        log::warn!("Detaching from docker compose at {path:?}");
-                        status_shared.store(MasterStatus::Stopped, Ordering::SeqCst);
-                        break;
-                    }
-                }
-            }
-        };
-        let watcher_thread = Some(std::thread::spawn(watch_fn));
-        DockerComposeMaster {
-            watcher_thread,
-            update_msg,
-            status,
-        }
-    }
-}
diff --git a/src/service/file.rs b/src/service/file.rs
new file mode 100644
index 0000000..fcc8632
--- /dev/null
+++ b/src/service/file.rs
@@ -0,0 +1,200 @@
+use cron::Schedule;
+use serde::{Deserialize, Serialize};
+use std::{collections::HashMap, path::PathBuf};
+
+use super::vars::{render_template, ServiceConfigError, ServiceVarsMaterialized};
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct EntrypointFile {
+    #[serde(rename = "service")]
+    pub services: Vec<EntrypointFileEntry>,
+    #[serde(rename = "network")]
+    pub networks: Vec<NetworkDeclarationEntry>,
+    /// Delay in seconds between polling for new images (default: 60)
+    #[serde(default = "default_delay")]
+    pub delay: u64,
+}
+
+fn default_delay() -> u64 {
+    60
+}
+
+impl EntrypointFile {
+    pub async fn try_init(vars: &ServiceVarsMaterialized) -> Result<Self, ServiceConfigError> {
+        use std::io::Read;
+        let mut config = String::new();
+        let path = crate::cli::get_cli_args().config.clone();
+        std::fs::File::open(&path)?.read_to_string(&mut config)?;
+
+        // Render the template with variables
+        let rendered_config =
+            render_template(&config, vars).map_err(|e| ServiceConfigError::Template((path, e)))?;
+
+        // Parse the rendered config as TOML
+        Ok(toml::from_str(&rendered_config)?)
+    }
+}
+#[derive(Debug, Serialize, Deserialize)]
+pub struct NetworkDeclarationEntry {
+    pub name: String,
+    #[serde(default = "default_network_driver")]
+    pub driver: NetworkDriver,
+    #[serde(default = "default_false")]
+    pub external: bool,
+    #[serde(default = "default_false")]
+    pub internal: bool,
+    #[serde(default = "default_true")]
+    pub attachable: bool,
+    #[serde(default)]
+    pub labels: HashMap<String, String>,
+}
+
+fn default_network_driver() -> NetworkDriver {
+    NetworkDriver::Bridge
+}
+
+fn default_false() -> bool {
+    false
+}
+
+fn default_true() -> bool {
+    true
+}
+
+#[derive(Debug, Serialize, Deserialize, Default)]
+pub enum NetworkDriver {
+    #[default]
+    #[serde(alias = "bridge")]
+    Bridge,
+    #[serde(alias = "host")]
+    Host,
+    #[serde(alias = "overlay")]
+    Overlay,
+    #[serde(alias = "macvlan")]
+    Macvlan,
+    #[serde(alias = "none")]
+    None,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct EntrypointFileEntry {
+    /// Path to the directory where a service.toml file is found.
+    /// This toml file should be deserialized into a ServiceFile.
+    /// This path is relative to the location of EntrypointFile.
+    pub path: PathBuf,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct ServiceFile {
+    pub service: ServiceEntry,
+    #[serde(default, rename = "port")]
+    pub ports: Vec<PortEntry>,
+    #[serde(default, rename = "volume")]
+    pub volume: Vec<VolumeEntry>,
+    #[serde(default)]
+    pub env: HashMap<String, String>,
+    #[serde(default)]
+    pub restart: Restart,
+    #[serde(default)]
+    pub network: Vec<Network>,
+    pub dispenser: DispenserConfig,
+    #[serde(default)]
+    pub depends_on: HashMap<String, DependsOnCondition>,
+}
+
+/// Defines when a service should be initialized.
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
+pub enum Initialize {
+    /// The service is started as soon as the application starts.
+    #[serde(alias = "immediately", alias = "Immediately")]
+    #[default]
+    Immediately,
+    /// The service is started only when a trigger occurs (e.g., a cron schedule or a detected image update).
+    #[serde(
+        alias = "on-trigger",
+        alias = "OnTrigger",
+        alias = "on_trigger",
+        alias = "on trigger"
+    )]
+    OnTrigger,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub enum DependsOnCondition {
+    #[serde(
+        alias = "service-started",
+        alias = "service_started",
+        alias = "started"
+    )]
+    ServiceStarted,
+    #[serde(
+        alias = "service-completed",
+        alias = "service_completed",
+        alias = "completed"
+    )]
+    ServiceCompleted,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct DispenserConfig {
+    pub watch: bool,
+    #[serde(default)]
+    pub initialize: Initialize,
+    pub cron: Option<Schedule>,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct Network {
+    pub name: String,
+}
+
+#[derive(Debug, Serialize, Deserialize, Default)]
+pub enum Restart {
+    #[serde(alias = "always")]
+    Always,
+    #[default]
+    #[serde(alias = "no", alias = "never")]
+    No,
+    #[serde(alias = "on-failure", alias = "on_failure", alias = "onfailure")]
+    OnFailure,
+    #[serde(
+        alias = "unless-stopped",
+        alias = "unless_stopped",
+        alias = "unlessstopped"
+    )]
+    UnlessStopped,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct PortEntry {
+    pub host: u16,
+    pub container: u16,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct VolumeEntry {
+    pub source: String,
+    pub target: String,
+    #[serde(default)]
+    pub readonly: bool,
+}
+
+#[derive(Debug, Serialize, Deserialize)]
+pub struct ServiceEntry {
+    pub name: String,
+    pub image: String,
+    #[serde(default)]
+    pub command: Option<Vec<String>>,
+    #[serde(default)]
+    pub entrypoint: Option<Vec<String>>,
+    #[serde(default)]
+    pub working_dir: Option<String>,
+    #[serde(default)]
+    pub user: Option<String>,
+    #[serde(default)]
+    pub hostname: Option<String>,
+    /// Memory limit (e.g., "512m", "2g")
+    pub memory: Option<String>,
+    /// Number of CPUs (e.g., "1.5", "2")
+    pub cpus: Option<String>,
+}
diff --git a/src/service/instance.rs b/src/service/instance.rs
new file mode 100644
index 0000000..bb6a908
--- /dev/null
+++ b/src/service/instance.rs
@@ -0,0 +1,628 @@
+use
std::{collections::HashMap, path::PathBuf, time::Duration};
+
+use chrono::{DateTime, Local};
+use cron::Schedule;
+
+use crate::service::{
+    file::{
+        DependsOnCondition, DispenserConfig, Initialize, Network, PortEntry, Restart, ServiceEntry,
+        VolumeEntry,
+    },
+    manifest::{ImageWatcher, ImageWatcherStatus},
+};
+
+pub struct CronWatcher {
+    schedule: Schedule,
+    next: Option<DateTime<Local>>,
+}
+
+impl CronWatcher {
+    pub fn new(schedule: &Schedule) -> Self {
+        let schedule = schedule.clone();
+        let next = schedule.upcoming(Local).next();
+        Self { schedule, next }
+    }
+    fn is_ready(&mut self) -> bool {
+        match self.next {
+            Some(next) if chrono::Local::now() >= next => {
+                self.next = self.schedule.upcoming(Local).next();
+                true
+            }
+            Some(_) | None => false,
+        }
+    }
+}
+
+pub struct ServiceInstance {
+    pub dir: PathBuf,
+    pub service: ServiceEntry,
+    pub ports: Vec<PortEntry>,
+    pub volume: Vec<VolumeEntry>,
+    pub env: HashMap<String, String>,
+    pub restart: Restart,
+    pub network: Vec<Network>,
+    pub dispenser: DispenserConfig,
+    pub depends_on: HashMap<String, DependsOnCondition>,
+    pub cron_watcher: Option<CronWatcher>,
+    pub image_watcher: Option<ImageWatcher>,
+}
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum ContainerStatus {
+    Running,
+    Exited(i32),
+    NotFound,
+}
+
+/// Parse memory string (e.g., "512m", "2g") to bytes
+fn parse_memory_to_bytes(memory_str: &str) -> i64 {
+    let memory_str = memory_str.trim().to_lowercase();
+    let (value, unit) = if memory_str.ends_with("k") || memory_str.ends_with("kb") {
+        let val = memory_str.trim_end_matches("kb").trim_end_matches("k");
+        (val, 1024i64)
+    } else if memory_str.ends_with("m") || memory_str.ends_with("mb") {
+        let val = memory_str.trim_end_matches("mb").trim_end_matches("m");
+        (val, 1024i64 * 1024)
+    } else if memory_str.ends_with("g") || memory_str.ends_with("gb") {
+        let val = memory_str.trim_end_matches("gb").trim_end_matches("g");
+        (val, 1024i64 * 1024 * 1024)
+    } else if memory_str.ends_with("b") {
+        let val = memory_str.trim_end_matches("b");
+        (val, 1i64)
+    } else {
+        // Assume bytes if no unit
+        (memory_str.as_str(), 1i64)
+    };
+
+    value.parse::<i64>().unwrap_or(0) * unit
+}
+
+/// Parse CPU string (e.g., "1.5", "2") to nano CPUs (1 CPU = 1e9 nano CPUs)
+fn parse_cpus_to_nano(cpus_str: &str) -> i64 {
+    let cpus: f64 = cpus_str.trim().parse().unwrap_or(0.0);
+    (cpus * 1_000_000_000.0) as i64
+}
+
+/// This function queries the status of a container
+/// Returns whether it's up, exited successfully (0 exit status), or failed
+async fn get_container_status(container_name: &str) -> Result<ContainerStatus, std::io::Error> {
+    let output = tokio::process::Command::new("docker")
+        .args([
+            "inspect",
+            "--format",
+            "{{.State.Status}},{{.State.ExitCode}}",
+            container_name,
+        ])
+        .output()
+        .await?;
+
+    if !output.status.success() {
+        return Ok(ContainerStatus::NotFound);
+    }
+
+    let status_str = String::from_utf8_lossy(&output.stdout);
+    let parts: Vec<&str> = status_str.trim().split(',').collect();
+
+    match parts.as_slice() {
+        [status, _exit_code] if *status == "running" => Ok(ContainerStatus::Running),
+        [_, exit_code] => {
+            let code = exit_code.parse::<i32>().unwrap_or(-1);
+            Ok(ContainerStatus::Exited(code))
+        }
+        _ => Ok(ContainerStatus::NotFound),
+    }
+}
+
+impl ServiceInstance {
+    pub async fn run_container(&self) -> Result<(), std::io::Error> {
+        let mut depends_on_conditions = Vec::with_capacity(self.depends_on.len());
+        loop {
+            for (container, condition) in &self.depends_on {
+                let status = match get_container_status(container).await {
+                    Ok(status) => match condition {
+                        DependsOnCondition::ServiceStarted => {
+                            matches!(status, ContainerStatus::Running)
+                        }
+                        DependsOnCondition::ServiceCompleted => {
+                            matches!(status, ContainerStatus::Exited(0))
+                        }
+                    },
+                    Err(_) => false,
+                };
+                if !status {
+                    log::info!(
+                        "Service {} is waiting for {} ({:?})",
+                        self.service.name,
+                        container,
+                        condition
+                    );
+                }
+                depends_on_conditions.push(status)
+            }
+            if depends_on_conditions.iter().all(|&c| c) {
+                break;
+            }
+            depends_on_conditions.clear();
+            tokio::time::sleep(Duration::from_secs(1)).await;
+        }
+
+        if let Err(e) = self.pull_image().await {
+            log::error!("Failed to pull image for {}: {}", self.service.name, e);
+        }
+        self.recreate_if_required().await;
+
+        let output = tokio::process::Command::new("docker")
+            .args(["start", &self.service.name])
+            .output()
+            .await?;
+
+        if output.status.success() {
+            log::info!("Container {} started successfully", self.service.name);
+            Ok(())
+        } else {
+            let error_msg = String::from_utf8_lossy(&output.stderr);
+            log::error!(
+                "Failed to start container {}: {}",
+                self.service.name,
+                error_msg
+            );
+            Err(std::io::Error::new(
+                std::io::ErrorKind::Other,
+                format!("Failed to start container: {}", error_msg),
+            ))
+        }
+    }
+    pub async fn pull_image(&self) -> Result<(), std::io::Error> {
+        log::info!("Pulling image: {}", self.service.image);
+        let output = tokio::process::Command::new("docker")
+            .args(["pull", &self.service.image])
+            .output()
+            .await?;
+
+        if output.status.success() {
+            log::info!("Image {} pulled successfully", self.service.image);
+            Ok(())
+        } else {
+            let error_msg = String::from_utf8_lossy(&output.stderr);
+            log::error!("Failed to pull image {}: {}", self.service.image, error_msg);
+            Err(std::io::Error::new(
+                std::io::ErrorKind::Other,
+                format!("Failed to pull image: {}", error_msg),
+            ))
+        }
+    }
+
+    pub async fn stop_container(&self) -> Result<(), std::io::Error> {
+        log::info!("Stopping container: {}", self.service.name);
+        let output = tokio::process::Command::new("docker")
+            .args(["stop", &self.service.name])
+            .output()
+            .await?;
+
+        if output.status.success() {
+            log::info!("Container {} stopped successfully", self.service.name);
+            Ok(())
+        } else {
+            let error_msg = String::from_utf8_lossy(&output.stderr);
+            log::warn!(
+                "Failed to stop container {}: {}",
+                self.service.name,
+                error_msg
+            );
+            Err(std::io::Error::new(
+                std::io::ErrorKind::Other,
+                format!("Failed to stop container: {}", error_msg),
+            ))
+        }
+    }
+
+    pub async fn remove_container(&self) -> Result<(), std::io::Error> {
+        log::info!("Removing container: {}", self.service.name);
+        let output = tokio::process::Command::new("docker")
+            .args(["rm", "-f", &self.service.name])
+            .output()
+            .await?;
+
+        if output.status.success() {
+            log::info!("Container {} removed successfully", self.service.name);
+            Ok(())
+        } else {
+            let error_msg = String::from_utf8_lossy(&output.stderr);
+            log::error!(
+                "Failed to remove container {}: {}",
+                self.service.name,
+                error_msg
+            );
+            Err(std::io::Error::new(
+                std::io::ErrorKind::Other,
+                format!("Failed to remove container: {}", error_msg),
+            ))
+        }
+    }
+
+    pub async fn create_container(&self) -> Result<(), std::io::Error> {
+        log::info!("Creating container: {}", self.service.name);
+
+        let mut cmd = tokio::process::Command::new("docker");
+        cmd.arg("create");
+        cmd.args(["--name", &self.service.name]);
+
+        // Add restart policy
+        match self.restart {
+            Restart::Always => cmd.args(["--restart", "always"]),
+            Restart::No => cmd.args(["--restart", "no"]),
+            Restart::OnFailure => cmd.args(["--restart", "on-failure"]),
+            Restart::UnlessStopped => cmd.args(["--restart", "unless-stopped"]),
+        };
+
+        // Add port mappings
+        for port in &self.ports {
+            cmd.args(["-p", &format!("{}:{}", port.host, port.container)]);
+        }
+
+        // Add volume mappings
+        for volume in &self.volume {
+            let mount_str = if volume.readonly {
+                format!("{}:{}:ro", volume.source, volume.target)
+            } else {
+                format!("{}:{}", volume.source, volume.target)
+            };
+            cmd.args(["-v", &mount_str]);
+        }
+
+        // Add environment variables
+        for (key, value) in &self.env {
+            cmd.args(["-e", &format!("{}={}", key, value)]);
+        }
+
+        // Add networks
+        for network in &self.network {
+            cmd.args(["--network", &network.name]);
+        }
+
+        // Add resource limits
+        if let Some(memory) = &self.service.memory {
+            cmd.args(["--memory", memory]);
+        }
+        if let Some(cpus) = &self.service.cpus {
+            cmd.args(["--cpus", cpus]);
+        }
+
+        // Add working directory
+        if let Some(working_dir) = &self.service.working_dir {
+            cmd.args(["--workdir", working_dir]);
+        }
+
+        // Add user
+        if let Some(user) = &self.service.user {
+            cmd.args(["--user", user]);
+        }
+
+        // Add hostname
+        if let Some(hostname) = &self.service.hostname {
+            cmd.args(["--hostname", hostname]);
+        }
+
+        // Add entrypoint if specified
+        if let Some(entrypoint) = &self.service.entrypoint {
+            cmd.arg("--entrypoint");
+            cmd.arg(entrypoint.join(" "));
+        }
+
+        // Add the image
+        cmd.arg(&self.service.image);
+
+        if let Some(command) = &self.service.command {
+            cmd.args(command);
+        }
+
+        // Set the directory for the command
+        cmd.current_dir(&self.dir);
+
+        let output = cmd.output().await?;
+
+        if output.status.success() {
+            log::info!("Container {} created successfully", self.service.name);
+            Ok(())
+        } else {
+            let error_msg = String::from_utf8_lossy(&output.stderr);
+            log::error!(
+                "Failed to create container {}: {}",
+                self.service.name,
+                error_msg
+            );
+            Err(std::io::Error::new(
+                std::io::ErrorKind::Other,
+                format!("Failed to create container: {}", error_msg),
+            ))
+        }
+    }
+
+    pub async fn recreate_container(&self) -> Result<(), std::io::Error> {
+        self.pull_image().await?;
+        let _ = self.stop_container().await;
+        let _ = self.remove_container().await;
+        self.create_container().await?;
+        Ok(())
+    }
+
+    /// Validate if the current container is different from
+    /// this instance or if it does not exist.
+    ///
+    /// If anything has changed (environment variables, volumes, ports, etc.) we need to recreate.
+    pub async fn requires_recreate(&self) -> bool {
+        // Get the container inspection data
+        let output = match tokio::process::Command::new("docker")
+            .args(["inspect", "--format", "{{json .}}", &self.service.name])
+            .output()
+            .await
+        {
+            Ok(output) => output,
+            Err(e) => {
+                log::warn!("Failed to inspect container {}: {}", self.service.name, e);
+                return true; // If we can't inspect, assume recreate is needed
+            }
+        };
+
+        if !output.status.success() {
+            log::info!(
+                "Container {} does not exist, needs creation",
+                self.service.name
+            );
+            return true;
+        }
+
+        let inspect_str = String::from_utf8_lossy(&output.stdout);
+        let inspect_json: serde_json::Value = match serde_json::from_str(&inspect_str) {
+            Ok(json) => json,
+            Err(e) => {
+                log::warn!("Failed to parse docker inspect JSON: {}", e);
+                return true;
+            }
+        };
+
+        // Check if the image has changed
+        let current_image = inspect_json["Config"]["Image"].as_str().unwrap_or("");
+        if current_image != self.service.image {
+            log::info!(
+                "Image changed for {}: {} -> {}",
+                self.service.name,
+                current_image,
+                self.service.image
+            );
+            return true;
+        }
+
+        // Check restart policy
+        let current_restart = inspect_json["HostConfig"]["RestartPolicy"]["Name"]
+            .as_str()
+            .unwrap_or("");
+        let expected_restart = match self.restart {
+            Restart::Always => "always",
+            Restart::No => "no",
+            Restart::OnFailure => "on-failure",
+            Restart::UnlessStopped => "unless-stopped",
+        };
+        if current_restart != expected_restart {
+            log::info!(
+                "Restart policy changed for {}: {} -> {}",
+                self.service.name,
+                current_restart,
+                expected_restart
+            );
+            return true;
+        }
+
+        // Check environment variables
+        if let Some(current_env) = inspect_json["Config"]["Env"].as_array() {
+            let mut current_env_map = HashMap::new();
+            for env_str in current_env {
+                if let Some(s) = env_str.as_str() {
+                    if let Some(pos) = s.find('=') {
+                        let
(key, value) = s.split_at(pos);
+                        current_env_map.insert(key.to_string(), value[1..].to_string());
+                    }
+                }
+            }
+
+            for (key, value) in &self.env {
+                if current_env_map.get(key) != Some(value) {
+                    log::info!(
+                        "Environment variable changed for {}: {}",
+                        self.service.name,
+                        key
+                    );
+                    return true;
+                }
+            }
+        }
+
+        // Check port bindings
+        if let Some(port_bindings) = inspect_json["HostConfig"]["PortBindings"].as_object() {
+            for port in &self.ports {
+                let container_port_key = format!("{}/tcp", port.container);
+                if let Some(bindings) = port_bindings.get(&container_port_key) {
+                    if let Some(binding_array) = bindings.as_array() {
+                        if binding_array.is_empty() {
+                            log::info!("Port binding changed for {}", self.service.name);
+                            return true;
+                        }
+                        let host_port = binding_array[0]["HostPort"].as_str().unwrap_or("");
+                        if host_port != port.host.to_string() {
+                            log::info!(
+                                "Port mapping changed for {}: {} -> {}",
+                                self.service.name,
+                                host_port,
+                                port.host
+                            );
+                            return true;
+                        }
+                    }
+                } else {
+                    log::info!("Port binding missing for {}", self.service.name);
+                    return true;
+                }
+            }
+        } else if !self.ports.is_empty() {
+            log::info!("Port bindings changed for {}", self.service.name);
+            return true;
+        }
+
+        // Check volume bindings
+        if let Some(binds) = inspect_json["HostConfig"]["Binds"].as_array() {
+            let current_binds: Vec<String> = binds
+                .iter()
+                .filter_map(|v| v.as_str().map(String::from))
+                .collect();
+
+            for volume in &self.volume {
+                // Normalize the source path to an absolute path for comparison
+                let source_path = if std::path::Path::new(&volume.source).is_relative() {
+                    self.dir
+                        .join(&volume.source)
+                        .canonicalize()
+                        .unwrap_or_else(|_| self.dir.join(&volume.source))
+                        .to_string_lossy()
+                        .to_string()
+                } else {
+                    volume.source.clone()
+                };
+
+                let expected_bind = format!("{}:{}", source_path, volume.target);
+                if !current_binds.iter().any(|b| b == &expected_bind) {
+                    log::info!(
+                        "Volume binding changed for {}: {}",
+                        self.service.name,
+                        expected_bind
+                    );
+                    return true;
+                }
+            }
+        } else if !self.volume.is_empty() {
+            log::info!("Volume bindings changed for {}", self.service.name);
+            return true;
+        }
+
+        // Check networks
+        if let Some(networks) = inspect_json["NetworkSettings"]["Networks"].as_object() {
+            for network in &self.network {
+                if !networks.contains_key(&network.name) {
+                    log::info!(
+                        "Network changed for {}: {}",
+                        self.service.name,
+                        network.name
+                    );
+                    return true;
+                }
+            }
+        } else if !self.network.is_empty() {
+            log::info!("Networks changed for {}", self.service.name);
+            return true;
+        }
+
+        // Check memory limit
+        if let Some(expected_memory) = &self.service.memory {
+            let current_memory = inspect_json["HostConfig"]["Memory"].as_i64().unwrap_or(0);
+            // Parse expected memory string (e.g., "512m", "2g") to bytes
+            let expected_bytes = parse_memory_to_bytes(expected_memory);
+            if current_memory != expected_bytes {
+                log::info!(
+                    "Memory limit changed for {}: {} -> {}",
+                    self.service.name,
+                    current_memory,
+                    expected_bytes
+                );
+                return true;
+            }
+        } else {
+            // Check if container has a memory limit but we don't expect one
+            let current_memory = inspect_json["HostConfig"]["Memory"].as_i64().unwrap_or(0);
+            if current_memory != 0 {
+                log::info!("Memory limit changed for {} (removed)", self.service.name);
+                return true;
+            }
+        }
+
+        // Check CPU limit
+        if let Some(expected_cpus) = &self.service.cpus {
+            let current_cpus = inspect_json["HostConfig"]["NanoCpus"].as_i64().unwrap_or(0);
+            // Parse expected CPUs string to nano CPUs (1 CPU = 1e9 nano CPUs)
+            let expected_nano_cpus = parse_cpus_to_nano(expected_cpus);
+            if current_cpus != expected_nano_cpus {
+                log::info!(
+                    "CPU limit changed for {}: {} -> {}",
+                    self.service.name,
+                    current_cpus,
+                    expected_nano_cpus
+                );
+                return true;
+            }
+        } else {
+            // Check if container has a CPU limit but we don't expect one
+            let current_cpus = inspect_json["HostConfig"]["NanoCpus"].as_i64().unwrap_or(0);
+            if current_cpus != 0 {
+                log::info!("CPU limit changed
for {} (removed)", self.service.name);
+                return true;
+            }
+        }
+
+        false
+    }
+
+    pub async fn recreate_if_required(&self) {
+        if self.requires_recreate().await {
+            if let Err(e) = self.recreate_container().await {
+                log::error!("Failed to recreate container {}: {}", self.service.name, e);
+            }
+        }
+    }
+
+    pub async fn poll(&mut self, poll_images: bool, init: bool) {
+        if init && self.dispenser.initialize == Initialize::Immediately {
+            log::info!("Starting {} immediately", self.service.name);
+            if let Err(e) = self.run_container().await {
+                log::error!("Failed to run container {}: {}", self.service.name, e);
+            }
+            return;
+        }
+
+        // If uses cron
+        if let Some(cron_watcher) = &mut self.cron_watcher {
+            if cron_watcher.is_ready() {
+                // If the cron matches we can short circuit the function
+                if let Err(e) = self.run_container().await {
+                    log::error!(
+                        "Failed to run container {} from cron: {}",
+                        self.service.name,
+                        e
+                    );
+                }
+
+                return;
+            }
+        }
+
+        // If it's ready to poll images
+        if self.dispenser.watch && poll_images {
+            // try to update the watchers and check
+            // if any of them were updated
+            if let Some(ref image_watcher) = self.image_watcher {
+                match image_watcher.update().await {
+                    ImageWatcherStatus::Updated => {
+                        log::info!(
+                            "Image updated for service {}, recreating container...",
+                            self.service.name
+                        );
+                        if let Err(e) = self.run_container().await {
+                            log::error!("Failed to run container {}: {}", self.service.name, e);
+                        }
+                    }
+                    ImageWatcherStatus::Deleted => {
+                        log::warn!("Image for service {} was deleted", self.service.name);
+                    }
+                    ImageWatcherStatus::NotUpdated => {}
+                }
+            }
+        }
+    }
+}
diff --git a/src/service/manager.rs b/src/service/manager.rs
new file mode 100644
index 0000000..f5ab924
--- /dev/null
+++ b/src/service/manager.rs
@@ -0,0 +1,232 @@
+use std::{path::PathBuf, sync::Arc, time::Duration};
+
+use futures_util::future;
+use tokio::{sync::Mutex, task::JoinSet};
+
+use crate::service::{
+    file::{EntrypointFile, ServiceFile},
+    instance::{CronWatcher, ServiceInstance},
+    manifest::ImageWatcher,
+    network::NetworkInstance,
+    vars::{render_template, ServiceConfigError, ServiceVarsMaterialized},
+};
+
+pub struct ServiceMangerConfig {
+    entrypoint_file: EntrypointFile,
+    services: Vec<(PathBuf, ServiceFile)>,
+}
+
+impl ServiceMangerConfig {
+    pub async fn try_init() -> Result<Self, ServiceConfigError> {
+        // Load and materialize variables
+        let vars = ServiceVarsMaterialized::try_init().await?;
+        let entrypoint_file = EntrypointFile::try_init(&vars).await?;
+
+        let mut services = Vec::new();
+
+        for entry in &entrypoint_file.services {
+            // Construct the path to service.toml
+            let service_toml_path = entry.path.join("service.toml");
+
+            // Read the service.toml file
+            let service_file_content = tokio::fs::read_to_string(&service_toml_path).await?;
+
+            // Render the template with variables
+            let rendered_service = render_template(&service_file_content, &vars)
+                .map_err(|e| ServiceConfigError::Template((service_toml_path.clone(), e)))?;
+
+            // Parse the rendered config as TOML
+            let service_file: ServiceFile = toml::from_str(&rendered_service)?;
+
+            services.push((entry.path.clone(), service_file));
+        }
+        Ok(Self {
+            services,
+            entrypoint_file,
+        })
+    }
+}
+
+struct ServiceManagerInner {
+    instances: Vec<Arc<Mutex<ServiceInstance>>>,
+    networks: Vec<NetworkInstance>,
+    delay: Duration,
+}
+
+pub struct ServicesManager {
+    pub service_names: Vec<String>,
+    inner: Mutex<ServiceManagerInner>,
+    cancel_tx: tokio::sync::mpsc::Sender<()>,
+    cancel_rx: Mutex<tokio::sync::mpsc::Receiver<()>>,
+}
+
+impl ServicesManager {
+    pub async fn from_config(config: ServiceMangerConfig) -> Result<Self, ServiceConfigError> {
+        // Get the delay from config (in seconds)
+        let delay = Duration::from_secs(config.entrypoint_file.delay);
+        let mut instances = Vec::new();
+        let mut networks = Vec::new();
+        let mut service_names = Vec::new();
+
+        // Process networks first - create NetworkInstance objects
+        for network_entry in config.entrypoint_file.networks {
+            let network = NetworkInstance::from(network_entry);
+            networks.push(network);
+        }
+
+        // Ensure all networks
exist before creating services
+        for network in &networks {
+            if let Err(e) = network.ensure_exists().await {
+                log::error!("Failed to ensure network {} exists: {}", network.name, e);
+                return Err(ServiceConfigError::Io(e));
+            }
+        }
+
+        // Iterate through each service entry in the config
+        let mut join_set = JoinSet::new();
+
+        for (entry_path, service_file) in config.services {
+            join_set.spawn(async move {
+                log::debug!("Initializing config for {entry_path:?}");
+
+                // Initialize the image watcher if watch is enabled
+                let image_watcher = if service_file.dispenser.watch {
+                    Some(ImageWatcher::initialize(&service_file.service.image).await)
+                } else {
+                    None
+                };
+
+                // Create cron watcher if cron schedule is specified
+                let cron_watcher = service_file
+                    .dispenser
+                    .cron
+                    .as_ref()
+                    .map(|schedule| CronWatcher::new(schedule));
+
+                let service_name = service_file.service.name.clone();
+
+                // Create the ServiceInstance
+                let instance = ServiceInstance {
+                    dir: entry_path,
+                    service: service_file.service,
+                    ports: service_file.ports,
+                    volume: service_file.volume,
+                    env: service_file.env,
+                    restart: service_file.restart,
+                    network: service_file.network,
+                    dispenser: service_file.dispenser,
+                    depends_on: service_file.depends_on,
+                    cron_watcher,
+                    image_watcher,
+                };
+
+                (service_name, Arc::new(Mutex::new(instance)))
+            });
+        }
+
+        while let Some(result) = join_set.join_next().await {
+            match result {
+                Ok((service_name, instance)) => {
+                    service_names.push(service_name);
+                    instances.push(instance);
+                }
+                Err(e) => {
+                    log::error!("Failed to initialize service: {}", e);
+                }
+            }
+        }
+
+        // Create the mpsc channel for cancellation
+        let (cancel_tx, cancel_rx) = tokio::sync::mpsc::channel(1);
+        let cancel_rx = Mutex::new(cancel_rx);
+
+        let inner = ServiceManagerInner {
+            instances,
+            networks,
+            delay,
+        };
+
+        Ok(ServicesManager {
+            service_names,
+            inner: Mutex::new(inner),
+            cancel_tx,
+            cancel_rx,
+        })
+    }
+
+    pub async fn cancel(&self) {
+        let _ =
self.cancel_tx.send(()).await;
+    }
+
+    pub async fn start_polling(&self) {
+        log::info!("Starting polling task");
+        let inner = self.inner.lock().await;
+        let delay = inner.delay;
+
+        let polls = inner
+            .instances
+            .iter()
+            .map(|instance| {
+                let instance = Arc::clone(instance);
+                async move {
+                    let mut last_image_poll = std::time::Instant::now();
+                    let mut init = true;
+                    loop {
+                        let poll_images = last_image_poll.elapsed() >= delay;
+                        if poll_images {
+                            last_image_poll = std::time::Instant::now();
+                        }
+                        let poll_start = std::time::Instant::now();
+                        let mut instance = instance.lock().await;
+                        instance.poll(poll_images, init).await;
+                        let poll_duration = poll_start.elapsed();
+                        log::debug!(
+                            "Polling for {} took {:?}",
+                            instance.service.name,
+                            poll_duration
+                        );
+                        init = false;
+                        tokio::time::sleep(Duration::from_secs(1)).await;
+                    }
+                }
+            })
+            .collect::<Vec<_>>();
+
+        let mut cancel_rx = self.cancel_rx.lock().await;
+
+        tokio::select! {
+            _ = future::join_all(polls) => {}
+            _ = cancel_rx.recv() => {
+                log::warn!("CANCELLED");
+            }
+        }
+    }
+
+    /// Clean up networks created by this manager
+    /// This should be called on shutdown to remove non-external networks
+    pub async fn cleanup_networks(&self) {
+        log::info!("Cleaning up networks");
+        let inner = self.inner.lock().await;
+
+        for network in &inner.networks {
+            if let Err(e) = network.remove_network().await {
+                log::warn!("Failed to remove network {}: {}", network.name, e);
+            }
+        }
+    }
+
+    pub async fn remove_containers(&self, names: Vec<String>) {
+        let instances = self.inner.lock().await;
+        for instance in &instances.instances {
+            let instance = instance.lock().await;
+            if names.contains(&instance.service.name) {
+                let _ = instance.stop_container().await;
+                let _ = instance.remove_container().await;
+            }
+        }
+    }
+    pub async fn shutdown(&self) {
+        self.remove_containers(self.service_names.clone()).await;
+        self.cleanup_networks().await;
+    }
+}
diff --git a/src/service/manifest.rs b/src/service/manifest.rs
new file mode 100644
index
0000000..ef905cb
--- /dev/null
+++ b/src/service/manifest.rs
@@ -0,0 +1,149 @@
+use std::sync::Arc;
+use tokio::{process::Command, sync::Mutex};
+
+use thiserror::Error;
+
+pub type Result<T> = std::result::Result<T, ImageWatcherError>;
+
+#[derive(Error, Debug)]
+pub enum ImageWatcherError {
+    #[error("Digest string '{0}' does not start with 'sha256:'")]
+    InvalidDigestPrefix(String),
+    #[error("JSON deserialization error: {0}")]
+    SerdeJsonError(#[from] serde_json::Error),
+    #[error("IO error: {0}")]
+    IoError(#[from] std::io::Error),
+    #[error("Docker command failed: {0}")]
+    DockerCommandFailed(String),
+}
+
+#[derive(serde::Deserialize)]
+pub struct DockerInspectResponse {
+    #[serde(rename = "RepoDigests")]
+    repo_digests: Option<Vec<String>>,
+    #[serde(rename = "Id")]
+    id: Option<String>,
+}
+
+impl DockerInspectResponse {
+    pub fn get_digest(&self) -> Result<Sha256> {
+        // Try to get digest from RepoDigests first
+        if let Some(digests) = self.repo_digests.as_ref() {
+            if let Some(first_digest) = digests.first() {
+                // RepoDigests format is like "repository@sha256:..."
+                if let Some(digest_part) = first_digest.split('@').nth(1) {
+                    let hash = digest_part.strip_prefix("sha256:").ok_or_else(|| {
+                        ImageWatcherError::InvalidDigestPrefix(digest_part.to_string())
+                    })?;
+                    let mut inner = [0u8; 64];
+                    inner.copy_from_slice(hash.as_bytes());
+                    return Ok(Sha256 { inner });
+                }
+            }
+        }
+
+        // Fallback to Id if RepoDigests is not available
+        if let Some(id) = self.id.as_ref() {
+            let hash = id
+                .strip_prefix("sha256:")
+                .ok_or_else(|| ImageWatcherError::InvalidDigestPrefix(id.clone()))?;
+            let mut inner = [0u8; 64];
+            inner.copy_from_slice(hash.as_bytes());
+            return Ok(Sha256 { inner });
+        }
+
+        Err(ImageWatcherError::DockerCommandFailed(
+            "No digest found in inspect output".to_string(),
+        ))
+    }
+}
+
+#[derive(Copy, Clone, PartialEq, Eq)]
+pub struct Sha256 {
+    /// The 256-bit digest, stored as 64 ASCII hex characters
+    pub inner: [u8; 64],
+}
+
+#[derive(Clone)]
+pub struct ImageWatcher {
+    image: Box<str>,
+    last_digest: Arc<Mutex<Option<Sha256>>>,
+}
+
+#[derive(Debug, Copy, Clone)]
+pub enum ImageWatcherStatus {
+    NotUpdated,
+    Updated,
+    Deleted,
+}
+
+impl ImageWatcher {
+    pub async fn initialize(image: &str) -> Self {
+        log::info!("Initializing watch for {image}");
+        let last_digest = Arc::new(Mutex::new(match get_latest_digest(image).await {
+            Ok(digest) => Some(digest),
+            Err(e) => {
+                log::warn!("{e}");
+                None
+            }
+        }));
+
+        let image = image.into();
+        ImageWatcher { image, last_digest }
+    }
+    pub async fn update(&self) -> ImageWatcherStatus {
+        let last_digest = *self.last_digest.lock().await;
+        let new_sha256 = get_latest_digest(&self.image).await;
+        match new_sha256 {
+            Err(e) => {
+                log::warn!("{e}");
+                ImageWatcherStatus::Deleted
+            }
+            Ok(new_sha256) if last_digest == Some(new_sha256) => ImageWatcherStatus::NotUpdated,
+            Ok(new_sha256) => {
+                let mut last_digest = self.last_digest.lock().await;
+                *last_digest = Some(new_sha256);
+                log::info!(
+                    "Found a new version for {}, update will start soon...",
+                    self.image,
+                );
+                ImageWatcherStatus::Updated
+            }
+        }
+    }
+}
+
+async
fn get_latest_digest(image: &str) -> Result<Sha256> {
+    // First, pull the latest image
+    let pull_result = Command::new("docker")
+        .args(["pull"])
+        .arg(image)
+        .output()
+        .await?;
+
+    if !pull_result.status.success() {
+        return Err(ImageWatcherError::DockerCommandFailed(
+            String::from_utf8_lossy(&pull_result.stderr).to_string(),
+        ));
+    }
+
+    // Then, inspect the image to get its digest
+    let inspect_result = Command::new("docker")
+        .args(["inspect"])
+        .arg(image)
+        .output()
+        .await?;
+
+    if !inspect_result.status.success() {
+        return Err(ImageWatcherError::DockerCommandFailed(
+            String::from_utf8_lossy(&inspect_result.stderr).to_string(),
+        ));
+    }
+
+    let val: Vec<DockerInspectResponse> = serde_json::from_slice(&inspect_result.stdout)?;
+    val.first()
+        .ok_or_else(|| {
+            ImageWatcherError::DockerCommandFailed("Empty inspect response".to_string())
+        })?
+        .get_digest()
+}
diff --git a/src/service/mod.rs b/src/service/mod.rs
new file mode 100644
index 0000000..559424f
--- /dev/null
+++ b/src/service/mod.rs
@@ -0,0 +1,6 @@
+pub mod file;
+pub mod instance;
+pub mod manager;
+pub mod manifest;
+pub mod network;
+pub mod vars;
diff --git a/src/service/network.rs b/src/service/network.rs
new file mode 100644
index 0000000..1bd03a2
--- /dev/null
+++ b/src/service/network.rs
@@ -0,0 +1,195 @@
+//! Network management module for Docker networks.
+//!
+//! This module provides functionality to manage Docker networks from the entrypoint configuration.
+//! Networks are created before services start and can be cleaned up on shutdown.
+//!
+//! # Example
+//!
+//! Networks are defined in the entrypoint file (e.g., `dispenser.toml`):
+//!
+//! ```toml
+//! [[network]]
+//! name = "app-network"
+//! driver = "bridge"
+//! internal = false
+//! attachable = true
+//!
+//! [[network]]
+//! name = "external-network"
+//! driver = "bridge"
+//! external = true # Won't be created, must exist already
+//! ```
+//!
+//! The `NetworkInstance` struct handles the creation, checking, and removal of networks.
+//! Networks marked as `external = true` are expected to already exist and won't be created
+//! or removed by the manager.
+
+use std::collections::HashMap;
+
+use crate::service::file::{NetworkDeclarationEntry, NetworkDriver};
+
+pub struct NetworkInstance {
+    pub name: String,
+    pub driver: NetworkDriver,
+    pub external: bool,
+    pub internal: bool,
+    pub attachable: bool,
+    pub labels: HashMap<String, String>,
+}
+
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum NetworkStatus {
+    Exists,
+    NotFound,
+}
+
+impl From<NetworkDeclarationEntry> for NetworkInstance {
+    fn from(entry: NetworkDeclarationEntry) -> Self {
+        Self {
+            name: entry.name,
+            driver: entry.driver,
+            external: entry.external,
+            internal: entry.internal,
+            attachable: entry.attachable,
+            labels: entry.labels,
+        }
+    }
+}
+
+impl NetworkInstance {
+    /// Check if a network exists
+    pub async fn check_network(&self) -> Result<NetworkStatus, std::io::Error> {
+        let output = tokio::process::Command::new("docker")
+            .args(["network", "inspect", &self.name])
+            .output()
+            .await?;
+
+        if output.status.success() {
+            Ok(NetworkStatus::Exists)
+        } else {
+            Ok(NetworkStatus::NotFound)
+        }
+    }
+
+    /// Create the network if it doesn't exist
+    pub async fn create_network(&self) -> Result<(), std::io::Error> {
+        // If external, we don't create it - it should already exist
+        if self.external {
+            log::info!(
+                "Network {} is marked as external, skipping creation",
+                self.name
+            );
+            return Ok(());
+        }
+
+        // Check if network already exists
+        let status = self.check_network().await?;
+        if status == NetworkStatus::Exists {
+            log::info!("Network {} already exists, skipping creation", self.name);
+            return Ok(());
+        }
+
+        log::info!("Creating network: {}", self.name);
+
+        let mut cmd = tokio::process::Command::new("docker");
+        cmd.args(["network", "create"]);
+
+        // Add driver
+        let driver_str = match self.driver {
+            NetworkDriver::Bridge => "bridge",
+            NetworkDriver::Host => "host",
+            NetworkDriver::Overlay => "overlay",
+            NetworkDriver::Macvlan => "macvlan",
+            NetworkDriver::None => "none",
+        };
+        cmd.args(["--driver", driver_str]);
+
+        // Add internal flag
+        if self.internal {
+            cmd.arg("--internal");
+        }
+
+        // Add attachable flag (useful for overlay networks)
+        if self.attachable {
+            cmd.arg("--attachable");
+        }
+
+        // Add labels
+        for (key, value) in &self.labels {
+            cmd.args(["--label", &format!("{}={}", key, value)]);
+        }
+
+        // Add the network name
+        cmd.arg(&self.name);
+
+        let output = cmd.output().await?;
+
+        if output.status.success() {
+            log::info!("Network {} created successfully", self.name);
+            Ok(())
+        } else {
+            let error_msg = String::from_utf8_lossy(&output.stderr);
+            log::error!("Failed to create network {}: {}", self.name, error_msg);
+            Err(std::io::Error::new(
+                std::io::ErrorKind::Other,
+                format!("Failed to create network: {}", error_msg),
+            ))
+        }
+    }
+
+    /// Remove the network
+    pub async fn remove_network(&self) -> Result<(), std::io::Error> {
+        // Don't remove external networks
+        if self.external {
+            log::info!(
+                "Network {} is marked as external, skipping removal",
+                self.name
+            );
+            return Ok(());
+        }
+
+        log::info!("Removing network: {}", self.name);
+
+        let output = tokio::process::Command::new("docker")
+            .args(["network", "rm", &self.name])
+            .output()
+            .await?;
+
+        if output.status.success() {
+            log::info!("Network {} removed successfully", self.name);
+            Ok(())
+        } else {
+            let error_msg = String::from_utf8_lossy(&output.stderr);
+            log::warn!("Failed to remove network {}: {}", self.name, error_msg);
+            // Don't return error for removal failures as they might be expected
+            // (e.g., network still in use by containers)
+            Ok(())
+        }
+    }
+
+    /// Ensure the network exists (create if needed)
+    pub async fn ensure_exists(&self) -> Result<(), std::io::Error> {
+        let status = self.check_network().await?;
+
+        match status {
+            NetworkStatus::Exists => {
+                log::debug!("Network {} already exists", self.name);
+                Ok(())
+            }
+            NetworkStatus::NotFound => {
+                if self.external {
+                    log::error!(
+                        "External network {} does not exist. Please create it manually.",
+                        self.name
+                    );
+                    Err(std::io::Error::new(
+                        std::io::ErrorKind::NotFound,
+                        format!("External network {} not found", self.name),
+                    ))
+                } else {
+                    self.create_network().await
+                }
+            }
+        }
+    }
+}
diff --git a/src/service/vars.rs b/src/service/vars.rs
new file mode 100644
index 0000000..29d45cd
--- /dev/null
+++ b/src/service/vars.rs
@@ -0,0 +1,210 @@
+use minijinja::Environment;
+use serde::{Deserialize, Serialize};
+use std::{collections::HashMap, path::Path, path::PathBuf};
+
+use crate::secrets;
+
+fn default_gcp_secret_version() -> String {
+    "latest".to_string()
+}
+
+#[derive(Debug, serde::Deserialize, Clone)]
+#[serde(tag = "source", rename_all = "snake_case")]
+enum Secret {
+    Google {
+        name: String,
+        #[serde(default = "default_gcp_secret_version")]
+        version: String,
+    },
+}
+
+#[derive(Debug, serde::Deserialize, Clone)]
+#[serde(untagged)]
+enum ServiceVarEntry {
+    Raw(String),
+    Secret(Secret),
+}
+
+#[derive(Debug, Default, Clone)]
+pub struct ServiceVars {
+    inner: HashMap<String, ServiceVarEntry>,
+}
+
+impl<'de> Deserialize<'de> for ServiceVars {
+    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
+    where
+        D: serde::Deserializer<'de>,
+    {
+        let inner = HashMap::deserialize(deserializer)?;
+        Ok(Self { inner })
+    }
+}
+
+impl ServiceVars {
+    pub async fn materialize(self) -> ServiceVarsMaterialized {
+        let mut inner = HashMap::new();
+        for (key, entry) in self.inner {
+            let value = match entry {
+                ServiceVarEntry::Raw(s) => s,
+                ServiceVarEntry::Secret(secret) => match secret {
+                    Secret::Google { name, version } => {
+                        secrets::gcp::fetch_secret(&name, &version).await
+                    }
+                },
+            };
+            inner.insert(key, value);
+        }
+        ServiceVarsMaterialized { inner }
+    }
+}
+
+#[derive(Debug, Default, Clone)]
+pub struct ServiceVarsMaterialized {
+    inner: HashMap<String, String>,
+}
+
+impl ServiceVarsMaterialized {
+    pub async fn try_init() -> Result<Self, ServiceConfigError> {
+        let vars_raw = ServiceVars::try_init()?;
+        Ok(vars_raw.materialize().await)
+    }
+}
+
+impl Serialize for ServiceVarsMaterialized {
+    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
+    where
+        S: serde::Serializer,
+    {
+        self.inner.serialize(serializer)
+    }
+}
+
+/// Files that match dispenser.vars | *.dispenser.vars
+/// Sorted
+fn list_vars_files() -> Vec<PathBuf> {
+    let mut files = Vec::new();
+    let cli_args = crate::cli::get_cli_args();
+
+    let search_dir = cli_args.config.parent().map_or(Path::new("."), |p| {
+        if p.as_os_str().is_empty() {
+            Path::new(".")
+        } else {
+            p
+        }
+    });
+    if let Ok(entries) = std::fs::read_dir(search_dir) {
+        for entry in entries.filter_map(|e| e.ok()) {
+            let path = entry.path();
+            if path.is_file() {
+                if let Some(file_name) = path.file_name().and_then(|s| s.to_str()) {
+                    if file_name == "dispenser.vars" || file_name.ends_with(".dispenser.vars") {
+                        files.push(path);
+                    }
+                }
+            }
+        }
+    }
+
+    files.sort(); // Sort the paths alphabetically
+    files
+}
+
+impl ServiceVars {
+    fn try_init_from_string(val: &str) -> Result<Self, ServiceConfigError> {
+        Ok(toml::from_str(val)?)
+    }
+
+    fn combine(vars: Vec<Self>) -> Self {
+        let mut combined_inner = HashMap::new();
+        vars.into_iter().for_each(|var_set| {
+            combined_inner.extend(var_set.inner);
+        });
+        Self {
+            inner: combined_inner,
+        }
+    }
+
+    fn try_init() -> Result<Self, ServiceConfigError> {
+        use std::io::Read;
+        let mut vars = Vec::new();
+        let vars_files = list_vars_files();
+        for vars_file in vars_files {
+            match std::fs::File::open(vars_file) {
+                Ok(mut file) => {
+                    let mut this_vars = String::new();
+                    file.read_to_string(&mut this_vars)?;
+                    match Self::try_init_from_string(&this_vars) {
+                        Ok(this_vars) => vars.push(this_vars),
+                        Err(e) => log::error!("Error parsing vars file: {e}"),
+                    }
+                }
+                Err(e) => log::error!("Error reading vars file: {e}"),
+            }
+        }
+
+        Ok(Self::combine(vars))
+    }
+}
+
+#[derive(Debug, thiserror::Error)]
+pub enum ServiceConfigError {
+    #[error("IO error: {0}")]
+    Io(#[from] std::io::Error),
+    #[error("TOML error: {0}")]
+    Toml(#[from] toml::de::Error),
+    #[error("Templating error: {0:?}")]
+    Template((PathBuf, minijinja::Error)),
+}
+
+pub fn render_template(
+    template_str: &str,
+    vars: &ServiceVarsMaterialized,
+) -> Result<String, minijinja::Error> {
+    let mut env = Environment::new();
+
+    let syntax = minijinja::syntax::SyntaxConfig::builder()
+        .variable_delimiters("${", "}")
+        .build()
+        .expect("This really should not fail. If this fails, something has gone horribly wrong.");
+
+    env.set_syntax(syntax);
+    env.set_undefined_behavior(minijinja::UndefinedBehavior::Strict);
+
+    let template = env.template_from_str(template_str)?;
+    Ok(template.render(vars)?)
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_vars_parsing() {
+        let input = r#"
+            var1 = "value1"
+            var2 = "value2"
+        "#;
+        let vars = ServiceVars::try_init_from_string(input).expect("Failed to parse vars");
+        let get_val = |k| match vars.inner.get(k) {
+            Some(ServiceVarEntry::Raw(s)) => Some(s.as_str()),
+            _ => None,
+        };
+        assert_eq!(get_val("var1"), Some("value1"));
+        assert_eq!(get_val("var2"), Some("value2"));
+    }
+
+    #[tokio::test]
+    async fn test_template_rendering() {
+        let mut inner = HashMap::new();
+        inner.insert("base_path".to_string(), "/app".to_string());
+        inner.insert("version".to_string(), "1.2.3".to_string());
+
+        let vars = ServiceVarsMaterialized { inner };
+
+        let template = "image: myapp:${ version }\npath: ${ base_path }/service";
+        let rendered = render_template(template, &vars).expect("Failed to render");
+
+        assert!(rendered.contains("image: myapp:1.2.3"));
+        assert!(rendered.contains("path: /app/service"));
+    }
+}
diff --git a/src/signals.rs b/src/signals.rs
index 1581885..bf13ba4 100644
--- a/src/signals.rs
+++ b/src/signals.rs
@@ -1,7 +1,5 @@
-use crate::config_file::DispenserConfigFile;
-use crate::instance::Instances;
-use crate::master::MasterMsg;
-use futures_util::future;
+use crate::service::manager::ServicesManager;
+use crate::service::{file::EntrypointFile, manager::ServiceMangerConfig};
 use signal_hook::{
     consts::{SIGHUP, SIGINT},
     iterator::Signals,
@@ -10,6 +8,17 @@ use std::process::ExitCode;
 use std::sync::Arc;
 use tokio::sync::Mutex;
 
+pub async fn remove_unused_services(old_manager: &ServicesManager, new_manager: &ServicesManager) {
+    let removed_services = old_manager
+        .service_names
+        .iter()
+        .filter(|s| !new_manager.service_names.contains(s))
+        .cloned()
+        .collect();
+
+    old_manager.remove_containers(removed_services).await;
+}
+
 pub fn send_signal(signal: crate::cli::Signal) -> ExitCode {
     let pid_file = &crate::cli::get_cli_args().pid_file;
@@ -40,91 +49,84 @@ pub fn send_signal(signal: crate::cli::Signal) -> ExitCode {
 
 /// What should we do when the user stops
 /// this program?
-pub fn handle_sigint(instances: Arc<Mutex<Instances>>) {
+pub fn handle_sigint(sigint_signal: Arc<tokio::sync::Notify>) {
     let mut signals =
         Signals::new([SIGINT]).expect("No signals :(. This really should never happen");
 
     std::thread::spawn(move || {
-        signals.forever().for_each(|_| {
-            let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Stopping]);
-            // Check if there are any paths that were deleted
-            let current_instances = instances.blocking_lock().clone();
-
-            for curr_instance in &current_instances.inner {
-                curr_instance
-                    .blocking_lock()
-                    .master
-                    .send_msg(MasterMsg::Stop);
-            }
-
-            // Wait until all current instances are stopped or detached
-            loop {
-                if current_instances
-                    .inner
-                    .iter()
-                    .all(|inst| inst.blocking_lock().master.is_stopped())
-                {
-                    let _ = std::fs::remove_file(&crate::cli::get_cli_args().pid_file);
-                    std::process::exit(0);
-                }
-            }
-        });
+        for _ in signals.forever() {
+            log::info!("Shutdown signal received");
+            sigint_signal.notify_one();
+        }
     });
 }
+pub async fn sigint_manager(
+    manager_holder: Arc<Mutex<Arc<ServicesManager>>>,
+) -> Result<(), String> {
+    let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Stopping]);
+
+    log::info!("Shutting down...");
-pub fn handle_reload(instances: Arc<Mutex<Instances>>, rt_handle: tokio::runtime::Handle) {
+    let manager = manager_holder.lock().await;
+    manager.cancel().await;
+    manager.shutdown().await;
+    Ok(())
+}
+
+pub fn handle_reload(reload_signal: Arc<tokio::sync::Notify>) {
     let mut signals = Signals::new([SIGHUP]).expect("No signals :(");
 
     std::thread::spawn(move || {
         for _ in signals.forever() {
-            let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Reloading]);
-            let instances = Arc::clone(&instances);
-            rt_handle.block_on(async move {
-                // Read the config again
-                match DispenserConfigFile::try_init().await {
-                    Ok(new_config) => {
-                        let new_config = DispenserConfigFile::into_config(new_config).await;
-                        // Check if there are any paths that were deleted
-                        let current_instances = instances.lock().await.clone();
-
-                        for curr_instance in &current_instances.inner {
-                            let curr_instance = curr_instance.lock().await;
-                            // Is the new config does not include the current instance we
-                            // send a message to stop
-                            if !new_config
-                                .instances
-                                .iter()
-                                .any(|inst| inst.path == curr_instance.config.path)
-                            {
-                                curr_instance.master.send_msg(MasterMsg::Stop);
-                            } else {
-                                curr_instance.master.send_msg(MasterMsg::Detach);
-                            }
-                        }
-
-                        // Wait until all current instances are stopped or detached
-                        loop {
-                            let is_stopped_futures = current_instances
-                                .inner
-                                .iter()
-                                .map(|inst| async { inst.lock().await.master.is_stopped() });
-                            let all_stopped = future::join_all(is_stopped_futures)
-                                .await
-                                .into_iter()
-                                .all(|s| s);
-                            if all_stopped {
-                                break;
-                            }
-                        }
-
-                        let mut instances = instances.lock().await;
-                        *instances = new_config.get_instances().await;
-                    }
-                    Err(err) => log::error!("Unable to read new config: {err}"),
-                };
-            });
+            log::info!("Reload signal received");
+            reload_signal.notify_one();
+        }
+    });
+}
+
+pub async fn reload_manager(
+    manager_holder: Arc<Mutex<Arc<ServicesManager>>>,
+) -> Result<(), String> {
+    let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Reloading]);
+
+    log::info!("Reloading configuration...");
+    // Load the new configuration
+    let service_manager_config = match ServiceMangerConfig::try_init().await {
+        Ok(entrypoint_file) => entrypoint_file,
+        Err(e) => {
+            log::error!("Failed to reload entrypoint file: {e:?}");
+            let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
+            return Err(format!("Failed to reload entrypoint file: {e:?}"));
+        }
-    });
+    };
+
+    // Create a new manager with the new configuration
+    let new_manager = match ServicesManager::from_config(service_manager_config).await {
+        Ok(manager) => Arc::new(manager),
+        Err(e) => {
+            log::error!("Failed to create new services manager: {e}");
+            let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
+            return Err(format!("Failed to create new services manager: {e}"));
+        }
+    };
+
+    log::info!("New configuration loaded successfully");
+
+    // Cancel the old manager
+    let old_manager = {
+        let mut holder = manager_holder.lock().await;
+        let old = holder.clone();
+        *holder = Arc::clone(&new_manager);
+        old
+    };
+
+    log::info!("Canceling old manager...");
+    old_manager.cancel().await;
+    remove_unused_services(&old_manager, &new_manager).await;
+
+    let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
+    log::info!("Reload complete");
+
+    Ok(())
 }