diff --git a/dev/DEV-GUIDE.md b/dev/DEV-GUIDE.md new file mode 100644 index 0000000000..4c4c072846 --- /dev/null +++ b/dev/DEV-GUIDE.md @@ -0,0 +1,168 @@ +# authd Development Environment + +LXD VM for daily authd development. Provides full systemd, D-Bus, SSH, and GDM — +enough to build, test, and exercise real PAM/NSS login flows without touching your +host system. The host source tree is bind-mounted at `/workspace/authd`. + +## Prerequisites + +- LXD: `sudo snap install lxd && lxd init --auto && sudo usermod -aG lxd "$USER"` (logout/login) +- SSH key: `ls ~/.ssh/id_ed25519.pub || ssh-keygen -t ed25519` +- SPICE viewer (only needed for `lxc console --type=vga`): `sudo apt install virt-viewer` + +The SSH key is injected into the VM for key-based authentication, VS Code Remote SSH, +and testing PAM login flows over SSH (`ssh user@domain.com@vm-ip`). + +## Quick Start + +```bash +./dev/dev-env.sh up # Create VM + build authd (~20 min first time) +./dev/dev-env.sh broker google \ + --client-id ID --client-secret SEC \ + --ssh-suffixes '@gmail.com' # Configure broker +ssh you@gmail.com@$(./dev/dev-env.sh ip) # Test login from host +``` + +The `up` command provisions the VM, installs all toolchains (Go, Rust, protoc), +and automatically builds + installs authd, PAM, and NSS modules. After `up` +completes, the only manual step is configuring a broker with your IdP credentials. + +## Commands + +Global flags (before the subcommand, apply to all commands): +`--name NAME` (default: authd-dev), `--release NAME` (default: noble), `--workspace PATH` (default: /workspace/authd). 
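Order matters here: the dispatcher consumes leading global flags and treats the first non-flag word as the subcommand. A runnable sketch of that convention (hypothetical, not the script's actual parsing code; the `set --` line simulates a CLI invocation):

```bash
#!/usr/bin/env bash
# Simulate: ./dev/dev-env.sh --name authd-dev2 --release questing up
set -- --name authd-dev2 --release questing up

# Defaults mirror the documented ones.
NAME="authd-dev" RELEASE="noble" WORKSPACE="/workspace/authd"

# Consume global flags until the first non-flag word (the subcommand).
while [[ $# -gt 0 ]]; do
    case "$1" in
        --name)      NAME="$2";      shift 2 ;;
        --release)   RELEASE="$2";   shift 2 ;;
        --workspace) WORKSPACE="$2"; shift 2 ;;
        *) break ;;   # first non-flag word is the subcommand
    esac
done

echo "vm=${NAME} release=${RELEASE} workspace=${WORKSPACE} subcommand=${1:-help}"
```

A flag placed after the subcommand would never be consumed by this loop, which is why `--name` and friends must come first.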
+
+| Command | Description |
+|---------|-------------|
+| `up` | Create and provision VM (also restarts a stopped VM) |
+| `stop` | Stop VM (preserves state; restart with `up`) |
+| `down [--force]` | Stop and delete VM and profile |
+| `shell` | Direct shell via `lxc exec` (no PAM, always works) |
+| `ssh` | Connect via SSH (goes through PAM — for login testing) |
+| `status` | VM status and snapshots |
+| `snapshot <name>` / `restore <name>` | Manage snapshots |
+| `broker <variant> [opts]` | Configure credentials + install broker; `--rebuild` to recompile without touching credentials; `edit` subaction to open broker.conf |
+| `build [component]` | Fast rebuild + install a single component |
+| `validate` | Health-check authd stack (socket, PAM, NSS, brokers) |
+| `test [args]` | Run tests inside the VM (e.g. `--update-golden`, `--skip-external`) |
+| `logs [target]` | Tail logs (authd/google/msentraid/oidc/cloud-init) |
+| `exec <cmd>` | Run a command inside the VM (in workspace dir) |
+| `ip` | Print VM IP |
+
+## Iterative Development
+
+After the initial `install-authd`, use `build` from the **host**:
+
+```bash
+./dev/dev-env.sh build authd          # Daemon + proto regen + restart
+./dev/dev-env.sh build pam            # PAM modules (reconnect SSH to load)
+./dev/dev-env.sh build nss            # NSS module + ldconfig
+./dev/dev-env.sh build broker google  # Broker binary + restart
+./dev/dev-env.sh build all            # Full install-authd
+```
+
+To run tests inside the VM:
+
+```bash
+./dev/dev-env.sh test                          # All tests with race detection (default)
+./dev/dev-env.sh test ./internal/brokers/...
# Specific package +./dev/dev-env.sh test --update-golden # Auto-update golden files +./dev/dev-env.sh test --skip-external # Skip VHS (requires external tools) +``` + +Tail service logs from the host: + +```bash +./dev/dev-env.sh logs authd # authd daemon logs +./dev/dev-env.sh logs google # Google broker logs +./dev/dev-env.sh logs cloud-init # Cloud-init provisioning log +``` + +## Broker Setup + +```bash +# Google (--issuer defaults to accounts.google.com) +./dev/dev-env.sh broker google \ + --client-id 843411...googleusercontent.com \ + --client-secret GOCSPX-... \ + --ssh-suffixes '@gmail.com' + +# Microsoft Entra ID +./dev/dev-env.sh broker msentraid \ + --issuer https://login.microsoftonline.com/TENANT_ID/v2.0 \ + --client-id CLIENT_ID \ + --ssh-suffixes '@yourdomain.com' + +# Generic OIDC (Keycloak, etc.) +./dev/dev-env.sh broker oidc \ + --issuer https://keycloak.example.com/realms/myrealm \ + --client-id authd-client --client-secret SECRET \ + --ssh-suffixes '*' +``` + +The `broker` command patches credentials, enables, and restarts the service automatically. +Add `--rebuild` to recompile from source (e.g. after pulling broker changes). +Use `broker edit` to open `broker.conf` directly in your editor. + +## Validation + +After installing authd (and optionally a broker), verify the full stack: + +```bash +./dev/dev-env.sh validate +``` + +Checks: toolchain, authd.socket, PAM module, NSS config, broker registration. 
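Each check follows the same probe-and-count pattern: run a command quietly, print ok/FAIL, and accumulate a failure count. A standalone sketch of that pattern (illustrative only — the real checks probe the VM through `lxc exec`, and the sample probes below are stand-ins):

```bash
#!/usr/bin/env bash
# Probe-and-count pattern: each probe is a command whose exit status
# decides ok/FAIL; failures are tallied for the final verdict.
failures=0

check() {
    local desc="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "ok:   $desc"
    else
        echo "FAIL: $desc"
        failures=$((failures + 1))
    fi
}

check "/tmp is a directory"  test -d /tmp
check "bash is on PATH"      command -v bash
check "missing file"         test -f /definitely/not/here

echo "failures=$failures"
```

The real `validate` then returns the failure count as its exit status, so `./dev/dev-env.sh validate && echo healthy` works in scripts.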
+
+## Troubleshooting
+
+| Problem | Check |
+|---------|-------|
+| VM won't start | `lxc info authd-dev` |
+| Provisioning failed | `./dev/dev-env.sh logs cloud-init` or `lxc exec authd-dev -- cloud-init status` |
+| Build failed during `up` | `./dev/dev-env.sh exec ./dev/scripts/install-authd` to retry |
+| Wrong Go version | `which go` should be `/usr/local/go/bin/go`, not `/usr/bin/go` |
+| authd won't start | `sudo systemctl status authd.socket` (authd uses socket activation) |
+| SSH login rejects user | `ssh_allowed_suffixes_first_auth` must be set in broker.conf |
+| GDM console login password | `cat ~/.config/<name>/vm-password` (default: `~/.config/authd-dev/vm-password`) |
+| VGA console freezes immediately | See [Known issue: VGA console on HiDPI](#known-issues) below |
+
+**Recovery from failed provisioning:** `./dev/dev-env.sh down --force && ./dev/dev-env.sh up`
+
+**Re-running install scripts:** `./dev/dev-env.sh exec ./dev/scripts/install-authd` is safe to re-run (idempotent config, rebuilds binaries).
+`broker` auto-detects whether a binary exists: if it does, only credentials are patched (no rebuild). Pass `--rebuild` to recompile from source.
+
+**Snapshots:** `up` creates two snapshots: `clean` (toolchain only, pre-build) and `installed` (authd + PAM + NSS built; no broker credentials — run `./dev/dev-env.sh broker <variant>` to configure).
+Use `./dev/dev-env.sh restore installed` to reset to a freshly built state.
+
+**Go/Rust versions** are auto-synced from `go.mod` and `authd-oidc-brokers/rust-toolchain.toml`
+at VM creation time. No manual version management needed.
+
+## GDM / VGA Console
+
+The VM runs GDM at `graphical.target` so you can test real GDM login flows.
+
+**Accessing the GDM screen:**
+```bash
+lxc console authd-dev --type=vga   # Opens SPICE viewer (requires virt-viewer on host)
+```
+
+The default login password is randomly generated at VM creation time:
+```bash
+cat ~/.config/authd-dev/vm-password   # default VM name
+cat ~/.config/<name>/vm-password      # custom --name
+```
+
+## File Layout
+
+```
+dev/
+├── dev-env.sh        # VM lifecycle + build (run from host)
+├── cloud-init.yaml   # VM provisioning template (Go, Rust, deps, GDM)
+├── DEV-GUIDE.md      # This file
+├── lib/
+│   └── common.sh     # Shared helpers (output, build, variant config)
+└── scripts/
+    ├── install-authd    # Build + install authd + PAM + NSS + configure (verbose mode)
+    └── install-broker   # Configure or build+install OIDC broker + D-Bus + systemd
+```
diff --git a/dev/cloud-init.yaml b/dev/cloud-init.yaml
new file mode 100644
index 0000000000..f20cb7240c
--- /dev/null
+++ b/dev/cloud-init.yaml
@@ -0,0 +1,197 @@
+#cloud-config
+# authd Development Environment - Cloud-Init Configuration
+#
+# Single cloud-init user-data for the LXD VM dev environment.
+# Installs all build/test dependencies, Go, Rust, and a minimal GNOME desktop
+# for GDM login testing.
+#
+# Placeholders replaced at launch time by dev-env.sh:
+#   __SSH_PUBLIC_KEY__ - your SSH public key
+#   __GO_VERSION__     - Go toolchain version (from go.mod)
+#   __RUST_CHANNEL__   - Rust toolchain channel (from rust-toolchain.toml)
+#   __VM_PASSWORD__    - randomly generated VM password
+
+ssh_authorized_keys:
+  - __SSH_PUBLIC_KEY__
+
+package_update: true
+# Intentionally false (unlike e2e cloud-init which uses true) to speed up
+# VM provisioning. Dev VMs are short-lived and don't need full upgrades.
+package_upgrade: false + +packages: + # ── Build dependencies (from debian/control) ── + - protobuf-compiler + - pkgconf + - gcc + - make + - git + - libpam0g-dev + - libglib2.0-dev + - libpwquality-dev + - libc6-dev + - libssl-dev + - systemd-dev + # ── Test dependencies (from .github/workflows/qa.yaml) ── + - bubblewrap + - cracklib-runtime + - dbus + - openssh-server + - uidmap + - openssh-client + - apparmor-profiles + - gdm3 + - gnome-shell + - spice-vdagent + # ── Dev ergonomics ── + - shellcheck # Shell script linting (used in CI qa.yaml) + # ── Runtime / Utilities ── + - systemd-timesyncd + - curl + - wget + - jq + - vim + - tree + - ripgrep + - htop + - software-properties-common + +write_files: + # SSH config for authd PAM testing (matches e2e-tests/vm/cloud-init-template) + - path: /etc/ssh/sshd_config.d/authd.conf + owner: root:root + permissions: "0644" + content: | + UsePAM yes + Match User *@* + KbdInteractiveAuthentication yes + + # PATH setup for Go and Rust toolchains + - path: /etc/profile.d/authd-dev.sh + owner: root:root + permissions: "0644" + content: | + # Go (installed to /usr/local/go from go.dev) + export PATH="/usr/local/go/bin:$HOME/go/bin:$PATH" + # Rust (rustup, installed per-user) + if [ -d "$HOME/.cargo/bin" ]; then + export PATH="$HOME/.cargo/bin:$PATH" + fi + + # Suppress login banner noise (from e2e-tests/vm/cloud-init-template) + - path: /etc/skel/.hushlogin + content: "" + + # dconf profile for the desktop session + - path: /etc/dconf/profile/user + content: | + user-db:user + system-db:local + + # Prevent the desktop from suspending or blanking the screen + # (from e2e-tests/vm/cloud-init-template-questing.yaml) + - path: /etc/dconf/db/local.d/00-authd-dev + content: | + [org/gnome/settings-daemon/plugins/power] + sleep-inactive-ac-type='nothing' + sleep-inactive-ac-timeout=0 + + [org/gnome/desktop/session] + idle-delay=uint32 0 + +runcmd: + # --- Configure swap (4GB) to prevent OOM during large Go builds --- + # 
msgraph-sdk-go/models requires >3GB during compilation; 4GB swap provides insurance. + - | + set -e + echo "Configuring 4GB swap file..." + fallocate -l 4G /swapfile + chmod 600 /swapfile + mkswap /swapfile + swapon /swapfile + echo '/swapfile none swap sw 0 0' >> /etc/fstab + echo 'vm.swappiness=10' >> /etc/sysctl.d/99-swap.conf + sysctl -p /etc/sysctl.d/99-swap.conf 2>/dev/null | grep swappiness || true + echo "Swap configured (4GB, swappiness=10 to prefer memory)" + + # --- Install Go from go.dev --- + # System Go is too old for authd; version is injected from go.mod by dev-env.sh. + - | + set -e + GO_VERSION="__GO_VERSION__" + ARCH=$(dpkg --print-architecture) + case "$ARCH" in + amd64) GO_ARCH="amd64" ;; + arm64) GO_ARCH="arm64" ;; + *) echo "Unsupported architecture: $ARCH"; exit 1 ;; + esac + echo "Installing Go ${GO_VERSION} (${GO_ARCH})..." + wget -q "https://go.dev/dl/go${GO_VERSION}.linux-${GO_ARCH}.tar.gz" -O /tmp/go.tar.gz + # Verify download succeeded and is non-empty + if [ ! -s /tmp/go.tar.gz ]; then + echo "ERROR: Go download failed or is empty"; exit 1 + fi + rm -rf /usr/local/go + tar -C /usr/local -xzf /tmp/go.tar.gz + rm -f /tmp/go.tar.gz + echo "Go installed: $(/usr/local/go/bin/go version)" + + # --- Install Rust via rustup (for ubuntu user) --- + # RUSTUP_INIT_SKIP_PATH_CHECK: required because dh-cargo may pull in + # system Rust, which causes rustup to error. + # Toolchain channel is injected from rust-toolchain.toml by dev-env.sh. + - | + set -e + su - ubuntu -c 'export RUSTUP_INIT_SKIP_PATH_CHECK=yes && curl --proto "=https" --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain __RUST_CHANNEL__' + su - ubuntu -c '. "$HOME/.cargo/env" && rustup default __RUST_CHANNEL__ && echo "Rust: $(rustc --version), $(cargo --version)"' + + # --- Set PATH in ubuntu's .bashrc for non-interactive SSH sessions --- + # /etc/profile.d only applies to login shells. 
Non-interactive SSH commands + # (ssh vm 'go build ...') need PATH set in .bashrc. + - | + cat >> /home/ubuntu/.bashrc <<'BASHRC' + + # authd dev environment PATH (added by cloud-init) + export PATH="/usr/local/go/bin:$HOME/go/bin:$HOME/.cargo/bin:$PATH" + BASHRC + + # --- Install Go protobuf tools (mirrors CONTRIBUTING.md method) --- + - | + su - ubuntu -c ' + export PATH="/usr/local/go/bin:$HOME/go/bin:$HOME/.cargo/bin:$PATH" + cd /workspace/authd/tools + grep -o "_ \"[^\"]*\"" *.go | cut -d "\"" -f 2 | sort -u | xargs -r go install + ' + echo "Go protobuf tools installed" + + # --- AppArmor bubblewrap profile (required by tests) --- + - | + if [ -f /usr/share/apparmor/extra-profiles/bwrap-userns-restrict ]; then + ln -sf /usr/share/apparmor/extra-profiles/bwrap-userns-restrict /etc/apparmor.d/ + apparmor_parser -r /etc/apparmor.d/bwrap-userns-restrict 2>/dev/null || true + echo "AppArmor bubblewrap profile loaded" + fi + + # --- System hardening for dev comfort --- + - sed -i 's/^LOGIN_TIMEOUT.*/LOGIN_TIMEOUT 360/' /etc/login.defs + - systemctl mask systemd-networkd-wait-online.service || true + - systemctl mask apt-daily.service apt-daily.timer || true + - systemctl mask apt-daily-upgrade.service apt-daily-upgrade.timer || true + - systemctl mask unattended-upgrades.service || true + - rm -f /etc/apt/apt.conf.d/20auto-upgrades /etc/apt/apt.conf.d/50unattended-upgrades + + # --- Desktop configuration --- + # graphical.target enables GDM for desktop login testing. + # SSH still works — GDM does not block SSH access. + - systemctl set-default graphical.target + - dconf update + - sed -i 's/enabled=1/enabled=0/' /etc/default/apport 2>/dev/null || true + - systemctl mask --now apport.service || true + # Password for GDM console login (SSH uses key auth). + # Generated randomly by dev-env.sh at VM creation time. 
+ - echo 'ubuntu:__VM_PASSWORD__' | chpasswd + + # --- Restart SSH to pick up authd PAM config --- + - systemctl restart ssh + + - echo "=== authd development environment provisioning complete ===" diff --git a/dev/dev-env.sh b/dev/dev-env.sh new file mode 100755 index 0000000000..676cf2708a --- /dev/null +++ b/dev/dev-env.sh @@ -0,0 +1,843 @@ +#!/usr/bin/env bash +# authd Development Environment Manager +# +# Creates and manages an LXD VM for authd development. +# The VM runs full systemd + D-Bus + SSH + GDM, with the host source +# tree bind-mounted for live editing. Suitable for building, testing, +# and integration testing (SSH, TTY, and GDM login via PAM/NSS). +# +# Usage: ./dev/dev-env.sh [options] +# Run './dev/dev-env.sh help' for details. + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +source "${SCRIPT_DIR}/lib/common.sh" + +PROJECT_DIR="$(dirname "$SCRIPT_DIR")" + +# Defaults (overridable via global flags — see 'help') +CONTAINER_NAME="authd-dev" +RELEASE="noble" +PROFILE_NAME="${CONTAINER_NAME}" +WORKSPACE_PATH="/workspace/authd" + +# --- Helpers --- + +container_exists() { + lxc info "$CONTAINER_NAME" &>/dev/null +} + +container_running() { + local state + state=$(get_container_status) + [[ "$state" == "RUNNING" || "$state" == "Running" ]] +} + +get_container_status() { + lxc info "$CONTAINER_NAME" 2>/dev/null | awk '/^Status:/ {print $2}' +} + +get_container_ip() { + lxc list "$CONTAINER_NAME" --format csv -c4 2>/dev/null \ + | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1 +} + +wait_for_ip() { + local max_wait=60 waited=0 + info "Waiting for VM network..." >&2 + while [[ $waited -lt $max_wait ]]; do + local ip + ip=$(get_container_ip) + if [[ -n "$ip" ]]; then + echo "$ip" + return 0 + fi + sleep 2 + waited=$((waited + 2)) + done + die "Timed out waiting for VM IP address" +} + +# Run a command as the ubuntu user inside the VM. 
+exec_in_vm() { + lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c "$1" +} + +# --- LXD Profile --- + +ensure_profile() { + if lxc profile show "$PROFILE_NAME" &>/dev/null; then + info "Updating LXD profile '${PROFILE_NAME}'..." + else + info "Creating LXD profile '${PROFILE_NAME}'..." + lxc profile create "$PROFILE_NAME" + fi + + cat < "1.25.8"). +get_go_version() { + local toolchain_line + toolchain_line=$(grep '^toolchain go' "${PROJECT_DIR}/go.mod" 2>/dev/null || true) + if [[ -n "$toolchain_line" ]]; then + echo "${toolchain_line#toolchain go}" + return 0 + fi + # Fallback to "go X.Y.Z" directive + local go_line + go_line=$(grep '^go ' "${PROJECT_DIR}/go.mod" 2>/dev/null | head -1) + [[ -n "$go_line" ]] || die "Cannot determine Go version from go.mod" + echo "$go_line" | awk '{print $2}' +} + +# Extract Rust channel from rust-toolchain.toml (e.g., "1.94.0"). +get_rust_channel() { + local toml="${PROJECT_DIR}/authd-oidc-brokers/rust-toolchain.toml" + if [[ -f "$toml" ]]; then + local channel + channel=$(sed -n 's/^channel = "\(.*\)"/\1/p' "$toml" | head -1) + if [[ -n "$channel" ]]; then + echo "$channel" + return 0 + fi + fi + echo "stable" +} + +generate_cloud_init() { + local ssh_key_file ssh_key go_version rust_channel vm_password + ssh_key_file=$(detect_ssh_key) + ssh_key=$(cat "$ssh_key_file") + go_version=$(get_go_version) + rust_channel=$(get_rust_channel) + vm_password=$(head -c 16 /dev/urandom | base64 | tr -dc 'a-zA-Z0-9' | head -c 16) + + # Save the VM password to a file for later retrieval, rather than only + # printing it to the terminal where it persists in scrollback history. 
+ local pw_dir="${HOME}/.config/${CONTAINER_NAME}" + mkdir -p "$pw_dir" + printf '%s\n' "$vm_password" > "${pw_dir}/vm-password" + chmod 600 "${pw_dir}/vm-password" + + info "Using SSH key: ${ssh_key_file}" >&2 + info "Go version: ${go_version} (from go.mod)" >&2 + info "Rust channel: ${rust_channel} (from rust-toolchain.toml)" >&2 + info "VM password: saved to ${pw_dir}/vm-password (for GDM console login)" >&2 + + # Use awk for safe placeholder replacement (no sed metacharacter issues). + # Sanitize values for awk gsub: '&' means "matched text" and '\' is + # an escape in replacement strings, so they must be escaped first. + ssh_key=$(printf '%s' "$ssh_key" | sed -e 's/[\&]/\\&/g') + + awk \ + -v ssh_key="$ssh_key" \ + -v go_ver="$go_version" \ + -v rust_ch="$rust_channel" \ + -v vm_pw="$vm_password" \ + '{ + gsub(/__SSH_PUBLIC_KEY__/, ssh_key) + gsub(/__GO_VERSION__/, go_ver) + gsub(/__RUST_CHANNEL__/, rust_ch) + gsub(/__VM_PASSWORD__/, vm_pw) + print + }' "${SCRIPT_DIR}/cloud-init.yaml" +} + +# --- Commands --- + +cmd_up() { + # Reject any unknown arguments (global flags are parsed before dispatch) + if [[ $# -gt 0 ]]; then + die "Unknown argument(s): $*. Global flags (--name, --release, --workspace) must come before the subcommand." + fi + + # Preflight + command -v lxc &>/dev/null || die "LXD not installed. Install: sudo snap install lxd && lxd init --auto" + [[ -f "${SCRIPT_DIR}/cloud-init.yaml" ]] || die "Missing ${SCRIPT_DIR}/cloud-init.yaml" + + if container_exists; then + if container_running; then + ok "VM '${CONTAINER_NAME}' is already running" + info "IP: $(get_container_ip)" + info "Connect: ./dev/dev-env.sh shell or ./dev/dev-env.sh ssh" + info "Run cmd: ./dev/dev-env.sh exec " + info "Logs: ./dev/dev-env.sh logs authd" + return 0 + else + info "Starting existing VM '${CONTAINER_NAME}'..." 
+ lxc start "$CONTAINER_NAME" + local ip + ip=$(wait_for_ip) + ok "VM started — IP: ${ip}" + return 0 + fi + fi + + printf '\n%s\n' "${BOLD}Creating authd development environment${NC}" + echo " VM: ${CONTAINER_NAME}" + echo " Image: ubuntu:${RELEASE}" + echo " Source: ${PROJECT_DIR} → ${WORKSPACE_PATH}" + echo "" + + # 1. Create LXD profile + ensure_profile + + # 2. Generate cloud-init with SSH key + local cloud_init + cloud_init=$(generate_cloud_init) + + # 3. Initialize VM (don't start yet — need to set cloud-init first) + info "Initializing VM from ubuntu:${RELEASE}..." + lxc init "ubuntu:${RELEASE}" "$CONTAINER_NAME" --vm \ + --profile default \ + --profile "$PROFILE_NAME" + + # 3a. Resize root disk before starting — the default 10GiB is far too small. + # Space needed: ~2GB base + ~600MB Go + ~2GB Rust + ~400MB GDM + ~8GB build/module caches. + # 40GiB provides comfortable headroom for iterative builds and snapshots. + info "Sizing root disk to 40GiB" + lxc config device override "$CONTAINER_NAME" root size=40GiB + ok "Root disk sized to 40GiB (swap space enabled to prevent OOM during large builds)" + + # 4. Inject cloud-init user-data + info "Applying cloud-init configuration..." + lxc config set "$CONTAINER_NAME" user.user-data - <<< "$cloud_init" + + # 5. Start + info "Starting VM..." + lxc start "$CONTAINER_NAME" + + # 6. Wait for network + local ip + ip=$(wait_for_ip) + ok "VM started — IP: ${ip}" + + # 7. Wait for cloud-init provisioning + info "Waiting for cloud-init provisioning (takes 5-15 min on first run)..." + echo " Tail logs: lxc exec ${CONTAINER_NAME} -- tail -f /var/log/cloud-init-output.log" + echo "" + if lxc exec "$CONTAINER_NAME" -- cloud-init status --wait 2>/dev/null; then + ok "Cloud-init provisioning complete" + else + warn "Cloud-init finished with errors" + warn "Check: lxc exec ${CONTAINER_NAME} -- cat /var/log/cloud-init-output.log" + fi + + # 8. 
Verify installed tools + echo "" + info "Verifying toolchain:" + # shellcheck disable=SC2016 # Single quotes intentional: commands run inside the VM + { + lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c \ + 'echo " Go: $(go version 2>/dev/null || echo NOT FOUND)"' + lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c \ + 'echo " Rust: $(rustc --version 2>/dev/null || echo NOT FOUND)"' + lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c \ + 'echo " Cargo: $(cargo --version 2>/dev/null || echo NOT FOUND)"' + lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c \ + 'echo " Protoc: $(protoc --version 2>/dev/null || echo NOT FOUND)"' + } + + # 9. Create 'clean' snapshot (toolchain only, before build) + echo "" + info "Creating 'clean' snapshot..." + lxc snapshot "$CONTAINER_NAME" clean + ok "Snapshot 'clean' created (toolchain only — before authd build)" + + # 10. Build and install authd (PAM, NSS, systemd config) + echo "" + info "Building and installing authd (daemon, PAM, NSS)..." + echo " This mirrors the steps in CONTRIBUTING.md and debian/install." + echo "" + if lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c "cd ${WORKSPACE_PATH} && ./dev/scripts/install-authd --all"; then + ok "authd built and installed" + + # Create 'installed' snapshot with fully working authd stack + echo "" + info "Creating 'installed' snapshot..." 
lxc snapshot "$CONTAINER_NAME" installed
+        ok "Snapshot 'installed' created"
+    else
+        warn "authd build failed — the VM is still usable with toolchain installed"
+        warn "Debug: ./dev/dev-env.sh logs cloud-init"
+        warn "Retry: ./dev/dev-env.sh exec ./dev/scripts/install-authd --all"
+    fi
+
+    # Print summary
+    echo ""
+    printf '%s\n' "${GREEN}${BOLD} Development environment ready!${NC}"
+    echo ""
+    printf '%s\n' "${BOLD}Activate a broker:${NC}"
+    echo "  Google:    ./dev/dev-env.sh broker google --client-id ID --client-secret SEC"
+    echo "  Entra ID:  ./dev/dev-env.sh broker msentraid --issuer URL --client-id ID"
+    echo "  Edit conf: ./dev/dev-env.sh broker google edit   # Edit /etc/authd-<variant>/broker.conf and restart"
+    echo ""
+    printf '%s\n' "${BOLD}Development:${NC}"
+    echo "  ./dev/dev-env.sh test          # Run all tests with race detection"
+    echo "  ./dev/dev-env.sh build authd   # Rebuild daemon (fast)"
+    echo "  ./dev/dev-env.sh build pam     # Rebuild PAM modules"
+    echo "  ./dev/dev-env.sh build all     # Full reinstall"
+    echo "  ./dev/dev-env.sh logs authd    # Tail authd logs"
+    echo ""
+    printf '%s\n' "${BOLD}Connect:${NC}"
+    echo "  ./dev/dev-env.sh shell   # Direct shell (no PAM, always works)"
+    echo "  ./dev/dev-env.sh ssh     # SSH (goes through PAM — for login testing)"
+    echo ""
+    printf '%s\n' "${BOLD}Snapshots:${NC}"
+    echo "  ./dev/dev-env.sh restore clean       # Reset to toolchain only (pre-build)"
+    echo "  ./dev/dev-env.sh restore installed   # Reset to freshly built authd"
+    echo "  ./dev/dev-env.sh snapshot <name>     # Save current state"
+    echo ""
+}
+
+cmd_down() {
+    local force=false
+    [[ "${1:-}" == "--force" || "${1:-}" == "-f" ]] && force=true
+
+    if ! container_exists; then
+        warn "VM '${CONTAINER_NAME}' does not exist"
+        return 0
+    fi
+
+    if $force; then
+        info "Force-removing VM '${CONTAINER_NAME}'..."
+        lxc delete "$CONTAINER_NAME" --force 2>/dev/null || true
+    else
+        if container_running; then
+            info "Stopping VM '${CONTAINER_NAME}'..."
+ lxc stop "$CONTAINER_NAME" + fi + info "Deleting VM '${CONTAINER_NAME}'..." + lxc delete "$CONTAINER_NAME" + fi + + # Clean up profile + if lxc profile show "$PROFILE_NAME" &>/dev/null; then + lxc profile delete "$PROFILE_NAME" 2>/dev/null || true + fi + + ok "VM and profile removed" +} + +cmd_stop() { + if ! container_exists; then + warn "VM '${CONTAINER_NAME}' does not exist" + return 0 + fi + if ! container_running; then + ok "VM '${CONTAINER_NAME}' is already stopped" + return 0 + fi + info "Stopping VM '${CONTAINER_NAME}'..." + lxc stop "$CONTAINER_NAME" + ok "VM stopped — run './dev/dev-env.sh up' to restart" +} + +cmd_shell() { + container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up" + exec lxc exec -t "$CONTAINER_NAME" -- bash -l -c "PROMPT_COMMAND='PS1=\"\[\e[32;1m\][${CONTAINER_NAME} | Shell (No PAM)]\[\e[0m\] \u@\h:\w$ \"' exec su - ubuntu" +} + +cmd_ssh() { + container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up" + local ip + ip=$(get_container_ip) + [[ -n "$ip" ]] || die "Cannot determine VM IP" + + local ssh_key_private + ssh_key_private=$(get_ssh_private_key) + + info "Note: Using SSH goes through PAM modules for integration testing." + exec ssh -i "$ssh_key_private" \ + -o StrictHostKeyChecking=no \ + -o UserKnownHostsFile=/dev/null \ + -o LogLevel=ERROR \ + "ubuntu@${ip}" +} + +cmd_status() { + if [[ "${1:-}" == "--deep" ]]; then + cmd_validate + return 0 + fi + if ! 
container_exists; then
+        echo "VM '${CONTAINER_NAME}': not created"
+        echo "  Run: ./dev/dev-env.sh up"
+        return 0
+    fi
+
+    printf '%s\n' "${BOLD}VM: ${CONTAINER_NAME}${NC}"
+    local state
+    state=$(get_container_status)
+    echo "  Status: ${state}"
+
+    if [[ "$state" == "RUNNING" || "$state" == "Running" ]]; then
+        local ip
+        ip=$(get_container_ip)
+        echo "  IP: ${ip}"
+    fi
+
+    echo ""
+    printf '%s\n' "${BOLD}Snapshots:${NC}"
+    local snapshots
+    snapshots=$(lxc info "$CONTAINER_NAME" 2>/dev/null | awk '/^Snapshots:/,0' | tail -n +2)
+    if [[ -z "$snapshots" || "$snapshots" == *"Snapshots: []"* ]]; then
+        echo "  (none)"
+    else
+        while IFS= read -r line; do printf '  %s\n' "$line"; done <<< "$snapshots"
+    fi
+}
+
+cmd_snapshot() {
+    local name="${1:-}"
+    [[ -n "$name" ]] || die "Usage: ./dev/dev-env.sh snapshot <name>"
+    container_exists || die "VM '${CONTAINER_NAME}' does not exist"
+
+    info "Creating snapshot '${name}'..."
+    lxc snapshot "$CONTAINER_NAME" "$name"
+    ok "Snapshot '${name}' created"
+}
+
+cmd_restore() {
+    local name="${1:-}"
+    [[ -n "$name" ]] || die "Usage: ./dev/dev-env.sh restore <name>"
+    container_exists || die "VM '${CONTAINER_NAME}' does not exist"
+
+    info "Restoring snapshot '${name}'..."
+    lxc restore "$CONTAINER_NAME" "$name"
+
+    if ! container_running; then
+        info "Starting container..."
+        lxc start "$CONTAINER_NAME"
+    fi
+
+    local ip
+    ip=$(wait_for_ip)
+    ok "Restored '${name}' — IP: ${ip}"
+}
+
+cmd_broker() {
+    container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up"
+    if [[ $# -lt 1 ]]; then
+        cat <<EOF
+${BOLD}Usage:${NC}
+  ./dev/dev-env.sh broker <variant> [options]
+  ./dev/dev-env.sh broker <variant> edit   # Open broker.conf in \$EDITOR and restart
+
+${BOLD}Variants:${NC} google, msentraid, oidc
+
+${BOLD}Credential options:${NC}
+  --client-id ID       OAuth2 client ID
+  --client-secret SEC  OAuth2 client secret
+  --issuer URL         OIDC issuer (required for msentraid; defaults for google)
+  --ssh-suffixes LIST  Domains allowed for first-time SSH login (e.g.
'@gmail.com') + Default: '*' (any domain) — restrict in production + --allowed-users LIST OWNER (DEFAULT), ALL, or usernames + --rebuild Force full binary rebuild (skip for credential updates) + +${BOLD}Behaviour:${NC} + If the broker is already installed, only credentials are updated (no rebuild). + Pass --rebuild to recompile the binary after source changes. + +${BOLD}Examples:${NC} + ./dev/dev-env.sh broker google \\ + --client-id YOUR_ID --client-secret YOUR_SECRET + + ./dev/dev-env.sh broker msentraid \\ + --issuer https://login.microsoftonline.com/TENANT/v2.0 \\ + --client-id YOUR_ID + + ./dev/dev-env.sh broker google edit # Open /etc/authd-google/broker.conf and restart +EOF + return 0 + fi + + local variant="$1" + shift + + # Validate variant immediately so typos give a clear error before any lxc exec. + case "$variant" in + google|msentraid|oidc) ;; + *) die "Unknown variant: ${variant}. Use: google, msentraid, or oidc" ;; + esac + + if [[ "${1:-}" == "edit" ]]; then + local conf_file="/etc/authd-${variant}/broker.conf" + local check_err + if ! check_err=$(lxc exec "$CONTAINER_NAME" -- test -f "$conf_file" 2>&1); then + if echo "$check_err" | grep -qi "agent"; then + die "LXD VM agent is not ready — the VM may still be booting. Wait a moment and retry." + fi + die "Broker '${variant}' is not installed. Run: ./dev/dev-env.sh broker ${variant} --client-id ID --client-secret SEC" + fi + info "Opening ${conf_file}..." + lxc exec "$CONTAINER_NAME" -t -- bash -c "sudo \${EDITOR:-nano} \"$conf_file\"" + info "Restarting authd-${variant} broker service..." + if lxc exec "$CONTAINER_NAME" -- sudo systemctl restart "authd-${variant}"; then + ok "Broker restarted." + else + warn "Failed to restart broker. Check: ./dev/dev-env.sh logs ${variant}" + fi + return 0 + fi + + # ssh_allowed_suffixes_first_auth must be set in broker.conf for new users to + # log in via SSH for the first time. 
Default to '*' (any domain) when the dev + # doesn't specify --ssh-suffixes, since this is a dev environment and you want + # to be able to test SSH login without restricting to a specific domain. + local has_rebuild=false + local has_ssh_suffixes=false + for arg in "$@"; do + [[ "$arg" == "--rebuild" ]] && has_rebuild=true + [[ "$arg" == "--ssh-suffixes" ]] && has_ssh_suffixes=true + done + if ! $has_rebuild && ! $has_ssh_suffixes; then + info "No --ssh-suffixes given; defaulting to '*' (all domains allowed for first-time SSH)." + info "Pass --ssh-suffixes '@yourdomain.com' to restrict to a specific domain." + set -- "$@" --ssh-suffixes '*' + fi + + local args + args=$(printf ' %q' "$@") + + # Build and configure (install-broker auto-detects reconfigure vs full install) + lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c \ + "cd ${WORKSPACE_PATH} && ./dev/scripts/install-broker ${variant}${args}" +} + +cmd_ip() { + container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up" + get_container_ip +} + +cmd_validate() { + container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up" + + local failures=0 + + printf '\n%s\n\n' "${BOLD}Validating authd stack in ${CONTAINER_NAME}${NC}" + + # Use array for safe command construction + _exec() { lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c "$1" 2>/dev/null; } + _exec_root() { lxc exec "$CONTAINER_NAME" -- bash -lc "$1" 2>/dev/null; } + + # 1. Toolchain + info "Toolchain:" + for tool in "go version" "rustc --version" "cargo --version" "protoc --version"; do + if _exec "$tool" >/dev/null 2>&1; then + ok " $tool" + else + error " $tool: NOT FOUND"; failures=$((failures + 1)) + fi + done + + # 2. 
authd socket + echo "" + info "authd:" + if _exec_root "systemctl is-active --quiet authd.socket"; then + ok " authd.socket is active" + else + error " authd.socket is NOT active"; failures=$((failures + 1)) + fi + + if _exec_root "test -S /run/authd.sock"; then + ok " /run/authd.sock exists" + else + info " /run/authd.sock not yet created (created on first connection)" + fi + + # 3. PAM module + echo "" + info "PAM:" + local pam_dir + pam_dir=$(_exec "dpkg-architecture -qDEB_HOST_MULTIARCH 2>/dev/null || gcc -dumpmachine 2>/dev/null || echo x86_64-linux-gnu" 2>/dev/null) + pam_dir="${pam_dir:-x86_64-linux-gnu}" + if _exec_root "test -f /usr/lib/${pam_dir}/security/pam_authd_exec.so"; then + ok " pam_authd_exec.so installed" + else + error " pam_authd_exec.so NOT found"; failures=$((failures + 1)) + fi + if _exec_root "grep -q authd /usr/share/pam-configs/authd 2>/dev/null"; then + ok " PAM config registered" + else + error " PAM config NOT registered"; failures=$((failures + 1)) + fi + + # 4. NSS module + echo "" + info "NSS:" + if _exec_root "test -f /usr/lib/${pam_dir}/libnss_authd.so.2"; then + ok " libnss_authd.so.2 installed" + else + error " libnss_authd.so.2 NOT found"; failures=$((failures + 1)) + fi + if _exec_root "grep -q authd /etc/nsswitch.conf"; then + ok " nsswitch.conf configured" + else + error " nsswitch.conf NOT configured"; failures=$((failures + 1)) + fi + + # 5. Brokers + echo "" + info "Brokers:" + local broker_count + broker_count=$(_exec_root "ls /etc/authd/brokers.d/*.conf 2>/dev/null | wc -l" || echo "0") + if [[ "$broker_count" -gt 0 ]]; then + ok " ${broker_count} broker(s) configured in /etc/authd/brokers.d/" + _exec "authctl list brokers 2>/dev/null" | sed 's/^/ /' || true + else + info " No brokers configured (run ./dev/dev-env.sh broker to add one)" + fi + + # 6. 
SSH config + echo "" + info "SSH:" + if _exec_root "test -f /etc/ssh/sshd_config.d/authd.conf"; then + ok " authd SSH config installed" + else + error " authd SSH config NOT found"; failures=$((failures + 1)) + fi + if _exec_root "systemctl is-active --quiet ssh"; then + ok " sshd is running" + else + error " sshd is NOT running"; failures=$((failures + 1)) + fi + + echo "" + if [[ $failures -eq 0 ]]; then + printf '%s\n' "${GREEN}${BOLD}All checks passed!${NC}" + else + printf '%s\n' "${RED}${BOLD}${failures} check(s) failed.${NC}" + fi + return $failures +} + +cmd_test() { + container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up" + + local env_vars="AUTHD_SKIP_ROOT_TESTS=1 " + local go_args=() + for arg in "$@"; do + if [[ "$arg" == "--update-golden" ]]; then + env_vars+="TESTS_UPDATE_GOLDEN=1 " + elif [[ "$arg" == "--skip-external" ]]; then + env_vars+="AUTHD_SKIP_EXTERNAL_DEPENDENT_TESTS=1 " + else + go_args+=("$arg") + fi + done + + # Default to running all tests with race detection per AGENTS.md + [[ ${#go_args[@]} -eq 0 ]] && go_args=("-race" "./...") + + info "Running tests: ${env_vars}go test ${go_args[*]}" + exec_in_vm "cd ${WORKSPACE_PATH} && env ${env_vars} go test ${go_args[*]}" +} + +cmd_build() { + container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up" + + local component="${1:-all}" + local ws="${WORKSPACE_PATH}" + + case "$component" in + authd) + info "Rebuilding authd daemon..." + exec_in_vm "cd ${ws} && ./dev/scripts/install-authd --daemon-only" + ok "authd rebuilt and restarted" + ;; + pam) + info "Rebuilding PAM modules..." + exec_in_vm "cd ${ws} && ./dev/scripts/install-authd --pam-only" + ok "PAM rebuilt (reconnect SSH sessions to load new module)" + ;; + nss) + info "Rebuilding NSS module..." 
+ exec_in_vm "cd ${ws} && ./dev/scripts/install-authd --nss-only"
+ ok "NSS rebuilt and ldconfig refreshed"
+ ;;
+ broker)
+ local variant="${2:-}"
+ [[ -n "$variant" ]] || die "Usage: ./dev/dev-env.sh build broker <variant>"
+ local tag="" binary="authd-${variant}"
+ case "$variant" in
+ google) tag="-tags=withgoogle" ;;
+ msentraid) tag="-tags=withmsentraid" ;;
+ oidc) tag="" ;;
+ *) die "Unknown variant: $variant. Use: google, msentraid, or oidc" ;;
+ esac
+ info "Rebuilding ${variant} broker..."
+ exec_in_vm "cd ${ws}/authd-oidc-brokers && go build ${tag} -o /tmp/${binary} ./cmd/authd-oidc && sudo install -m 755 /tmp/${binary} /usr/libexec/${binary} && sudo systemctl restart ${binary}"
+ ok "Broker ${variant} rebuilt and restarted"
+ ;;
+ all)
+ info "Rebuilding everything via install-authd..."
+ exec_in_vm "cd ${ws} && ./dev/scripts/install-authd"
+ ;;
+ *)
+ cat <<EOF
+Usage: ./dev/dev-env.sh build [component]
+
+Components:
+ authd Rebuild daemon + authctl, restart service
+ pam Rebuild PAM modules (reconnect SSH to load)
+ nss Rebuild NSS module + run ldconfig
+ broker <variant> Rebuild broker binary + restart (google/msentraid/oidc)
+ Note: use 'broker <variant>' to update credentials instead
+ all Run full install-authd (default)
+EOF
+ ;;
+ esac
+}
+
+cmd_logs() {
+ container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up"
+ local target="${1:-all}"
+
+ if [[ "$target" == "all" ]]; then
+ # Combine logs for authd daemon + all broker variants + PAM module,
+ # mirroring the maintainer's journal script from adombeck/authd-scripts. 
+ lxc exec "$CONTAINER_NAME" -- journalctl -f \
+ _SYSTEMD_UNIT=authd.service + \
+ UNIT=authd.service + \
+ _SYSTEMD_UNIT=authd-google.service + \
+ UNIT=authd-google.service + \
+ _SYSTEMD_UNIT=authd-msentraid.service + \
+ UNIT=authd-msentraid.service + \
+ _SYSTEMD_UNIT=authd-oidc.service + \
+ UNIT=authd-oidc.service + \
+ _COMM=authd-pam
+ return
+ fi
+
+ local unit
+ case "$target" in
+ authd) unit="authd" ;;
+ cloud-init) lxc exec "$CONTAINER_NAME" -- tail -f /var/log/cloud-init-output.log; return ;;
+ google|msentraid|oidc) unit="authd-${target}" ;;
+ *) unit="$target" ;;
+ esac
+ lxc exec "$CONTAINER_NAME" -- journalctl -fu "$unit"
+}
+
+cmd_exec() {
+ container_running || die "VM '${CONTAINER_NAME}' is not running. Run: ./dev/dev-env.sh up"
+ [[ $# -gt 0 ]] || die "Usage: ./dev/dev-env.sh exec <command> [args...]"
+ local cmd
+ cmd=$(printf ' %q' "$@")
+ lxc exec "$CONTAINER_NAME" -- su -l ubuntu -c "cd ${WORKSPACE_PATH} &&${cmd}"
+}
+
+cmd_help() {
+ cat <<EOF
+${BOLD}Usage:${NC} ./dev/dev-env.sh [global flags] <command> [command-options]
+
+${BOLD}Global flags${NC} (must come before the subcommand, apply to all commands):
+ --name NAME VM name (default: authd-dev)
+ --release NAME Ubuntu release image (e.g. noble, jammy, 24.04, 22.04) (default: noble)
+ --workspace PATH Workspace path inside the VM (default: /workspace/authd)
+
+${BOLD}Commands:${NC}
+ up Create, provision, and build authd in the dev VM
+ stop Stop the VM (preserves state; restart with 'up')
+ down [--force] Stop and delete the VM
+ shell Open a shell via lxc exec (no SSH needed)
+ ssh Connect via SSH
+ status [--deep] Show VM status and snapshots (use --deep for validation)
+ snapshot <name> Create a named snapshot
+ restore <name> Restore a named snapshot
+ broker <variant> Configure/install a broker; use 'edit' subaction to open broker.conf
+ build [comp] Fast rebuild a component (authd/pam/nss/broker/all)
+ validate Check authd stack health (socket, PAM, NSS, brokers)
+ test [args] Run tests in the VM (e.g. --update-golden, --skip-external) 
+ logs [target|all] Tail logs (authd/google/msentraid/oidc/cloud-init/all) + exec Run a command inside the VM (in workspace dir) + ip Print the VM's current IP address + help Show this help + +${BOLD}Examples:${NC} + ./dev/dev-env.sh up # Create with defaults (noble) + ./dev/dev-env.sh --release jammy up # Use Ubuntu 22.04 instead + ./dev/dev-env.sh down --force && ./dev/dev-env.sh up # Nuke and rebuild from scratch + ./dev/dev-env.sh build authd # Rebuild authd fast + ./dev/dev-env.sh status --deep # Check if internal components are healthy + ./dev/dev-env.sh logs # Tail all authd logs + +${BOLD}Typical workflow:${NC} + 1. ./dev/dev-env.sh up # Create VM + build authd + 2. ./dev/dev-env.sh broker google \\ # Configure broker (or msentraid/oidc) + --client-id YOUR_ID --client-secret YOUR_SECRET \\ + --ssh-suffixes '@gmail.com' + 3. ssh user@gmail.com@\$(./dev/dev-env.sh ip) # Test PAM login from host + 4. ./dev/dev-env.sh test # Run all tests with race detection + 5. ./dev/dev-env.sh build authd # Rebuild after code changes + 6. ./dev/dev-env.sh logs authd # Debug with logs + 7. 
./dev/dev-env.sh restore installed # Reset to freshly built state + +EOF +} + +# --- Main --- + +COMMAND="" +CMD_ARGS=() + +while [[ $# -gt 0 ]]; do + case "$1" in + --name) [[ $# -ge 2 ]] || die "--name requires a value"; CONTAINER_NAME="$2"; PROFILE_NAME="$2"; shift 2 ;; + --release) [[ $# -ge 2 ]] || die "--release requires a value"; RELEASE="$2"; shift 2 ;; + --workspace) [[ $# -ge 2 ]] || die "--workspace requires a value"; WORKSPACE_PATH="$2"; shift 2 ;; + help|--help|-h) COMMAND="help"; shift ;; + -*) CMD_ARGS+=("$1"); shift ;; + *) + if [[ -z "$COMMAND" ]]; then + COMMAND="$1" + else + CMD_ARGS+=("$1") + fi + shift + ;; + esac +done + +COMMAND="${COMMAND:-help}" + +case "$COMMAND" in + up) cmd_up "${CMD_ARGS[@]}" ;; + down) cmd_down "${CMD_ARGS[@]}" ;; + stop) cmd_stop ;; + shell) cmd_shell ;; + ssh) cmd_ssh ;; + status) cmd_status "${CMD_ARGS[@]}" ;; + snapshot) cmd_snapshot "${CMD_ARGS[@]}" ;; + restore) cmd_restore "${CMD_ARGS[@]}" ;; + broker) cmd_broker "${CMD_ARGS[@]}" ;; + build) cmd_build "${CMD_ARGS[@]}" ;; + validate) cmd_validate ;; + test) cmd_test "${CMD_ARGS[@]}" ;; + logs) cmd_logs "${CMD_ARGS[@]}" ;; + exec) cmd_exec "${CMD_ARGS[@]}" ;; + ip) cmd_ip ;; + help) cmd_help ;; + *) die "Unknown command: $COMMAND — run './dev/dev-env.sh help' for usage." ;; +esac diff --git a/dev/lib/common.sh b/dev/lib/common.sh new file mode 100644 index 0000000000..17b8d1db4d --- /dev/null +++ b/dev/lib/common.sh @@ -0,0 +1,140 @@ +#!/usr/bin/env bash +# shellcheck disable=SC2034 +# Shared helpers for authd dev scripts. 
+# +# Source this from any script under dev/scripts/: +# SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +# source "${SCRIPT_DIR}/../lib/common.sh" + +# Prevent double-sourcing +[[ -n "${_AUTHD_COMMON_SOURCED:-}" ]] && return 0 +_AUTHD_COMMON_SOURCED=1 + +# Variables and functions defined here are used by sourcing scripts +# (install-authd, install-broker), so shellcheck's "appears unused" +# warnings are false positives (suppressed via file-level SC2034 above). + +# --- Path resolution --- +_COMMON_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +DEV_DIR="$(dirname "$_COMMON_DIR")" +PROJECT_DIR="$(dirname "$DEV_DIR")" +WORKSPACE="${WORKSPACE:-${PROJECT_DIR}}" + +# --- Output helpers --- +RED=$'\033[0;31m'; GREEN=$'\033[0;32m'; YELLOW=$'\033[1;33m' +BLUE=$'\033[0;34m'; BOLD=$'\033[1m'; NC=$'\033[0m' + +info() { printf '%s\n' "${BLUE}[INFO]${NC} $*"; } +ok() { printf '%s\n' "${GREEN}[ OK]${NC} $*"; } +warn() { printf '%s\n' "${YELLOW}[WARN]${NC} $*"; } +error() { printf '%s\n' "${RED}[ERR ]${NC} $*" >&2; } +die() { error "$@"; exit 1; } + +# Escape a string for use as a sed replacement value. +# Handles |, &, /, and backslash. +sed_escape() { printf '%s\n' "$1" | sed -e 's/[|&/\\]/\\&/g'; } + +# --- Host-side Helpers (moved from dev-env.sh for consistency) --- + +detect_ssh_key() { + local key_files=( + "${HOME}/.ssh/id_ed25519.pub" + "${HOME}/.ssh/id_rsa.pub" + "${HOME}/.ssh/id_ecdsa.pub" + ) + for kf in "${key_files[@]}"; do + if [[ -f "$kf" ]]; then + local private_key="${kf%.pub}" + [[ -f "$private_key" ]] || die "Public key found at ${kf} but private key missing: ${private_key}" + [[ -r "$private_key" ]] || die "Private key not readable: ${private_key} (check permissions)" + echo "$kf" + return 0 + fi + done + die "No SSH public key found. Generate one with: ssh-keygen -t ed25519" +} + +# Return the private key path for the detected SSH key. 
+get_ssh_private_key() { + local pub_key + pub_key=$(detect_ssh_key) + echo "${pub_key%.pub}" +} + +# --- Common paths (matching debian/install) --- +MULTIARCH=$(dpkg --print-multiarch 2>/dev/null || gcc -dumpmachine 2>/dev/null || echo "$(uname -m)-linux-gnu") +PAM_MODULE_DIR="/usr/lib/${MULTIARCH}/security" +NSS_LIB_DIR="/usr/lib/${MULTIARCH}" +DAEMONS_PATH="/usr/libexec" + +# --- Environment --- + +ensure_path() { + export PATH="/usr/local/go/bin:${HOME}/.cargo/bin:${PATH}" +} + +ensure_workspace() { + cd "$WORKSPACE" || die "Cannot cd to workspace: $WORKSPACE" +} + +ensure_git_submodules() { + if [[ -f "${WORKSPACE}/.gitmodules" ]]; then + if ! git config --global --get-all safe.directory 2>/dev/null | grep -qxF "${WORKSPACE}"; then + git config --global --add safe.directory "${WORKSPACE}" 2>/dev/null || true + fi + (cd "${WORKSPACE}" && git submodule update --init --recursive) || \ + warn "Failed to update submodules (may not be a git repo or already initialized)" + fi +} + +# --- Broker variant configuration --- +# +# Sets variant-specific variables. D-Bus metadata (DISPLAY_NAME, DBUS_NAME, +# DBUS_OBJECT) is read from the upstream template at +# authd-oidc-brokers/conf/variants//authd.conf so that the dev +# scripts stay in sync with the broker source automatically. +load_variant_config() { + local variant="$1" + + VARIANT_CONF_DIR="${WORKSPACE}/authd-oidc-brokers/conf/variants/${variant}" + + case "$variant" in + google) + BUILD_TAG="withgoogle" + BINARY_NAME="authd-google" + CONF_DIR="/etc/authd-google" + SERVICE_NAME="authd-google" + DEFAULT_ISSUER="https://accounts.google.com" + ;; + msentraid) + BUILD_TAG="withmsentraid" + BINARY_NAME="authd-msentraid" + CONF_DIR="/etc/authd-msentraid" + SERVICE_NAME="authd-msentraid" + DEFAULT_ISSUER="" + ;; + oidc) + BUILD_TAG="" + BINARY_NAME="authd-oidc" + CONF_DIR="/etc/authd-oidc" + SERVICE_NAME="authd-oidc" + DEFAULT_ISSUER="" + ;; + *) + die "Unknown variant: ${variant}. 
Use: google, msentraid, or oidc" + ;; + esac + + # Read D-Bus metadata from upstream authd.conf template + local authd_conf="${VARIANT_CONF_DIR}/authd.conf" + if [[ -f "$authd_conf" ]]; then + DISPLAY_NAME=$(sed -n 's/^name = //p' "$authd_conf") + DBUS_NAME=$(sed -n 's/^dbus_name = //p' "$authd_conf") + DBUS_OBJECT=$(sed -n 's/^dbus_object = //p' "$authd_conf") + else + warn "Upstream config not found: ${authd_conf} — using variant name as display name" + DISPLAY_NAME="$variant" + DBUS_NAME="" + DBUS_OBJECT="" + fi +} diff --git a/dev/scripts/install-authd b/dev/scripts/install-authd new file mode 100755 index 0000000000..f932d1084b --- /dev/null +++ b/dev/scripts/install-authd @@ -0,0 +1,263 @@ +#!/usr/bin/env bash +# Build and install authd inside the dev VM for integration testing. +# +# Builds authd, PAM modules, and the NSS module, then installs them +# to system paths and configures PAM, NSS, and systemd — mirroring what +# the Debian package does (see debian/install, debian/postinst). 
+# +# Run inside the VM: +# cd /workspace/authd && ./dev/scripts/install-authd +# +# After installation, test SSH login from your host: +# ssh user@example.com@authd-dev + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +source "${SCRIPT_DIR}/../lib/common.sh" + +BUILD_DAEMON=true +BUILD_PAM=true +BUILD_NSS=true +FULL_INSTALL=true + +while [[ $# -gt 0 ]]; do + case "$1" in + --daemon-only) BUILD_PAM=false; BUILD_NSS=false; FULL_INSTALL=false; shift ;; + --pam-only) BUILD_DAEMON=false; BUILD_NSS=false; FULL_INSTALL=false; shift ;; + --nss-only) BUILD_DAEMON=false; BUILD_PAM=false; FULL_INSTALL=false; shift ;; + --all) BUILD_DAEMON=true; BUILD_PAM=true; BUILD_NSS=true; FULL_INSTALL=true; shift ;; + --help|-h) + echo "Usage: ./dev/scripts/install-authd [--daemon-only|--pam-only|--nss-only|--all]" + echo "" + echo "Options:" + echo " --daemon-only Build and install only the authd daemon + authctl" + echo " --pam-only Build and install only PAM modules" + echo " --nss-only Build and install only the NSS module" + echo " --all Build and install all components (default)" + exit 0 ;; + *) die "Unknown option: $1 (supported: --daemon-only, --pam-only, --nss-only, --all)" ;; + esac +done + +ensure_path +ensure_workspace + +# Pre-flight: verify required tools are available +command -v go >/dev/null 2>&1 || die "Go not found. Check cloud-init logs: cat /var/log/cloud-init-output.log" +command -v cargo >/dev/null 2>&1 || die "Rust/Cargo not found. Check cloud-init logs: cat /var/log/cloud-init-output.log" +command -v protoc >/dev/null 2>&1 || die "protoc not found. 
Install: sudo apt install protobuf-compiler" + +BUILD_DIR=$(mktemp -d /tmp/authd-build.XXXXXX) +trap 'rm -rf "$BUILD_DIR"' EXIT + +printf '\n%s\n' "${BOLD}Installing authd for integration testing${NC}" +echo " Source: ${WORKSPACE}" +echo " Daemons: ${DAEMONS_PATH}" +echo " PAM modules: ${PAM_MODULE_DIR}" +echo " NSS library: ${NSS_LIB_DIR}" +echo "" + +# ============================================================ +# Step 0: Initialize git submodules +# ============================================================ + +info "Ensuring git submodules are initialized..." +ensure_git_submodules + +# ============================================================ +# Step 1: Build all components +# ============================================================ + +if $BUILD_DAEMON; then + info "Regenerating protobufs (go generate ./internal/proto/authd/)..." + go generate ./internal/proto/authd/ + ok "Protobufs generated" + + # The GDM PAM protocol has its own proto (pam/internal/gdm/gdm.proto) that + # imports from internal/proto/authd — must be regenerated after the above. + info "Regenerating GDM PAM protocol (go generate ./pam/internal/gdm/)..." + go generate ./pam/internal/gdm/ + ok "GDM PAM protocol generated" + + info "Regenerating shell completions (go generate ./shell-completion/)..." + go generate ./shell-completion/ + ok "Shell completions generated" + + # --- authd daemon --- + info "Building authd daemon..." + go build -o "${BUILD_DIR}/authd" ./cmd/authd + ok "authd daemon built" + + # --- authctl CLI --- + info "Building authctl..." + go build -o "${BUILD_DIR}/authctl" ./cmd/authctl + ok "authctl built" +fi + +if $BUILD_PAM; then + # --- PAM modules --- + # go generate builds both pam_authd.so (GDM) and pam_authd_exec.so (generic). + # The exec .so is a C library built by pam/generate.sh. + info "Generating PAM modules (go generate ./pam/)..." 
+ go generate ./pam/ + ok "PAM module generation complete" + + # Build the authd-pam Go binary (companion to pam_authd_exec.so) + info "Building authd-pam exec binary..." + go build -tags pam_binary_exec -o "${BUILD_DIR}/authd-pam" ./pam + ok "authd-pam built" + + # Verify the C shared libraries were generated + if [[ ! -f pam/go-exec/pam_authd_exec.so ]]; then + die "pam_authd_exec.so not found after go generate. Check build output." + fi + ok "pam_authd_exec.so generated" + + if [[ -f pam/pam_authd.so ]]; then + ok "pam_authd.so generated (GDM module)" + else + warn "pam_authd.so not generated — GDM module unavailable (fine for SSH testing)" + fi +fi + +if $BUILD_NSS; then + # --- NSS module (Rust) --- + info "Building NSS module..." + cargo build --release -p nss + ok "NSS module built" +fi + +# ============================================================ +# Step 2: Install to system paths +# ============================================================ + +info "Installing binaries..." +if $BUILD_DAEMON; then + sudo install -m 755 "${BUILD_DIR}/authd" "${DAEMONS_PATH}/authd" + sudo install -m 755 "${BUILD_DIR}/authctl" /usr/bin/authctl + ok "Binaries installed to ${DAEMONS_PATH}" +fi + +if $BUILD_PAM; then + sudo install -m 755 "${BUILD_DIR}/authd-pam" "${DAEMONS_PATH}/authd-pam" + info "Installing PAM modules..." + sudo mkdir -p "$PAM_MODULE_DIR" + sudo install -m 644 pam/go-exec/pam_authd_exec.so "${PAM_MODULE_DIR}/" + if [[ -f pam/pam_authd.so ]]; then + sudo install -m 644 pam/pam_authd.so "${PAM_MODULE_DIR}/" + fi + ok "PAM modules installed to ${PAM_MODULE_DIR}" +fi + +if $BUILD_NSS; then + info "Installing NSS module..." + sudo install -m 644 target/release/libnss_authd.so "${NSS_LIB_DIR}/libnss_authd.so.2" + sudo ldconfig + ok "NSS module installed to ${NSS_LIB_DIR}" +fi + +# ============================================================ +# Step 3: Configure the system +# ============================================================ + +if ! 
$FULL_INSTALL; then + info "Partial build complete. Skipping full system configuration." + if $BUILD_DAEMON; then + sudo systemctl restart authd.socket 2>/dev/null || true + ok "authd daemon restarted" + fi + exit 0 +fi + +# --- authd directories --- +info "Creating authd state/config directories..." +sudo mkdir -p /etc/authd/brokers.d +sudo chmod 700 /etc/authd +sudo mkdir -p /var/lib/authd +sudo chmod 700 /var/lib/authd + +# --- Default config --- +if [[ ! -f /etc/authd/authd.yaml ]]; then + sudo install -m 600 "${WORKSPACE}/debian/authd-config/authd.yaml" /etc/authd/authd.yaml + ok "Default authd.yaml installed to /etc/authd/" +else + info "/etc/authd/authd.yaml already exists, not overwriting" +fi + +# --- NSSwitch (mirrors debian/postinst) --- +info "Configuring nsswitch.conf..." +# Use the same sed pattern as debian/postinst: word-boundary match to avoid +# false positives like "authd-other", and handle all three databases at once. +if [[ -e /etc/nsswitch.conf ]]; then + sudo sed -i --regexp-extended ' + /^(passwd|group|shadow):/ { + /\bauthd\b/! s/$/ authd/ + } + ' /etc/nsswitch.conf + ok "nsswitch.conf configured (passwd, group, shadow)" +else + warn "/etc/nsswitch.conf not found" +fi + +# --- PAM (mirrors debian/postinst) --- +info "Configuring PAM..." +sudo mkdir -p /usr/share/pam-configs +sed "s|@AUTHD_DAEMONS_PATH@|${DAEMONS_PATH}|g" "${WORKSPACE}/debian/pam-configs/authd.in" | \ + sudo tee /usr/share/pam-configs/authd > /dev/null || die "Failed to install PAM config" +sudo pam-auth-update --package +ok "PAM configured via pam-auth-update" + +# --- Systemd units --- +info "Installing systemd units..." +sed "s|@AUTHD_DAEMONS_PATH@|${DAEMONS_PATH}|g" "${WORKSPACE}/debian/authd.service.in" | \ + sudo tee /etc/systemd/system/authd.service > /dev/null +sudo cp "${WORKSPACE}/debian/authd.socket" /etc/systemd/system/authd.socket + +# Enable verbose logging for dev environment (mirrors e2e-tests/vm/provision-authd.sh). 
+# This adds -vvv to the ExecStart line for maximum debugging verbosity (per AGENTS.md). +sudo mkdir -p /etc/systemd/system/authd.service.d +sudo tee /etc/systemd/system/authd.service.d/dev-verbose.conf > /dev/null < [options] + +${BOLD}Variants:${NC} + google Google IAM (build tag: withgoogle) + msentraid Microsoft Entra ID (build tag: withmsentraid) + oidc Generic OIDC / Keycloak (no build tag) + +${BOLD}Credential options:${NC} + --client-id ID OAuth2 client ID + --client-secret SEC OAuth2 client secret + --issuer URL OIDC issuer URL (defaults: google → accounts.google.com) + --ssh-suffixes LIST Comma-separated suffixes for first-time SSH login + (e.g., '@gmail.com,@company.org' or '*' for all) + --allowed-users LIST Who can log in: OWNER (default in dev), ALL, or usernames + +${BOLD}Install options:${NC} + --rebuild Rebuild the broker binary from source and reinstall the D-Bus + policy and systemd service. Does NOT touch broker.conf — all + credentials are preserved as-is. Cannot be combined with any + credential flags. To update credentials instead, use: + './dev/dev-env.sh broker conf' + +${BOLD}Examples:${NC} + + # First-time install (auto-detects, builds from source): + ./dev/scripts/install-broker google \\ + --client-id 843411...googleusercontent.com \\ + --client-secret GOCSPX-... 
\\ + --ssh-suffixes '@gmail.com' + + # Update a credential (auto-detects existing binary, skips rebuild): + ./dev/scripts/install-broker google --client-secret NEW_SECRET + + # Force a full rebuild after changing broker source: + ./dev/scripts/install-broker google --rebuild + + # Microsoft Entra ID: + ./dev/scripts/install-broker msentraid \\ + --issuer https://login.microsoftonline.com/TENANT_ID/v2.0 \\ + --client-id YOUR_CLIENT_ID \\ + --ssh-suffixes '@yourdomain.com' + +EOF + exit "${1:-0}" +} + +# --- Parse arguments --- + +VARIANT="${1:-}" +[[ -n "$VARIANT" ]] || usage 1 +[[ "$VARIANT" == "--help" || "$VARIANT" == "-h" ]] && usage 0 +shift + +ISSUER="" +CLIENT_ID="" +CLIENT_SECRET="" +SSH_SUFFIXES="" +ALLOWED_USERS="OWNER" # Default to OWNER in dev environments +REBUILD=false +_has_creds=false + +while [[ $# -gt 0 ]]; do + case "$1" in + --issuer) ISSUER="$2"; _has_creds=true; shift 2 ;; + --client-id) CLIENT_ID="$2"; _has_creds=true; shift 2 ;; + --client-secret) CLIENT_SECRET="$2"; _has_creds=true; shift 2 ;; + --ssh-suffixes) SSH_SUFFIXES="$2"; _has_creds=true; shift 2 ;; + --allowed-users) ALLOWED_USERS="$2"; _has_creds=true; shift 2 ;; + --rebuild) REBUILD=true; shift ;; + --help|-h) usage 0 ;; + *) die "Unknown option: $1" ;; + esac +done + +# --- Load variant config from upstream templates --- + +ensure_workspace +load_variant_config "$VARIANT" + +# --rebuild cannot be combined with credential flags. +if $REBUILD && $_has_creds; then + die "--rebuild only rebuilds the binary; credential flags are not accepted. To update credentials: ./dev/dev-env.sh broker ${VARIANT} conf" +fi + +# Apply default issuer if variant defines one and user didn't specify +ISSUER="${ISSUER:-${DEFAULT_ISSUER}}" + +# Auto-detect whether to do a full install or reconfigure-only. +# Skip the build when the broker binary and discovery config are already present. +# (--rebuild bypasses auto-detection and always enters the build path.) +CONFIGURE_ONLY=false +if ! 
$REBUILD && [[ -x "/usr/libexec/${BINARY_NAME}" ]] && sudo test -f "/etc/authd/brokers.d/${VARIANT}.conf"; then + CONFIGURE_ONLY=true +fi + +# Warn when --ssh-suffixes is absent on a fresh install: without +# ssh_allowed_suffixes_first_auth set, authd rejects first-time SSH users and PAM +# falls through to the generic password prompt instead of broker selection. +if [[ -z "$SSH_SUFFIXES" ]] && ! $CONFIGURE_ONLY && ! $REBUILD; then + warn "No --ssh-suffixes provided: new users will NOT be able to log in via SSH for" + warn "the first time. Re-run with --ssh-suffixes '@yourdomain.com'" + warn "(or '*' for any domain) to enable first-time SSH login." +fi + +# --- Validate --- + +[[ -n "$DBUS_NAME" ]] || die "Could not determine D-Bus name for variant '${VARIANT}'" + +if $REBUILD; then + [[ -x "/usr/libexec/${BINARY_NAME}" ]] && sudo test -f "${CONF_DIR}/broker.conf" || \ + die "Broker '${VARIANT}' is not installed. Run without --rebuild for a first-time install." +elif $CONFIGURE_ONLY; then + sudo test -f "${CONF_DIR}/broker.conf" || \ + die "${CONF_DIR}/broker.conf not found. Check the install." + info "Patching credentials in existing ${CONF_DIR}/broker.conf..." +fi + +if $REBUILD; then + printf '\n%s\n' "${BOLD}Rebuilding ${DISPLAY_NAME} broker from source${NC}" +elif $CONFIGURE_ONLY; then + printf '\n%s\n' "${BOLD}Reconfiguring ${DISPLAY_NAME} broker${NC}" +else + printf '\n%s\n' "${BOLD}Installing ${DISPLAY_NAME} broker from source${NC}" +fi +echo " Variant: ${VARIANT}" +echo " Build tag: ${BUILD_TAG:-none}" +echo " D-Bus name: ${DBUS_NAME}" +echo " Issuer: ${ISSUER:-(not set)}" +if [[ -n "$CLIENT_ID" ]]; then + echo " Client ID: ${CLIENT_ID:0:20}..." +else + echo " Client ID: (not set)" +fi +echo "" + +# ============================================================ +# Step 1: Build the broker binary +# ============================================================ + +if ! 
$CONFIGURE_ONLY; then + +BROKER_SRC="${WORKSPACE}/authd-oidc-brokers" +[[ -d "$BROKER_SRC" ]] || die "Broker source not found: ${BROKER_SRC}" + +info "Ensuring git submodules are initialized..." +ensure_git_submodules + +# For msentraid, we need to build libhimmelblau first +if [[ "$VARIANT" == "msentraid" ]]; then + info "Pre-building libhimmelblau C library (required for msentraid)..." + if [[ -f "${BROKER_SRC}/internal/providers/msentraid/himmelblau/generate.sh" ]]; then + (cd "${BROKER_SRC}" && bash ./internal/providers/msentraid/himmelblau/generate.sh) || \ + warn "Failed to generate libhimmelblau (it may already be built)" + ok "libhimmelblau build complete" + else + warn "generate.sh script not found" + fi +fi + +info "Building broker binary (${BINARY_NAME})..." +BUILD_DIR=$(mktemp -d /tmp/authd-broker-build.XXXXXX) +trap 'rm -rf "$BUILD_DIR"' EXIT + +local_build_flags=() +if [[ -n "$BUILD_TAG" ]]; then + local_build_flags=("-tags=${BUILD_TAG}") +fi + +(cd "$BROKER_SRC" && go build "${local_build_flags[@]+"${local_build_flags[@]}"}" -o "${BUILD_DIR}/${BINARY_NAME}" ./cmd/authd-oidc) +ok "Broker binary built" + +# ============================================================ +# Step 2: Install binary +# ============================================================ + +info "Installing broker binary..." +sudo install -m 755 "${BUILD_DIR}/${BINARY_NAME}" "/usr/libexec/${BINARY_NAME}" +ok "Installed to /usr/libexec/${BINARY_NAME}" + +# ============================================================ +# Step 3: D-Bus system policy +# ============================================================ + +info "Installing D-Bus policy..." 
+sudo tee "/usr/share/dbus-1/system.d/${DBUS_NAME}.conf" > /dev/null <<DBUS_EOF
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
+ "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
+<busconfig>
+  <policy user="root">
+    <allow own="${DBUS_NAME}"/>
+    <allow send_destination="${DBUS_NAME}"/>
+    <allow receive_sender="${DBUS_NAME}"/>
+  </policy>
+  <policy context="default">
+    <allow send_destination="${DBUS_NAME}"/>
+    <allow receive_sender="${DBUS_NAME}"/>
+  </policy>
+</busconfig>
+DBUS_EOF
+ok "D-Bus policy installed"
+
+# Reload D-Bus to pick up the new policy
+sudo systemctl reload dbus 2>/dev/null || true
+
+# ============================================================
+# Step 4: authd broker discovery config (from upstream template)
+# ============================================================
+
+info "Creating authd discovery config..."
+sudo mkdir -p /etc/authd/brokers.d
+
+local_authd_conf="${VARIANT_CONF_DIR}/authd.conf"
+[[ -f "$local_authd_conf" ]] || die "Missing upstream template: ${local_authd_conf}"
+
+# Copy upstream template and clear the snap-specific brand_icon path
+sudo install -m 644 "$local_authd_conf" "/etc/authd/brokers.d/${VARIANT}.conf"
+sudo sed -i 's|^brand_icon = .*|brand_icon =|' "/etc/authd/brokers.d/${VARIANT}.conf"
+ok "Discovery config: /etc/authd/brokers.d/${VARIANT}.conf (from upstream template)"
+
+fi # end: ! $CONFIGURE_ONLY
+
+# ============================================================
+# Step 5: Broker configuration (from upstream template)
+# ============================================================
+
+# --rebuild preserves broker.conf exactly as-is; skip this step entirely.
+if ! $REBUILD; then
+
+info "Configuring broker credentials..."
+
+local_broker_conf="${VARIANT_CONF_DIR}/broker.conf"
+[[ -f "$local_broker_conf" ]] || die "Missing upstream template: ${local_broker_conf}"
+
+sudo mkdir -p -m 700 "${CONF_DIR}"
+
+if $CONFIGURE_ONLY; then
+ # Reconfigure path: broker.conf already exists; the sed substitutions below apply directly.
+ true
+else
+ # Fresh install: copy the upstream template as the starting point for sed substitutions.
+ sudo install -m 600 "$local_broker_conf" "${CONF_DIR}/broker.conf"
+fi
+
+# Substitute credential placeholders and set the issuer URL.
+# sed_escape handles special characters (|, &, /, \) in user-provided values. 
+if [[ -n "$ISSUER" ]]; then + sudo sed -i "s|^issuer = .*|issuer = $(sed_escape "$ISSUER")|" "${CONF_DIR}/broker.conf" +fi +if [[ -n "$CLIENT_ID" ]]; then + sudo sed -i "s|^client_id = .*|client_id = $(sed_escape "$CLIENT_ID")|" "${CONF_DIR}/broker.conf" +fi + +# Handle client_secret: update if present (uncommented or commented). +# broker.conf is mode 600 (root-only), so grep must be run with sudo. +if [[ -n "$CLIENT_SECRET" ]]; then + secret_esc=$(sed_escape "$CLIENT_SECRET") + if sudo grep -q '^client_secret = ' "${CONF_DIR}/broker.conf"; then + sudo sed -i "s|^client_secret = .*|client_secret = ${secret_esc}|" "${CONF_DIR}/broker.conf" + elif sudo grep -qE '^#\s*client_secret' "${CONF_DIR}/broker.conf"; then + sudo sed -i -E "s|^#\s*client_secret.*|client_secret = ${secret_esc}|" "${CONF_DIR}/broker.conf" + fi +fi + +# Uncomment and set SSH suffixes if provided. +# Match both commented (#ssh_allowed...) and already-live (ssh_allowed...) lines +# so re-running with a new value actually updates it. +if [[ -n "$SSH_SUFFIXES" ]]; then + if sudo grep -qE '^ssh_allowed_suffixes_first_auth = ' "${CONF_DIR}/broker.conf"; then + sudo sed -i -E "s|^ssh_allowed_suffixes_first_auth = .*|ssh_allowed_suffixes_first_auth = $(sed_escape "$SSH_SUFFIXES")|" "${CONF_DIR}/broker.conf" + else + sudo sed -i -E "s|^#\s*ssh_allowed_suffixes_first_auth.*|ssh_allowed_suffixes_first_auth = $(sed_escape "$SSH_SUFFIXES")|" "${CONF_DIR}/broker.conf" + fi +fi + +# Uncomment and set allowed_users if provided. +# Match both commented and already-live lines. +if [[ -n "$ALLOWED_USERS" ]]; then + if sudo grep -qE '^allowed_users = ' "${CONF_DIR}/broker.conf"; then + sudo sed -i -E "s|^allowed_users = .*|allowed_users = $(sed_escape "$ALLOWED_USERS")|" "${CONF_DIR}/broker.conf" + else + sudo sed -i -E "s|^#\s*allowed_users.*|allowed_users = $(sed_escape "$ALLOWED_USERS")|" "${CONF_DIR}/broker.conf" + fi +fi +# Clear any unfilled angle-bracket placeholders (e.g. 
) left over +# when credentials weren't provided. The config parser rejects '<' and '>'. +sudo sed -i -E 's/^([a-z_]+ = ).*<[^>]+>.*$/\1/' "${CONF_DIR}/broker.conf" +ok "Broker config: ${CONF_DIR}/broker.conf (from upstream template)" + +fi # end: ! $REBUILD + +# ============================================================ +# Step 6: Systemd service +# ============================================================ + +# Guard against empty SERVICE_NAME (would produce a broken unit file). +[[ -n "$SERVICE_NAME" ]] || die "BUG: SERVICE_NAME is empty for variant '${VARIANT}' — check load_variant_config in common.sh" + +if ! $CONFIGURE_ONLY; then + info "Installing systemd service..." + sudo tee "/etc/systemd/system/${SERVICE_NAME}.service" > /dev/null </dev/null || true) +fi + +if [[ -n "$_effective_client_id" ]]; then + sudo systemctl enable "${SERVICE_NAME}.service" + sudo systemctl restart "${SERVICE_NAME}.service" + ok "Service ${SERVICE_NAME}.service enabled and started" +else + # Disable (in case a previous run had enabled it) so it doesn't auto-start + # on reboot before credentials are in place. + sudo systemctl disable "${SERVICE_NAME}.service" 2>/dev/null || true + sudo systemctl stop "${SERVICE_NAME}.service" 2>/dev/null || true + warn "${SERVICE_NAME} installed but NOT enabled — no credentials provided" + warn "Configure ${CONF_DIR}/broker.conf then:" + warn " sudo systemctl enable --now ${SERVICE_NAME}" +fi + +# ============================================================ +# Step 7: Restart authd to discover the new broker +# ============================================================ + +info "Restarting authd to discover new broker..." 
+sudo systemctl restart authd.service 2>/dev/null || \ + sudo systemctl restart authd.socket 2>/dev/null || true +ok "authd restarted" + +# ============================================================ +# Step 8: Verify +# ============================================================ + +echo "" +info "Verification:" + +if sudo systemctl is-active --quiet "${SERVICE_NAME}.service"; then + ok "${SERVICE_NAME}.service is active" +else + warn "${SERVICE_NAME}.service is NOT active" + warn "Check logs: sudo journalctl -u ${SERVICE_NAME}.service -e" +fi + +# Check if authd can see the broker via its discovery config. +# /etc/authd/ is mode 700 (root-only), so sudo is required for the test. +echo "" +info "Current broker discovery configs:" +if sudo test -f "/etc/authd/brokers.d/${VARIANT}.conf"; then + ok "Broker config installed: /etc/authd/brokers.d/${VARIANT}.conf" +else + warn "Broker config missing: /etc/authd/brokers.d/${VARIANT}.conf (internal error)" +fi + +echo "" +printf '%s\n' "${GREEN}${BOLD}${DISPLAY_NAME} broker installed successfully!${NC}" +echo "" +printf '%s\n' "${BOLD}Configuration:${NC}" +echo " Edit the configuration file to set your client ID and secret:" +echo " sudo nano ${CONF_DIR}/broker.conf" +echo "" +echo " Then restart the broker:" +echo " sudo systemctl restart ${SERVICE_NAME}" +echo "" +printf '%s\n' "${BOLD}Test SSH login from your host:${NC}" +echo " ssh user@domain.com@\$(./dev/dev-env.sh ip)" +echo "" +printf '%s\n' "${BOLD}Useful commands:${NC}" +echo " sudo journalctl -u ${SERVICE_NAME} -f # Broker logs" +echo " sudo journalctl -u authd -f # authd logs" +echo " ls /etc/authd/brokers.d/ # List registered brokers" +echo " sudo systemctl restart ${SERVICE_NAME} # Restart broker" +echo ""
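
The credential patching in install-broker boils down to three moves: escape the value with `sed_escape`, rewrite the key's line if it is already live, otherwise uncomment it and set it in one pass. A minimal standalone sketch of that pattern — the temporary file and the example issuer/suffix values are hypothetical; the escaping rule and sed expressions mirror `common.sh` and Step 5 above:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Same escaping rule as common.sh's sed_escape: neutralize |, &, / and
# backslash so user-supplied values are safe inside a sed replacement.
sed_escape() { printf '%s\n' "$1" | sed -e 's/[|&/\\]/\\&/g'; }

# Throwaway stand-in for ${CONF_DIR}/broker.conf (hypothetical contents).
conf=$(mktemp)
cat > "$conf" <<'EOF'
issuer = https://example.invalid
#ssh_allowed_suffixes_first_auth = @example.com
EOF

# Live key: rewrite the whole line in place.
sed -i "s|^issuer = .*|issuer = $(sed_escape 'https://accounts.google.com')|" "$conf"

# Commented-out key: uncomment and set it in one pass.
sed -i -E "s|^#\s*ssh_allowed_suffixes_first_auth.*|ssh_allowed_suffixes_first_auth = $(sed_escape '@gmail.com')|" "$conf"

result=$(cat "$conf")
printf '%s\n' "$result"
rm -f "$conf"
```

install-broker runs the same substitutions under `sudo` because the real broker.conf is mode 600 and root-owned; after the two passes above the file reads `issuer = https://accounts.google.com` and `ssh_allowed_suffixes_first_auth = @gmail.com`.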