Automated OpenClaw deployment on Proxmox using Terraform + Ansible. TLS, authentication, and git integration included.
Infrastructure as Code (IaC) for deploying OpenClaw on Proxmox VE. From zero to AI coding assistant in minutes.
Proxiclaw automates the complete setup of OpenClaw (an AI-powered coding assistant) on your Proxmox infrastructure using a two-phase deployment approach:
Phase 1: Terraform → Creates and provisions Ubuntu VMs on Proxmox
Phase 2: Ansible → Installs and configures OpenClaw with all dependencies
The result: A fully functional, secure AI coding assistant with HTTPS, authentication, git integration, and automated backups - all configured automatically.
This project uses Terraform and Ansible for different purposes, following infrastructure best practices:
Terraform: Infrastructure Provisioning
- What it does: Creates virtual machines on Proxmox
- Handles: VM resources (CPU, memory, disk), networking, cloud-init configuration
- Output: A running Ubuntu 22.04 VM ready for software installation
- Runs: Once to provision the VM (or to modify/destroy infrastructure)
Ansible: Configuration Management
- What it does: Configures the VM and installs OpenClaw
- Handles: Docker installation, OpenClaw deployment, TLS setup, API keys, backups
- Output: Fully configured OpenClaw instance with all integrations working
- Runs: Once for initial setup, or repeatedly to update configuration (idempotent)
The Flow:
Terraform → Creates VM on Proxmox → Gets IP address
    ↓
Ansible → Connects to VM → Installs everything → OpenClaw ready!
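In bpg/proxmox terms, the Terraform half boils down to cloning the cloud-init template into a sized-up VM. An illustrative sketch only (not this repo's actual main.tf; attribute values mirror the example tfvars later in this README):

```hcl
resource "proxmox_virtual_environment_vm" "openclaw" {
  name      = "openclaw-vm"
  node_name = "proxmox-1"

  clone {
    vm_id = 9000             # the cloud-init template created in the setup guide
  }

  cpu {
    cores = 4
  }

  memory {
    dedicated = 8192
  }
}
```

Ansible then takes the resulting IP and does everything inside the guest, which is why the two tools never overlap.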
- Security: HTTPS with auto-generated TLS certificates and token-based authentication
- Git Integration: Automatic SSH key mounting for private repository access
- AI Models: Claude 3.5 Sonnet configured by default (cost-optimized)
- Backups: Automated configuration backups to git repositories
- Complete Setup: API keys, models, and workspace ready out of the box
- Features
- Quick Start
- Detailed Setup Guide - First-time Proxmox configuration
- Configuration - Terraform and Ansible variables
- Accessing OpenClaw - Authentication and device pairing
- Troubleshooting
- Documentation - Additional guides
Fully Automated Deployment
- Single command deployment: Terraform → VM → Ansible → OpenClaw running
- Zero manual Docker or configuration required
- Idempotent playbooks (safe to re-run and update)
Production-Ready Security
- HTTPS with auto-generated TLS certificates (or bring your own)
- Token-based authentication with device pairing
- SSH key mounting for secure git operations
- API keys automatically configured (never stored in git)
Developer Optimized
- Private GitHub/GitLab repo access via SSH
- Git config automatically mounted
- Claude 3.5 Sonnet configured by default (best coding performance)
- Workspace persistence across restarts
- Automated configuration backups to git
Comprehensive Documentation
- Step-by-step setup guides for every component
- Common commands reference
- Troubleshooting guides
- Multiple SSL/TLS configuration options
Prerequisites: Terraform, Ansible, and a Proxmox server with API access. First time? See Detailed Setup Guide below.
# 1. Clone and navigate to the project
git clone https://github.com/btotharye/proxiclaw.git
cd proxiclaw
# 2. Configure Terraform (tells it HOW to create the VM)
cp terraform/terraform.tfvars.example terraform/terraform.tfvars
vim terraform/terraform.tfvars # Edit: Proxmox host, API token, VM specs, storage
# 3. Configure Ansible (tells it WHAT to install on the VM)
cp ansible/inventory/group_vars/all.yml.example ansible/inventory/group_vars/all.yml
vim ansible/inventory/group_vars/all.yml # Edit: API keys, models, SSL options
# 4. PHASE 1: Create VM with Terraform
cd terraform
terraform init
terraform apply # Creates Ubuntu VM on Proxmox
# 5. Get the new VM's IP address
terraform output vm_ip_address
# 6. Update Ansible inventory with the VM IP
cd ../ansible
vim inventory/hosts # Add the IP from step 5
# 7. PHASE 2: Install OpenClaw with Ansible
ansible-playbook -i inventory/hosts playbooks/site.yml # Installs Docker, OpenClaw, etc.
# 8. Access OpenClaw in your browser
# https://<vm-ip>:18789

See docs/QUICK_START.md for a detailed walkthrough with screenshots.
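Step 6 above (wiring the Terraform output into the Ansible inventory) can be scripted. A sketch assuming a simple INI-style inventory; the exact format expected by this repo's `hosts.example` may differ, and the IP shown is a made-up example standing in for `terraform output vm_ip_address`:

```shell
# Hypothetical helper: write the Ansible inventory from the Terraform output.
# VM_IP would normally come from: VM_IP=$(terraform output -raw vm_ip_address)
VM_IP="192.168.30.50"

cat > hosts.generated <<EOF
[openclaw]
${VM_IP} ansible_user=ubuntu
EOF

cat hosts.generated
```

You could then point the playbook at it with `ansible-playbook -i hosts.generated playbooks/site.yml`.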
Before running Proxiclaw, ensure you have:
On Your Local Machine:
- Terraform >= 1.0
- Ansible >= 2.9 (`brew install ansible` on macOS or `pip3 install ansible`)
- Python 3.12+
- SSH key pair (`~/.ssh/id_rsa.pub`)
- ssh-agent with your key loaded: `ssh-add ~/.ssh/id_rsa`
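The version minimums above can be checked with a small `sort -V` helper (a sketch; in practice you would feed it the output of `terraform version` and `ansible --version`, and the version strings below are example values):

```shell
# Succeeds when version $1 >= minimum $2, comparing with GNU sort's -V mode.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the minimums listed above:
version_ge "1.5.7" "1.0"  && echo "terraform version ok"
version_ge "2.16.5" "2.9" && echo "ansible version ok"
```

Note that plain string comparison would get `2.16.5` vs `2.9` wrong, which is why `sort -V` is used.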
On Your Proxmox Server:
- Proxmox VE 7.0+
- API token with necessary permissions
- Ubuntu 22.04 cloud-init template (see below)
- SSH access to Proxmox host
- Local datastore configured to support 'snippets' content type
- Storage for VM disks (e.g., local-zfs, local-lvm)
- Network bridge configured (e.g., vmbr0)
Complete these steps once before your first deployment:
Before running Terraform, you need to set up SSH key authentication to your Proxmox host (required for cloud-init file uploads):
# Add your SSH key to the Proxmox host
ssh-copy-id root@your-proxmox-host
# Verify key authentication works
ssh root@your-proxmox-host "echo 'SSH key auth working'"
# Ensure your key is loaded in ssh-agent
ssh-add -L # Should list your keys
# If empty, add your key:
ssh-add ~/.ssh/id_rsa

The Terraform configuration uses cloud-init snippets, which must be enabled on a directory-based datastore:
# SSH to your Proxmox host
ssh root@your-proxmox-host
# Check current datastores
pvesm status
# Enable snippets on the 'local' datastore
pvesm set local --content backup,iso,vztmpl,snippets
# Verify snippets are enabled
pvesm status --content snippets

Before configuring Terraform, identify your storage and network setup:
# SSH to your Proxmox host
ssh root@your-proxmox-host
# Check available storage for VM disks
pvesm status
# Note the storage names (e.g., local-zfs, local-lvm, local)
# Check network bridges
ip link show | grep vmbr
# Note your bridge names (e.g., vmbr0, vmbr30)

Update your terraform/terraform.tfvars with the correct storage and bridge names from above.
Before deploying, you need to create an API token in Proxmox:
- Log into your Proxmox web interface (https://your-proxmox-ip:8006)
- Navigate to Datacenter → Permissions → API Tokens
- Click the Add button
- Fill in the token details:
  - User: `root@pam`
  - Token ID: `terraform` (or your preferred name)
  - Privilege Separation: unchecked (uncheck this for full access)
- Click Add
- Important: Copy and save the token secret immediately - it won't be shown again!
Your API token ID will be in the format: root@pam!terraform
Test your API token from your local machine:
curl -k "https://your-proxmox-ip:8006/api2/json/nodes" \
  -H "Authorization: PVEAPIToken=root@pam!terraform=your-secret-token-here"

You should receive a JSON response with your Proxmox node information.
For automated VM creation, create an Ubuntu cloud-init template:
# SSH to your Proxmox host
ssh root@your-proxmox-host
# Download Ubuntu 22.04 cloud image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
# Check available storage and note the storage name and network bridge
pvesm status
ip link show | grep vmbr
# Create a VM template (ID 9000)
# IMPORTANT: Replace 'local-lvm' with YOUR storage name from pvesm status output
# IMPORTANT: Replace 'vmbr0' with YOUR bridge name from ip link show output
# Common storage names: local, local-lvm, local-zfs
# Common bridge names: vmbr0, vmbr30
# Example using local-lvm and vmbr0:
qm create 9000 --name ubuntu-2204-cloudinit --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --serial0 socket --vga serial0
qm set 9000 --agent enabled=1
# Example using local-zfs and vmbr30:
# qm create 9000 --name ubuntu-2204-cloudinit --memory 2048 --net0 virtio,bridge=vmbr30
# qm importdisk 9000 jammy-server-cloudimg-amd64.img local-zfs
# qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-9000-disk-0
# qm set 9000 --ide2 local-zfs:cloudinit
# qm set 9000 --boot c --bootdisk scsi0
# qm set 9000 --serial0 socket --vga serial0
# qm set 9000 --agent enabled=1
# Convert to template
qm template 9000
# Clean up
rm jammy-server-cloudimg-amd64.img

Important Notes:
- The template creation must use the same storage that you'll configure in `terraform.tfvars` as `vm_storage`
- The network bridge must match what you'll configure as `vm_network_bridge`
- The qemu-guest-agent will be installed automatically by the Terraform cloud-init configuration
See docs/PROXMOX_SETUP.md for detailed instructions and troubleshooting.
This section covers all configuration files you'll need to edit before deployment.
Configure Proxmox connection and VM specifications:
# Proxmox Connection
proxmox_host = "192.168.30.11:8006" # Your Proxmox host:port
proxmox_node = "proxmox-1" # Your Proxmox node name
proxmox_api_token_id = "root@pam!terraform"
proxmox_api_token_secret = "your-secret-token-here"
# VM Configuration
vm_name = "openclaw-vm"
vm_cores = 4
vm_memory = 8192
vm_disk_size = "100G"
# Storage and Network (CRITICAL: Must match your Proxmox setup)
vm_storage = "local-zfs" # From 'pvesm status'
vm_network_bridge = "vmbr30" # From 'ip link show'
template_name = "9000" # Your cloud-init template VM ID
# SSH Configuration
ssh_public_key_file = "~/.ssh/id_rsa.pub"
vm_user = "ubuntu"

Important Notes:
- This project uses the `bpg/proxmox` Terraform provider (actively maintained)
- Storage and network bridge names vary by Proxmox installation - verify yours first
- The cloud-init configuration automatically installs qemu-guest-agent
OpenClaw supports multiple AI providers with different authentication methods:
Option 1: GitHub Copilot+ Subscription (Recommended for heavy usage)
Use your GitHub Copilot+ subscription instead of paying per token:
# GitHub Copilot+ subscription
primary_ai_provider: "copilot"

After deployment, run the interactive OAuth setup to connect your subscription.
Option 2: API Keys (Pay-per-use)
# Use API credits (pay per token)
primary_ai_provider: "api_keys_only"
anthropic_api_key: "sk-ant-your-key-here" # Claude requires API key
openai_api_key: "sk-proj-your-key-here"
openclaw_default_model: "anthropic/claude-sonnet-4-6"

Option 3: Mixed (Copilot + API Keys)
Use Copilot subscription for primary work and API keys as fallback:
primary_ai_provider: "copilot"
# Add API keys for other providers as needed
anthropic_api_key: "sk-ant-your-key-here" # Claude fallback
openai_api_key: "sk-proj-your-key-here"    # OpenAI fallback

Complete setup guide: docs/AI_PROVIDER_SETUP.md
The Ansible playbook automatically configures API keys. For GitHub Copilot+, you'll complete an interactive OAuth flow after deployment.
Note: Claude Sonnet 4.6 is available through a GitHub Copilot+ subscription at no extra cost!
Recommended Models for Coding (Cost vs Performance):
| Model | Cost | Use Case |
|---|---|---|
| `anthropic/claude-sonnet-4-6` | $3-4/$12-15 per 1M tokens | Best value - Complex coding, refactoring, debugging |
| `anthropic/claude-haiku-3` | $0.80/$4 per 1M tokens | Simple tasks, code reviews (75% cheaper) |
| `gpt-4o-mini` | $0.15/$0.60 per 1M tokens | Basic scripts, simple questions (90% cheaper) |
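To put the prices above in perspective, a rough per-session estimate. The token counts are invented for illustration, and the low end of the Claude Sonnet prices ($3 input / $12 output per 1M tokens) is used:

```shell
# Rough cost estimate for one coding session:
# 200k input tokens and 50k output tokens at $3/$12 per 1M tokens.
awk 'BEGIN {
  input_tokens  = 200000
  output_tokens = 50000
  cost = input_tokens / 1e6 * 3.00 + output_tokens / 1e6 * 12.00
  printf "estimated cost: $%.2f\n", cost   # prints: estimated cost: $1.20
}'
```

Swapping in the Haiku or gpt-4o-mini rates shows why routing simple tasks to cheaper models adds up quickly.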
Enable web search, scraping, and other online tools:
# Add to ansible/inventory/group_vars/all.yml
braveapi_key: "BSAxxx..." # Brave Search API
serper_api_key: "xxx..." # Serper.dev Google Search API
firecrawl_api_key: "fc-xxx..."   # Firecrawl web scraping

Or configure interactively on the VM:
ssh ubuntu@<vm-ip>
cd /opt/openclaw
docker compose exec openclaw-gateway openclaw configure --section web

These keys are also auto-configured in auth-profiles.json when set in Ansible.
For secure HTTPS access on your local network:
# Enable TLS with auto-generated self-signed certificate
openclaw_enable_tls: true
# OR provide your own certificate (e.g., from mkcert)
openclaw_enable_tls: true
openclaw_tls_cert_path: "/home/ubuntu/.openclaw/certs/cert.pem"
openclaw_tls_key_path: "/home/ubuntu/.openclaw/certs/key.pem"

SSL Options:
- Self-signed (auto): Set `openclaw_enable_tls: true` (browser warnings expected)
- mkcert (recommended): Locally-trusted certificates, no warnings - see SSL Setup Guide
- Tailscale: Secure mesh network with built-in HTTPS
- Let's Encrypt: Production-ready if you have a domain
Full SSL setup guide: docs/SSL_SETUP.md
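For the self-signed option, the auto-generated certificate is roughly equivalent to the openssl invocation below (a sketch; the playbook's exact subject, key size, and lifetime may differ, and the IP is an example value):

```shell
# Generate a self-signed certificate for the VM's IP (example: 192.168.30.50).
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout certs/key.pem -out certs/cert.pem \
  -days 365 -subj "/CN=192.168.30.50" \
  -addext "subjectAltName=IP:192.168.30.50"

# Inspect the result
openssl x509 -in certs/cert.pem -noout -subject
```

The subjectAltName entry matters: modern browsers ignore the CN field, so without it even a manually-trusted certificate will be rejected for the IP.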
Automatically back up your OpenClaw configuration and workspace metadata to a git repository.
What gets backed up:
- ✅ `openclaw.json` - Main configuration
- ✅ `devices/paired.json` - Device pairing (no secrets)
- ✅ `.cursorrules` files - Your AI guardrails for each project
- ❌ API keys (never committed)
- ❌ Workspace code (use per-project git repos)
Setup:
1. Create a private GitHub repo (e.g., `openclaw-config-backup`)
2. Configure in `ansible/inventory/group_vars/all.yml`:

   openclaw_backup_repo: "git@github.com:yourusername/openclaw-config-backup.git"
   # Optional: customize schedule (default: daily at 2 AM)
   openclaw_backup_cron_hour: "2"

3. Deploy with Ansible (backup will run automatically)
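At its core, the nightly job is just a commit-and-push of the config files. A minimal sketch of what such a backup script might do, demonstrated against a local bare repository standing in for the GitHub remote (the config content is a made-up sample, not the real schema):

```shell
# A local bare repo stands in for the GitHub remote.
git init --bare --quiet backup-remote.git
git clone --quiet backup-remote.git backup-work
cd backup-work
git config user.email "backup@example.com"
git config user.name  "OpenClaw Backup"

# Copy the files that get backed up (sample stands in for ~/.openclaw/openclaw.json)
echo '{"gateway": {"port": 18789}}' > openclaw.json

git add openclaw.json
git commit --quiet -m "Automated backup $(date +%F)"
git push --quiet origin HEAD
cd ..

# The "remote" now holds the backup commit:
git --git-dir=backup-remote.git log --oneline
```

The real role presumably also excludes secret-bearing files; the ✅/❌ list above is what actually lands in the repo.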
Manual backup:
ssh ubuntu@<vm-ip>
~/openclaw-backups/backup.sh

Initialize workspace projects as git repos:
ssh ubuntu@<vm-ip>
~/bin/init-workspace-repos.sh

This script will:
- Create git repos for each workspace project
- Add `.gitignore` and `README.md` templates
- Include your `.cursorrules` files
- Make initial commits
Then push each project to its own GitHub repo!
Full backup documentation: ansible/roles/openclaw-backup/README.md
After deployment, OpenClaw will be accessible at:
- HTTP: `http://<vm-ip>:18789` (requires SSL for some features)
- HTTPS: `https://<vm-ip>:18789` (recommended)
1. Get the Gateway Token:

   ssh ubuntu@<vm-ip> "grep -oP '\"token\":\s*\"\K[^\"]+' ~/.openclaw/openclaw.json"

2. Access with Token - open in your browser: `https://<vm-ip>:18789/#token=YOUR_TOKEN_HERE`

3. Approve Device Pairing:

   # List pending device pairing requests
   ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw devices list"

   # Approve the pairing request (use the Request ID from above)
   ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw devices approve <REQUEST_ID>"

4. Refresh your browser - your device is now paired and authenticated!
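The token-extraction command above is just a `grep -oP` over openclaw.json. You can see what it matches against a sample file (the token value is made up, and the real file has more fields):

```shell
# Sample of the relevant part of ~/.openclaw/openclaw.json
cat > sample-openclaw.json <<'EOF'
{
  "gateway": {
    "port": 18789,
    "token": "abc123def456"
  }
}
EOF

# \K discards everything matched so far, leaving only the token value.
grep -oP '"token":\s*"\K[^"]+' sample-openclaw.json
```

Note that `-P` (Perl regex) requires GNU grep, which is the default on the Ubuntu VM but not on macOS.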
# List all devices (pending and paired)
ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw devices list"
# Remove a paired device
ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw devices remove <DEVICE_ID>"
# Clear all paired devices
ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw devices clear"

For more commands, see docs/COMMON_COMMANDS.md
Once authenticated, you're ready to start using OpenClaw as your AI coding assistant!
Complete usage guide: docs/GETTING_STARTED.md
Quick examples:
You: Clone https://github.com/username/myproject.git and list the files
You: Create a Python script that processes JSON files in the data/ directory
You: Add error handling to the main function in app.py
You: Run the tests and show me any failures
You: Create a new branch called 'feature/new-api' and add a REST endpoint
Setting up Git access:
Note: The Ansible deployment automatically mounts your ~/.ssh directory and .gitconfig from the VM into the OpenClaw container. Once you configure SSH keys on the VM, OpenClaw will immediately have access to them.
# SSH into the VM and configure git
ssh ubuntu@<vm-ip>
git config --global user.name "Your Name"
git config --global user.email "your@email.com"
# Set up SSH keys for GitHub/GitLab (recommended)
ssh-keygen -t ed25519 -C "your@email.com"
cat ~/.ssh/id_ed25519.pub # Add this to your Git provider
# Configure SSH for GitHub
cat >> ~/.ssh/config << 'EOF'
Host github.com
HostName github.com
User git
IdentityFile ~/.ssh/id_ed25519
IdentitiesOnly yes
EOF
chmod 600 ~/.ssh/config ~/.ssh/id_ed25519
# Test SSH connection
ssh -T git@github.com
# Restart OpenClaw to mount the SSH keys
ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose restart openclaw-gateway"

.
├── terraform/                  # Proxmox VM provisioning
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   └── terraform.tfvars.example
├── ansible/                    # Configuration management
│   ├── inventory/
│   │   ├── hosts.example
│   │   └── group_vars/
│   ├── playbooks/
│   │   ├── site.yml
│   │   ├── provision-vm.yml
│   │   ├── configure-system.yml
│   │   └── deploy-openclaw.yml
│   ├── roles/
│   │   ├── common/
│   │   ├── docker/
│   │   └── openclaw/
│   └── ansible.cfg
└── scripts/                    # Helper scripts
    ├── setup.sh
    └── deploy.sh
If you prefer to create the VM manually instead of using Terraform:
- Create Ubuntu 22.04 VM in Proxmox
- Note the IP address
- Ensure SSH access with your key
- Update `ansible/inventory/hosts` with the IP
- Run the Ansible playbook
Error: "failed to open SSH client: unable to authenticate"
- Ensure you've set up SSH key authentication to the Proxmox host: `ssh-copy-id root@your-proxmox-host`
- Verify your SSH key is loaded: `ssh-add -L`
- If empty, add your key: `ssh-add ~/.ssh/id_rsa`
Error: "datastore does not support content type 'snippets'"
- Enable snippets on local storage: `ssh root@proxmox-host "pvesm set local --content backup,iso,vztmpl,snippets"`
- Verify: `ssh root@proxmox-host "pvesm status --content snippets"`
Error: "storage 'local-lvm' does not exist" or similar
- Check your actual storage names: `ssh root@proxmox-host "pvesm status"`
- Update `vm_storage` in `terraform.tfvars` with the correct storage name (e.g., local-zfs)
Error: "bridge 'vmbr0' does not exist" or similar
- Check your network bridges: `ssh root@proxmox-host "ip link show | grep vmbr"`
- Update `vm_network_bridge` in `terraform.tfvars` with your bridge name
Error: "timeout while waiting for the QEMU agent"
- The guest agent is being installed via cloud-init and takes ~30-60 seconds after VM creation
- Check cloud-init status: `ssh root@proxmox-host "qm guest exec VM_ID -- cloud-init status"`
- Verify the agent is running: `ssh root@proxmox-host "qm agent VM_ID ping"`
SSH authentication fails to newly created VM
- The Terraform configuration uses cloud-init to install your SSH key
- Verify the key in terraform.tfvars matches your actual key: `cat ~/.ssh/id_rsa.pub`
- Check cloud-init completed: `ssh root@proxmox-host "qm guest exec VM_ID -- tail /var/log/cloud-init-output.log"`
- Verify SSH access: `ssh ubuntu@<vm-ip>`
- Check Python is installed on the target: `ansible all -m ping -i inventory/hosts`
- Run with verbose output: `ansible-playbook -vvv -i inventory/hosts playbooks/site.yml`
- Ensure Ansible is installed: `brew install ansible` (macOS) or `pip3 install ansible`
OpenClaw says "No git credentials configured"
The Ansible playbook automatically mounts your VM's ~/.ssh directory into the OpenClaw container. If git authentication isn't working:
1. Verify SSH keys are configured on the VM:

   ssh ubuntu@<vm-ip>
   ls -la ~/.ssh/id_ed25519*   # Should exist
   cat ~/.ssh/config           # Should have GitHub/GitLab config
   ssh -T git@github.com       # Test authentication

2. Restart OpenClaw to mount the SSH keys:

   ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose restart openclaw-gateway"

3. Verify the SSH keys are visible inside the container:

   ssh ubuntu@<vm-ip> "docker exec openclaw-openclaw-gateway-1 ls -la /home/node/.ssh/"
   ssh ubuntu@<vm-ip> "docker exec openclaw-openclaw-gateway-1 ssh -T git@github.com"

4. Check the docker-compose override exists:

   ssh ubuntu@<vm-ip> "cat /opt/openclaw/docker-compose.override.yml"
   # Should show SSH and .gitconfig volume mounts
If SSH keys still aren't working: the Ansible playbook creates a docker-compose.override.yml with these mounts:
- `/home/ubuntu/.ssh:/home/node/.ssh:ro` (SSH keys)
- `/home/ubuntu/.gitconfig:/home/node/.gitconfig:ro` (Git config)
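Assuming the mounts listed above, the generated override file would look roughly like this (an illustrative sketch; the service name must match the one in the upstream docker-compose.yml):

```yaml
# /opt/openclaw/docker-compose.override.yml (illustrative)
services:
  openclaw-gateway:
    volumes:
      - /home/ubuntu/.ssh:/home/node/.ssh:ro              # SSH keys
      - /home/ubuntu/.gitconfig:/home/node/.gitconfig:ro  # Git identity
```

Docker Compose merges override files automatically, so the upstream compose file never needs to be edited.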
If Terraform doesn't show the IP address (guest agent not ready), get it manually:
# Via guest agent (preferred):
ssh root@proxmox-host "qm agent VM_ID network-get-interfaces" | grep '"ip-address"' | grep 192.168
# Via Proxmox CLI:
ssh root@proxmox-host "qm list"
# Then check DHCP leases or the Proxmox web UI

- Never commit `terraform.tfvars` or files with secrets
- Use Ansible Vault for sensitive variables
- Restrict API token permissions to minimum required
- Use SSH keys, not passwords
- The Terraform configuration uses SSH key authentication for VM access
- SSH keys are automatically installed via cloud-init during VM creation
The Terraform configuration creates a custom cloud-init user data file that:
- Creates the ubuntu user with sudo access
- Installs your SSH public key for authentication
- Installs and enables qemu-guest-agent for VM management
- Configures the system for first boot
This approach ensures VMs are fully configured and accessible immediately after creation, with no manual intervention required.
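The user data described above would look something like the following #cloud-config (illustrative only; the repo's actual template may differ, and the key shown is a placeholder):

```yaml
#cloud-config
users:
  - name: ubuntu
    groups: [sudo]
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA... your-public-key-here   # replaced with ssh_public_key_file
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```

Starting the guest agent in runcmd is what lets Terraform read the VM's IP address back out after first boot.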
This project uses the bpg/proxmox Terraform provider instead of the older Telmate provider because:
- Actively maintained and up-to-date
- Better support for modern Proxmox versions
- Improved cloud-init integration
- More reliable guest agent interaction
- AI Provider Setup - Configure GitHub Copilot+, Claude Pro/Max, or API keys
- Getting Started Guide - Using OpenClaw as your AI assistant
- Common Commands - Quick reference for frequent tasks
- SSL Setup Guide - Configure HTTPS with various methods
- Proxmox Setup - Detailed Proxmox configuration
| Task | Command |
|---|---|
| Get gateway token | ssh ubuntu@<vm-ip> "grep -oP '\"token\":\s*\"\K[^\"]+' ~/.openclaw/openclaw.json" |
| List devices | ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw devices list" |
| Approve device | ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw devices approve <REQUEST_ID>" |
| View logs | ssh ubuntu@<vm-ip> "docker logs openclaw-openclaw-gateway-1 -f" |
| Restart OpenClaw | ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose restart" |
| Check health | ssh ubuntu@<vm-ip> "cd /opt/openclaw && docker compose exec openclaw-gateway openclaw health" |
Contributions welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
See CONTRIBUTING.md for more details.
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenClaw - The AI coding assistant this deploys
- Proxmox VE - Virtualization platform
- bpg/proxmox - Terraform provider
- Check the documentation
- Open an issue
- Share your experience
Made by Brian Totharye