This guide walks you through deploying Moltagent on Hetzner infrastructure. The recommended setup uses three components: a managed Nextcloud (Storage Share), a Bot VM, and an Ollama VM.
Time required: approximately 30-60 minutes for someone comfortable with server administration.
Monthly cost: starting at ~30 Euro/month.
- A Hetzner Cloud account (console.hetzner.cloud)
- SSH key pair for server access
- At least one LLM API key (Anthropic, OpenAI, DeepSeek, etc.) for cloud-assisted mode, or none for local-only mode
- Basic familiarity with Linux, systemd, and SSH
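If you don't yet have a key pair for server access, one can be generated like this; the file path and comment are just examples:

```shell
# Generate a dedicated deployment key (ed25519); path and comment are examples.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/moltagent_deploy -N "" -C "moltagent-deploy"
cat ~/.ssh/moltagent_deploy.pub   # paste this into the Hetzner console later
```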
- Go to Hetzner Robot
- Order a Storage Share (BX11, 100GB is sufficient, ~5 Euro/month)
- Choose a datacenter close to you (Falkenstein recommended for EU)
- Wait for the provisioning email (usually under 1 hour)
- Note your Storage Share URL, admin username, and password
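Once the credentials arrive, a quick WebDAV request can confirm the Storage Share is reachable; the URL and account below are placeholders for the values from your provisioning email:

```shell
# Expect an XML "multistatus" response describing the admin user's root folder.
curl -u "admin:ADMIN_PASSWORD" -X PROPFIND -H "Depth: 0" \
  "https://nxXXXXX.your-storageshare.de/remote.php/dav/files/admin/"
```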
- In Hetzner Cloud Console, create a new server:
- Image: Ubuntu 24.04
- Type: CPX21 (3 vCPU, 4GB RAM) or larger
- Location: same datacenter as your Storage Share
- SSH key: add your public key
- Note the server's IPv4 address
- Create another server:
- Image: Ubuntu 24.04
- Type: CPX41 (8 vCPU, 16GB RAM) or larger
- Location: same datacenter
- SSH key: same key
- Note the server's IPv4 address
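If you prefer the command line, both servers can also be created with Hetzner's `hcloud` CLI (assumed to be installed and authenticated; the server names and SSH key label here are examples):

```shell
# Server types and location mirror the console choices above; adjust as needed.
hcloud server create --name moltagent-bot --image ubuntu-24.04 \
  --type cpx21 --location fsn1 --ssh-key my-key
hcloud server create --name moltagent-ollama --image ubuntu-24.04 \
  --type cpx41 --location fsn1 --ssh-key my-key
```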
- Log into your Nextcloud admin panel
- Install the required apps: Passwords, Deck, Collectives, Talk, Mail, Calendar, Contacts
- Create the `moltagent` user via the Nextcloud admin panel (Settings → Users). On a managed Storage Share, `occ` is not available; use the web interface instead.
- Create the agent's folder structure:

```shell
# Create these via the Nextcloud Files web UI or WebDAV.
# On a managed Storage Share, occ is not available.
# Folder names are case-sensitive.

# Via WebDAV (replace NC_URL, NC_USER, NC_PASS):
for dir in Moltagent Moltagent/Inbox Moltagent/Outbox Moltagent/Logs Moltagent/Memory Moltagent/SkillTemplates; do
  curl -u "$NC_USER:$NC_PASS" -X MKCOL "https://$NC_URL/remote.php/dav/files/$NC_USER/$dir"
done
```

- Store your LLM API keys in NC Passwords:
  - Create entries named `claude-api-key`, `deepseek-api-key`, etc.
  - Share each entry with the `moltagent` user
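To confirm the folder structure was created, you can list it over WebDAV (same `NC_URL`, `NC_USER`, `NC_PASS` variables as above):

```shell
# Lists the Moltagent folder; expect a WebDAV XML multistatus response
# naming the six subfolders created above.
curl -u "$NC_USER:$NC_PASS" -X PROPFIND -H "Depth: 1" \
  "https://$NC_URL/remote.php/dav/files/$NC_USER/Moltagent/"
```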
SSH into the Ollama VM:

```shell
ssh root@<OLLAMA_IP>
```

Install Ollama and pull models:

```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3:8b           # General-purpose reasoning
ollama pull qwen2.5:3b         # Fast classification
ollama pull nomic-embed-text   # Embeddings for semantic search
```

Configure Ollama to listen on the private network:

```ini
# Edit /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

Then apply the change with `systemctl daemon-reload && systemctl restart ollama`.

Critical: block all outbound internet access on this VM:

```shell
ufw default deny outgoing
ufw default deny incoming
# Keep SSH reachable -- without this rule, enabling ufw locks you out
ufw allow ssh
ufw allow in from <BOT_VM_IP> to any port 11434
ufw enable
```

SSH into the Bot VM:
```shell
ssh root@<BOT_IP>
```

Install Node.js and clone the repository:

```shell
# Node.js 22.x LTS recommended (minimum: 18.x)
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
apt-get install -y nodejs
git clone https://github.com/moltagent/moltagent.git /opt/moltagent
cd /opt/moltagent
npm install --production
```

Store the `moltagent` user's Nextcloud password in the credential store:

```shell
mkdir -p /etc/credstore
# Store the moltagent user's Nextcloud password
echo -n "YOUR_MOLTAGENT_NC_PASSWORD" > /etc/credstore/moltagent-nc-password
chmod 600 /etc/credstore/moltagent-nc-password
```

Configure the providers:

```shell
# Edit the provider configuration with your Ollama VM IP and provider preferences
nano config/moltagent-providers.yaml
```

Install the systemd service:

```shell
cp deploy/moltagent.service /etc/systemd/system/
# Edit the service file to set your NC_URL, NC_USER, OLLAMA_URL
nano /etc/systemd/system/moltagent.service
systemctl daemon-reload
systemctl enable moltagent
```
```shell
systemctl start moltagent
```

Lock down the Bot VM firewall:

```shell
ufw default deny outgoing
ufw default deny incoming
ufw allow ssh
# Allow inbound webhook from Nextcloud Storage Share
ufw allow in from <NC_STORAGE_SHARE_IP> to any port 3000
ufw allow out to <NC_STORAGE_SHARE_IP> port 443
ufw allow out to <OLLAMA_IP> port 11434
# Allow DNS lookups so API hostnames can resolve
ufw allow out 53
# Cloud LLM APIs (skip for local-only mode). ufw matches IP addresses,
# not hostnames, so outbound HTTPS must be allowed broadly:
ufw allow out 443/tcp
ufw enable
```

Hetzner Storage Share does not allow running arbitrary OCC commands, so you need to file a support ticket.
On the Bot VM:
```shell
# Generate a 128-character hex secret
openssl rand -hex 64
```

Save this output. You will need it in two places:

- Store it as `nc-talk-secret` in NC Passwords and share it with the `moltagent` user
- Include it in the Hetzner support ticket below
File a support ticket at Hetzner with the following:

```
Subject: Enable NC Talk Bot for Storage Share nxXXXXX

Please run the following OCC command:

sudo -u www-data php occ talk:bot:install \
  --feature=webhook \
  --feature=response \
  "Moltagent" \
  "<YOUR_128_CHAR_SECRET>" \
  "http://<BOT_VM_IP>:3000/webhook/nctalk" \
  "Moltagent AI Assistant"

Thank you.
```

Replace `<YOUR_128_CHAR_SECRET>` with the secret you generated and `<BOT_VM_IP>` with your Bot VM's IPv4 address. Hetzner support typically responds within a few hours.
Once Hetzner confirms the bot is registered:
- Create a new Talk room in Nextcloud (or use an existing one)
- Add the Moltagent bot to the room
- Note the room token from the URL (the part after `/call/`)
- Ensure your Bot VM firewall allows inbound connections from the Storage Share IP on port 3000
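For webhook debugging, it helps to know that the Nextcloud Talk bot API signs each request: the `X-Nextcloud-Talk-Signature` header is the HMAC-SHA256 hex digest of the `X-Nextcloud-Talk-Random` header value concatenated with the raw request body, keyed with the shared secret. A sketch for recomputing it by hand; all values below are stand-ins:

```shell
SECRET="replace-with-your-128-char-secret"        # the secret from the ticket
random="value-of-X-Nextcloud-Talk-Random-header"  # copied from the request
body='{"type":"Create"}'                          # raw request body
# The hex digest should match the X-Nextcloud-Talk-Signature header
printf '%s%s' "$random" "$body" | openssl dgst -sha256 -hmac "$SECRET" -r | cut -d' ' -f1
```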
Check that the service is running:

```shell
systemctl status moltagent
journalctl -u moltagent -f
```

Send a message in the Talk room. If the webhook is configured correctly, the bot will respond.
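If the bot does not respond, one thing to check from the Bot VM is that Ollama is reachable over the private network (`<OLLAMA_IP>` as before):

```shell
# A JSON list of the pulled models confirms both connectivity and the
# firewall rules on each side.
curl http://<OLLAMA_IP>:11434/api/tags
```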
Check the public dashboard architecture view for a reference of what a healthy system looks like.
For development and testing, you can run everything on a single machine:
- Install Ollama locally
- Point the config at a Nextcloud instance (can be a test Storage Share or local Nextcloud)
- Run `npm test` to verify the test suite
- Run the agent directly with `node webhook-server.js`
This skips network isolation and is not suitable for production, but it's sufficient for development and contribution testing.
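Condensed, the single-machine flow looks like this (assumes Ollama and Node.js 18+ are already installed and the config points at your test Nextcloud):

```shell
ollama pull qwen2.5:3b        # a small local model is enough for testing
git clone https://github.com/moltagent/moltagent.git
cd moltagent
npm install                   # include dev dependencies so npm test works
npm test                      # verify the test suite
node webhook-server.js        # run the agent directly
```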
- Deployment Guide - SearXNG, Speaches, email, credentials, full setup
- Architecture - understand the three-VM isolation model
- Security Model - trust boundaries and credential brokering
- Configuration - full reference for all config options
- LLM Providers - provider adapters and job routing