⚠️ Note: This project is a maintained and extended fork of coleam00/local-ai-packaged, which no longer appears to be actively maintained. This version includes up-to-date dependencies, bug fixes, and full automation scripts for setup and management.
Local AI Packaged provides a full self-hosted AI environment using Docker Compose. It bundles everything you need for local LLM workflows, including:
- Ollama – Local LLM inference (CPU / NVIDIA / AMD)
- Open WebUI – Chat interface for your models and n8n agents
- n8n – Workflow automation with Redis queue mode support
- Flowise – Low-code AI chain builder
- Supabase – Database, vector store, and auth system (optional)
- Langfuse – LLM tracing and observability
- Qdrant – Vector database for RAG
- Neo4j – Graph database
- SearXNG – Web search engine for RAG pipelines
- Unsloth – LLM fine-tuning studio (NVIDIA GPU required, ~20 GB image)
- Caddy or SWAG – Reverse proxy with auto HTTPS (optional)
This version is designed for technical self-hosters running the stack on a home server, NAS (e.g. Synology), or any Linux/Windows host.
- Docker and Docker Compose v2+
- Python 3.8+
- Git
- At least 16 GB RAM recommended
- NVIDIA: Install NVIDIA Container Toolkit
- AMD: ROCm runtime configured
- CPU only: Works fine, just slower
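The profile choice can be automated by probing for vendor tools on `PATH`. Here is a minimal Python sketch in the spirit of what `generate_env.py` does; the function name and exact detection logic are illustrative, not the script's actual implementation:

```python
import shutil

def detect_profile() -> str:
    """Guess a hardware profile from tools available on PATH; fall back to CPU."""
    if shutil.which("nvidia-smi"):
        return "gpu-nvidia"
    if shutil.which("rocm-smi"):
        return "gpu-amd"
    return "cpu"
```

On a machine without GPU drivers this simply returns `"cpu"`, which matches the stack's safe default.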
```bash
git clone https://github.com/gamersalpha/local-ai-packaged.git
cd local-ai-packaged
```

Generate your `.env`:

```bash
python3 generate_env.py --yes --regen-sensitive
```

To choose your base domain (each service's subdomain is derived from it automatically):

```bash
# Via the --domain option
python3 generate_env.py --yes --regen-sensitive --domain home.example.com

# Or in interactive mode (without --yes), the script prompts for the domain
python3 generate_env.py --regen-sensitive
```

Features:
- Generates secure random secrets
- Detects GPU type (NVIDIA / AMD / CPU)
- Sets correct local paths for volumes
- Configures `BASE_DOMAIN` interactively or via the `--domain` flag
- Expands `BASE_DOMAIN` into per-service hostnames automatically
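The last two behaviors can be sketched in a few lines of Python. The function names below are illustrative, not the actual helpers in `generate_env.py`:

```python
import secrets

def random_secret(nbytes: int = 32) -> str:
    # URL-safe random token, the standard way to mint secrets like these
    return secrets.token_urlsafe(nbytes)

def derive_hostnames(base_domain: str,
                     services=("n8n", "openwebui", "flowise")) -> dict:
    # Each service gets a subdomain of BASE_DOMAIN, e.g. n8n.home.example.com
    return {svc: f"{svc}.{base_domain}" for svc in services}
```

With `base_domain="home.example.com"`, this yields `n8n.home.example.com`, `openwebui.home.example.com`, and so on, matching the subdomain table further below.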
Interactive setup wizard (recommended for first-time setup):

```bash
python3 start_services.py --setup
```

Or direct launch:

```bash
python3 start_services.py --profile cpu --no-supabase --no-caddy
```

The main orchestrator with service selection, proxy detection, and an interactive wizard:
```bash
python3 start_services.py [options]
```

| Option | Description |
|---|---|
| `--setup` | Interactive setup wizard with service picker |
| `--profile [cpu\|gpu-nvidia\|gpu-amd]` | Select hardware profile |
| `--environment [private\|public]` | Network mode (localhost-only or LAN) |
| `--services [names...]` | Select specific services to deploy |
| `--proxy [caddy\|swag\|none\|auto]` | Reverse proxy type (auto-detects SWAG) |
| `--no-supabase` | Skip Supabase |
| `--no-caddy` | Skip Caddy reverse proxy |
| `--update` | Pull latest Docker images before start |
| `--dry-run` | Preview configuration only |
Use `--services` to deploy only what you need. Dependencies are resolved automatically (e.g. `n8n` will also start `postgres` and `redis`).
```bash
# Deploy only n8n, Open WebUI, and Ollama
python3 start_services.py --services n8n openwebui ollama

# Deploy everything (default)
python3 start_services.py --services all
```

Available services: `n8n`, `openwebui`, `flowise`, `qdrant`, `neo4j`, `langfuse`, `searxng`, `ollama`, `unsloth`
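Dependency resolution as described above amounts to a transitive closure over a small dependency map. A sketch follows; the map here is hypothetical (the real one lives inside `start_services.py`), though it reflects the database and queue relationships documented below:

```python
# Hypothetical dependency map; illustrative only.
DEPS = {
    "n8n": {"postgres", "redis"},
    "langfuse": {"postgres"},
    "flowise": {"postgres"},
}

def resolve(requested) -> set:
    """Return the requested services plus all transitive dependencies."""
    selected, stack = set(), list(requested)
    while stack:
        svc = stack.pop()
        if svc not in selected:
            selected.add(svc)
            stack.extend(DEPS.get(svc, set()))  # leaf services have no deps
    return selected
```

For example, `resolve(["n8n"])` yields `{"n8n", "postgres", "redis"}`, which is why `--services n8n` also brings up the database and the queue backend.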
```bash
# Interactive wizard
python3 start_services.py --setup

# CPU, all services, no reverse proxy
python3 start_services.py --profile cpu --no-supabase --no-caddy

# NVIDIA GPU, public network with SWAG proxy
python3 start_services.py --profile gpu-nvidia --environment public --proxy swag

# Dry-run to preview what would be deployed
python3 start_services.py --services n8n openwebui ollama --dry-run

# Pull latest images and restart
python3 start_services.py --update
```

What it does:
- Validates or creates your `.env`
- Clones Supabase's official Docker stack if missing
- Toggles Caddy and Supabase dynamically in `docker-compose.yml`
- Generates a new secret key for SearXNG
- Auto-detects SWAG reverse proxy and installs nginx configs
- Resolves service dependencies automatically
- Stops existing containers before redeploy
- Starts Supabase first (if enabled), then the Local AI stack
```bash
./update_services.sh [profile]
```

Examples:

```bash
./update_services.sh cpu
./update_services.sh gpu-nvidia
```

This stops all containers, pulls the latest images, and restarts the stack.
| Service | Description | Default URL (private) |
|---|---|---|
| n8n | Workflow automation | http://localhost:5678 |
| Open WebUI | Chat interface for LLMs | http://localhost:8080 |
| Ollama | Local LLM API | http://localhost:11434 |
| Flowise | Low-code AI builder | http://localhost:3001 |
| Langfuse | LLM tracing dashboard | http://localhost:3000 |
| SearXNG | Web search for RAG | http://localhost:8081 |
| Neo4j | Graph database browser | http://localhost:7474 |
| Qdrant | Vector database API | http://localhost:6333 |
| PostgreSQL | Shared database | localhost:5433 |
| Unsloth | LLM fine-tuning studio (~20 GB image, long pull) | http://localhost:8888 |
| Supabase | DB, Auth & API (optional) | http://localhost:54323 |
In private mode, all ports are bound to `127.0.0.1`. In public mode, they are accessible on all interfaces.
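The difference between the two override files comes down to the host address in each port mapping. A minimal sketch with one service (the mapping below is illustrative; see the actual override files for the full list):

```yaml
# docker-compose.override.private.yml (sketch) — loopback only
services:
  n8n:
    ports:
      - "127.0.0.1:5678:5678"   # reachable from this host only
```

In the public override, the same mapping would read `"5678:5678"`, binding the port on all interfaces.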
Set `BASE_DOMAIN` in your `.env` to auto-derive all service hostnames:
```bash
BASE_DOMAIN=home.example.com
```

This generates the following subdomains (create them as CNAME or A records in your DNS):
| Subdomain | Service | Internal port |
|---|---|---|
| `hub.BASE_DOMAIN` | Landing page (dashboard) | 8090 |
| `n8n.BASE_DOMAIN` | n8n | 5678 |
| `openwebui.BASE_DOMAIN` | Open WebUI | 8080 |
| `flowise.BASE_DOMAIN` | Flowise | 3001 |
| `langfuse.BASE_DOMAIN` | Langfuse | 3000 |
| `searxng.BASE_DOMAIN` | SearXNG | 8081 |
| `ollama.BASE_DOMAIN` | Ollama | 11434 |
| `qdrant.BASE_DOMAIN` | Qdrant | 6333 |
| `neo4j.BASE_DOMAIN` | Neo4j | 7474 |
| `unsloth.BASE_DOMAIN` | Unsloth | 8888 |
| `supabase.BASE_DOMAIN` | Supabase | 8000 |
Example: with `BASE_DOMAIN=home.example.com`, create a wildcard DNS record `*.home.example.com` pointing to your server's IP, or add each subdomain individually.
Override individual services:

```bash
N8N_HOSTNAME=custom-n8n.mydomain.com
```

Enabled by default in public mode. Auto-generates Let's Encrypt certificates.
If you already run SWAG on your server (Synology, Unraid, etc.), `start_services.py` detects it automatically and installs nginx proxy configs from the `swag/` directory.
```bash
python3 start_services.py --proxy swag
# or let it auto-detect:
python3 start_services.py --proxy auto
```

For heavy workflows, enable Redis-backed queue mode with separate worker containers:
```bash
# In .env:
N8N_EXECUTIONS_MODE=queue
```

Then start with the worker profile:

```bash
python3 start_services.py --profile cpu
docker compose -p localai --profile n8n-worker up -d
```

Each service uses its own PostgreSQL database to prevent schema collisions:
| Service | Database |
|---|---|
| n8n | n8n |
| Langfuse | langfuse |
| Flowise | flowise |
| Supabase | postgres |
Databases are auto-created on first start via `postgres/init/01-create-databases.sql`.
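As a sketch, such an init script typically consists of one `CREATE DATABASE` statement per service. The repo ships the actual file; the statements below are illustrative:

```sql
-- Scripts in postgres/init/ run only on the first start of an empty data volume
CREATE DATABASE n8n;
CREATE DATABASE langfuse;
CREATE DATABASE flowise;
```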
- Check that the `supabase/` folder was created automatically
- Delete it and rerun: `python3 start_services.py`
- Ensure `.env` contains `POOLER_DB_POOL_SIZE=5`
- Ensure the NVIDIA Container Toolkit or ROCm is installed correctly
- Fall back to CPU: `python3 start_services.py --profile cpu`
- Check what's using the port: `netstat -tlnp | grep <port>`
- Edit `docker-compose.override.private.yml` to change exposed ports
- The `unsloth/unsloth:latest` image is ~20 GB (CUDA + PyTorch); the first pull can take 10-30 minutes depending on your connection.
- You can pull it in the background: `docker pull unsloth/unsloth:latest`
- Deploy all other services first without Unsloth, then add it later with: `docker compose -p localai --profile gpu-nvidia -f docker-compose.yml -f docker-compose.override.private.yml up -d unsloth`
- All healthchecks use `127.0.0.1` (not `localhost`) to avoid IPv6 issues
- Check logs: `docker logs <container_name>`
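For reference, an IPv4-safe compose healthcheck looks roughly like this (service port and endpoint path are illustrative, not copied from the repo's compose file):

```yaml
healthcheck:
  test: ["CMD", "wget", "-q", "--spider", "http://127.0.0.1:5678/healthz"]
  interval: 30s
  timeout: 5s
  retries: 5
```

Pinning the address to `127.0.0.1` avoids the case where `localhost` resolves to `::1` inside the container while the service listens only on IPv4.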
```
.
├── docker-compose.yml                    # Main orchestration file
├── docker-compose.override.private.yml   # Localhost-only port bindings
├── docker-compose.override.public.yml    # LAN-accessible port bindings
├── start_services.py                     # Smart deployment launcher
├── generate_env.py                       # .env generator with GPU detection
├── update_services.sh                    # Container update helper
├── Caddyfile                             # Caddy reverse proxy config
├── .env.example                          # Environment template
├── n8n_pipe.py                           # Open WebUI → n8n integration pipe
├── postgres/
│   └── init/                             # Database init scripts
├── swag/                                 # SWAG proxy-conf templates
├── n8n/
│   └── backup/                           # Pre-built n8n workflows
├── flowise/                              # Flowise chatflows & custom tools
├── searxng/                              # SearXNG configuration
├── supabase/                             # Auto-cloned (gitignored)
├── shared/                               # Shared data volume (gitignored)
└── neo4j/                                # Neo4j data (gitignored)
```
- Landing page dashboard – Cyberpunk 2077-style home page with icons, descriptions, and dynamic links to each installed service (`hub.BASE_DOMAIN` / port 8090)
- Environment-based versioning – `prod` mode (pinned versions) vs `recette` mode (latest), with per-service granularity; allow `--env prod` / `--env recette` in `start_services.py`
- Supabase rework – Rework the Supabase integration for more reliability and modularity (clone, shared `.env`, service selection)
- Monitoring & alerting – Prometheus/Grafana integration to monitor service health
- Backup automation – Automatic backup script for Docker volumes and databases
- Multi-node support – Docker Swarm or Kubernetes support for multi-server deployment
- Selective service deployment (`--services`)
- Interactive setup wizard (`--setup`)
- SWAG reverse proxy auto-detection and config generation
- Global `BASE_DOMAIN` with per-service hostname derivation
- n8n v2 + Redis queue mode + worker profile
- Database isolation (separate PostgreSQL databases per service)
- Healthchecks on all services (IPv4-safe)
- Unified logging with rotation on all containers
- Dynamic `update_services.sh` with profile argument
- Redis authentication (`--requirepass`)
Licensed under the Apache 2.0 License. See LICENSE for details.
Built and maintained with ❤️ for the self-hosting community.