# 🧠 Local AI Packaged

> ⚠️ **Note:** This project is a maintained and extended fork of coleam00/local-ai-packaged, which no longer appears to be actively maintained. This version includes up-to-date dependencies, bug fixes, and full automation scripts for setup and management.


## 🌍 Overview

Local AI Packaged provides a full self-hosted AI environment using Docker Compose. It bundles everything you need for local LLM workflows, including:

- 🧠 Ollama – local LLM inference (CPU / NVIDIA / AMD)
- 💬 Open WebUI – chat interface for your models and n8n agents
- ⚙️ n8n – workflow automation with Redis queue-mode support
- 🌊 Flowise – low-code AI chain builder
- 🧱 Supabase – database, vector store, and auth system (optional)
- 📊 Langfuse – LLM tracing and observability
- 📦 Qdrant – vector database for RAG
- 🕸️ Neo4j – graph database
- 🔍 SearXNG – web search engine for RAG pipelines
- 🔧 Unsloth – LLM fine-tuning studio (NVIDIA GPU required, ~20 GB image)
- 🔒 Caddy or SWAG – reverse proxy with automatic HTTPS (optional)

This version is designed for technical self-hosters running the stack on a home server, NAS (e.g. Synology), or any Linux/Windows host.


βš™οΈ Prerequisites

  • 🐳 Docker and Docker Compose v2+
  • 🐍 Python 3.8+
  • πŸ’Ύ Git
  • πŸ’‘ At least 16 GB RAM recommended

GPU (optional): an NVIDIA or AMD GPU enables accelerated inference via the gpu-nvidia or gpu-amd profiles; without one, the stack runs on CPU.


## 🚀 Quick Start

① Clone the repository

```bash
git clone https://github.com/gamersalpha/local-ai-packaged.git
cd local-ai-packaged
```

② Generate the .env automatically

```bash
python3 generate_env.py --yes --regen-sensitive
```

To choose your base domain (each service's subdomain is derived from it automatically):

```bash
# Via the --domain option
python3 generate_env.py --yes --regen-sensitive --domain home.example.com

# Or in interactive mode (without --yes), the script prompts for the domain
python3 generate_env.py --regen-sensitive
```

Features:

- Generates secure random secrets
- Detects GPU type (NVIDIA / AMD / CPU)
- Sets correct local paths for volumes
- Configures BASE_DOMAIN interactively or via the --domain flag
- Expands BASE_DOMAIN into per-service hostnames automatically
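Secret generation along these lines can be done with Python's stdlib `secrets` module. This is a minimal sketch, not the actual generate_env.py code, and the variable names are assumptions:

```python
import secrets

def generate_secrets():
    """Return fresh random values suitable for a .env file (illustrative only)."""
    return {
        "N8N_ENCRYPTION_KEY": secrets.token_urlsafe(32),  # URL-safe, ~43 chars
        "POSTGRES_PASSWORD": secrets.token_urlsafe(24),
        "JWT_SECRET": secrets.token_hex(32),              # 32 bytes -> 64 hex chars
    }

env = generate_secrets()
print(len(env["JWT_SECRET"]))  # 64
```

Each call produces cryptographically strong, independent values, which is why rerunning with `--regen-sensitive` rotates every secret.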

③ Deploy the stack

Interactive setup wizard (recommended for first-time setup):

```bash
python3 start_services.py --setup
```

Or launch directly:

```bash
python3 start_services.py --profile cpu --no-supabase --no-caddy
```

## 🧩 Managing the Stack

### ▶️ start_services.py

The main orchestrator, with service selection, proxy detection, and an interactive wizard.

```bash
python3 start_services.py [options]
```

Options:

| Option | Description |
|---|---|
| `--setup` | Interactive setup wizard with service picker |
| `--profile [cpu\|gpu-nvidia\|gpu-amd]` | Select hardware profile |
| `--environment [private\|public]` | Network mode (localhost-only or LAN) |
| `--services [names...]` | Select specific services to deploy |
| `--proxy [caddy\|swag\|none\|auto]` | Reverse proxy type (auto-detects SWAG) |
| `--no-supabase` | Skip Supabase |
| `--no-caddy` | Skip the Caddy reverse proxy |
| `--update` | Pull the latest Docker images before starting |
| `--dry-run` | Preview the configuration only |

### Selectable Services

Use --services to deploy only what you need. Dependencies are resolved automatically (e.g. n8n will also start postgres and redis).

```bash
# Deploy only n8n, Open WebUI, and Ollama
python3 start_services.py --services n8n openwebui ollama

# Deploy everything (default)
python3 start_services.py --services all
```

Available: n8n, openwebui, flowise, qdrant, neo4j, langfuse, searxng, ollama, unsloth
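Dependency resolution of this kind is a transitive closure over a small service graph. A minimal sketch (the dependency map shown is hypothetical, not the one in start_services.py):

```python
# Hypothetical dependency map -- the real one lives in start_services.py.
DEPENDENCIES = {
    "n8n": ["postgres", "redis"],
    "langfuse": ["postgres"],
    "flowise": ["postgres"],
}

def resolve(requested):
    """Return the requested services plus all transitive dependencies."""
    resolved = set()
    stack = list(requested)
    while stack:
        svc = stack.pop()
        if svc in resolved:
            continue
        resolved.add(svc)
        stack.extend(DEPENDENCIES.get(svc, []))  # push deps for later visits
    return sorted(resolved)

print(resolve(["n8n"]))  # ['n8n', 'postgres', 'redis']
```

Asking for n8n alone therefore brings up its backing database and queue without you listing them explicitly.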

Examples:

```bash
# Interactive wizard
python3 start_services.py --setup

# CPU, all services, no reverse proxy
python3 start_services.py --profile cpu --no-supabase --no-caddy

# NVIDIA GPU, public network with SWAG proxy
python3 start_services.py --profile gpu-nvidia --environment public --proxy swag

# Dry run to preview what would be deployed
python3 start_services.py --services n8n openwebui ollama --dry-run

# Pull latest images and restart
python3 start_services.py --update
```

💡 What it does:

- Validates or creates your .env
- Clones Supabase's official Docker stack if it is missing
- Toggles Caddy and Supabase dynamically in docker-compose.yml
- Generates a new secret key for SearXNG
- Auto-detects a SWAG reverse proxy and installs nginx configs
- Resolves service dependencies automatically
- Stops existing containers before redeploying
- Starts Supabase first (if enabled), then the Local AI stack

### ♻️ Update all services

```bash
./update_services.sh [profile]
```

Examples:

```bash
./update_services.sh cpu
./update_services.sh gpu-nvidia
```

This stops all containers, pulls the latest images, and restarts the stack.


## 🌐 Access Your Services

| Service | Description | Default URL (private) |
|---|---|---|
| n8n | Workflow automation | http://localhost:5678 |
| Open WebUI | Chat interface for LLMs | http://localhost:8080 |
| Ollama | Local LLM API | http://localhost:11434 |
| Flowise | Low-code AI builder | http://localhost:3001 |
| Langfuse | LLM tracing dashboard | http://localhost:3000 |
| SearXNG | Web search for RAG | http://localhost:8081 |
| Neo4j | Graph database browser | http://localhost:7474 |
| Qdrant | Vector database API | http://localhost:6333 |
| PostgreSQL | Shared database | localhost:5433 |
| Unsloth | LLM fine-tuning studio (~20 GB image, long pull) | http://localhost:8888 |
| Supabase | DB, Auth & API (optional) | http://localhost:54323 |

In private mode, all ports are bound to 127.0.0.1. In public mode, they are accessible on all interfaces.
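The private/public distinction comes down to the bind address in each port mapping. A hypothetical excerpt of what the private override might contain (the exact contents of docker-compose.override.private.yml may differ):

```yaml
# docker-compose.override.private.yml (illustrative excerpt, not verbatim)
services:
  n8n:
    ports:
      - "127.0.0.1:5678:5678"   # reachable only from the host itself
  openwebui:
    ports:
      - "127.0.0.1:8080:8080"
```

The public override would presumably use `"5678:5678"` instead, binding to all interfaces.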


## 🌐 Domain Configuration

Set BASE_DOMAIN in your .env to auto-derive all service hostnames:

```bash
BASE_DOMAIN=home.example.com
```

This generates the following subdomains (create them as CNAME or A records in your DNS):

| Subdomain | Service | Internal port |
|---|---|---|
| hub.BASE_DOMAIN | Landing page (dashboard) | 8090 |
| n8n.BASE_DOMAIN | n8n | 5678 |
| openwebui.BASE_DOMAIN | Open WebUI | 8080 |
| flowise.BASE_DOMAIN | Flowise | 3001 |
| langfuse.BASE_DOMAIN | Langfuse | 3000 |
| searxng.BASE_DOMAIN | SearXNG | 8081 |
| ollama.BASE_DOMAIN | Ollama | 11434 |
| qdrant.BASE_DOMAIN | Qdrant | 6333 |
| neo4j.BASE_DOMAIN | Neo4j | 7474 |
| unsloth.BASE_DOMAIN | Unsloth | 8888 |
| supabase.BASE_DOMAIN | Supabase | 8000 |

Example: with BASE_DOMAIN=home.example.com, create a wildcard DNS record *.home.example.com pointing to your server's IP, or add each subdomain individually.

Override individual services:

```bash
N8N_HOSTNAME=custom-n8n.mydomain.com
```
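The BASE_DOMAIN expansion with per-service overrides can be sketched like this (variable names and the service list are assumptions; the real logic lives in generate_env.py):

```python
# Hypothetical service list -- generate_env.py keeps its own.
SERVICES = ["n8n", "openwebui", "flowise", "langfuse", "searxng",
            "ollama", "qdrant", "neo4j", "unsloth", "supabase"]

def derive_hostnames(base_domain, overrides=None):
    """Map each service to <service>.<base_domain>, unless overridden."""
    overrides = overrides or {}
    return {
        f"{svc.upper()}_HOSTNAME": overrides.get(svc, f"{svc}.{base_domain}")
        for svc in SERVICES
    }

hosts = derive_hostnames("home.example.com",
                         overrides={"n8n": "custom-n8n.mydomain.com"})
print(hosts["OPENWEBUI_HOSTNAME"])  # openwebui.home.example.com
print(hosts["N8N_HOSTNAME"])        # custom-n8n.mydomain.com
```

An explicit `N8N_HOSTNAME` in .env thus wins over the derived `n8n.home.example.com`.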

## 🔒 Reverse Proxy

### Caddy (built-in)

Enabled by default in public mode. Automatically obtains Let's Encrypt certificates.

### SWAG (auto-detected)

If you already run SWAG on your server (Synology, Unraid, etc.), start_services.py detects it automatically and installs nginx proxy configs from the swag/ directory.

```bash
python3 start_services.py --proxy swag
# or let it auto-detect:
python3 start_services.py --proxy auto
```

## ⚡ n8n Queue Mode

For heavy workflows, enable Redis-backed queue mode with separate worker containers:

```bash
# In .env:
N8N_EXECUTIONS_MODE=queue
```

Then start with the worker profile:

```bash
python3 start_services.py --profile cpu
docker compose -p localai --profile n8n-worker up -d
```
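The n8n-worker profile presumably gates an extra worker container in docker-compose.yml. A hypothetical excerpt (service name, image tag, and command are assumptions, not the project's actual definitions):

```yaml
# Illustrative excerpt -- a worker gated behind a compose profile
services:
  n8n-worker:
    image: n8nio/n8n
    command: worker            # n8n worker process pulls jobs from the Redis queue
    profiles: ["n8n-worker"]   # started only when --profile n8n-worker is passed
```

With `profiles` set, `docker compose up` skips the worker unless the profile is explicitly enabled, which matches the two-step launch above.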

πŸ—„οΈ Database Isolation

Each service uses its own PostgreSQL database to prevent schema collisions:

Service Database
n8n n8n
Langfuse langfuse
Flowise flowise
Supabase postgres

Databases are auto-created on first start via postgres/init/01-create-databases.sql.
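A minimal sketch of what such an init script might contain (the shipped script may guard against existing databases and assign owners; plain CREATE DATABASE is shown only for illustration):

```sql
-- Illustrative sketch of postgres/init/01-create-databases.sql (not verbatim).
-- Scripts in /docker-entrypoint-initdb.d/ run once, on first container start
-- with an empty data volume.
CREATE DATABASE n8n;
CREATE DATABASE langfuse;
CREATE DATABASE flowise;
```

Because the official postgres image only runs init scripts on an empty data directory, adding a database later requires creating it manually or recreating the volume.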


πŸ› οΈ Troubleshooting

Supabase fails to start

  • Check that the folder supabase/ was created automatically
  • Delete it and rerun: python3 start_services.py
  • Ensure .env contains POOLER_DB_POOL_SIZE=5

GPU not detected

  • Ensure the NVIDIA Container Toolkit or ROCm is installed correctly
  • Fallback to CPU: python3 start_services.py --profile cpu

Ports already in use

  • Check what's using the port: netstat -tlnp | grep <port>
  • Edit docker-compose.override.private.yml to change exposed ports

Unsloth image takes forever to pull

  • The unsloth/unsloth:latest image is ~20 GB (CUDA + PyTorch). First pull can take 10-30 minutes depending on your connection.
  • You can pull it in the background: docker pull unsloth/unsloth:latest
  • Deploy all other services first without Unsloth, then add it later with: docker compose -p localai --profile gpu-nvidia -f docker-compose.yml -f docker-compose.override.private.yml up -d unsloth

Container healthcheck failing

  • All healthchecks use 127.0.0.1 (not localhost) to avoid IPv6 issues
  • Check logs: docker logs <container_name>
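A healthcheck pinned to IPv4 looks roughly like this in compose terms (service name, endpoint, and timings are assumptions, not the project's actual config):

```yaml
# Illustrative healthcheck: "localhost" can resolve to ::1 inside some
# containers, so the probe targets 127.0.0.1 explicitly.
services:
  n8n:
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:5678/ || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 5
```

If a service listens only on the IPv4 socket, a `localhost` probe resolving to `::1` fails every time even though the service is healthy; the explicit address avoids that.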

## 🧾 Project Structure

```
.
├── docker-compose.yml                  # Main orchestration file
├── docker-compose.override.private.yml # Localhost-only port bindings
├── docker-compose.override.public.yml  # LAN-accessible port bindings
├── start_services.py                   # Smart deployment launcher
├── generate_env.py                     # .env generator with GPU detection
├── update_services.sh                  # Container update helper
├── Caddyfile                           # Caddy reverse proxy config
├── .env.example                        # Environment template
├── n8n_pipe.py                         # Open WebUI → n8n integration pipe
├── postgres/
│   └── init/                           # Database init scripts
├── swag/                               # SWAG proxy-conf templates
├── n8n/
│   └── backup/                         # Pre-built n8n workflows
├── flowise/                            # Flowise chatflows & custom tools
├── searxng/                            # SearXNG configuration
├── supabase/                           # Auto-cloned (gitignored)
├── shared/                             # Shared data volume (gitignored)
└── neo4j/                              # Neo4j data (gitignored)
```

## 📋 Roadmap / TODO

### 🔜 Planned Features

- Landing page dashboard – a Cyberpunk 2077-style home page with icons, descriptions, and dynamic links to each installed service (hub.BASE_DOMAIN / port 8090)
- Environment-based versioning – prod mode (pinned versions) vs staging (latest) with per-service granularity; allow --env prod / --env recette in start_services.py
- Supabase rework – rebuild the Supabase integration for better reliability and modularity (clone, shared .env, service selection)
- Monitoring & alerting – Prometheus/Grafana integration to monitor service health
- Backup automation – automatic backup script for Docker volumes and databases
- Multi-node support – Docker Swarm or Kubernetes support for multi-server deployment

### ✅ Recently Completed

- Selective service deployment (--services)
- Interactive setup wizard (--setup)
- SWAG reverse proxy auto-detection and config generation
- Global BASE_DOMAIN with per-service hostname derivation
- n8n v2 + Redis queue mode + worker profile
- Database isolation (separate PostgreSQL databases per service)
- Healthchecks on all services (IPv4-safe)
- Unified logging with rotation on all containers
- Dynamic update_services.sh with a profile argument
- Redis authentication (--requirepass)

## 📜 License

Licensed under the Apache 2.0 License. See LICENSE for details.


Built and maintained with ❤️ for the self-hosting community.
