feat(ai): Add AI stack with Ollama, Open WebUI, Stable Diffusion #335

Open
ljapptest-art wants to merge 1 commit into illbnm:master from ljapptest-art:feature/ai-stack

Conversation

@ljapptest-art

Implements Issue #6 - AI Stack.

Services (3 total)

  • Ollama 0.3.12 (LLM inference)
  • Open WebUI 0.3.32 (chat interface)
  • Stable Diffusion WebUI 1.10.2 (image generation)

Features

  • GPU support (NVIDIA), commented out; enable when a GPU is available
  • Health checks for all services
  • API endpoints for integration
  • Disabled public signup

Requirements

  • GPU recommended
  • 16GB+ RAM
  • 50GB+ disk for models

Validation

  • ✅ YAML syntax verified
  • ✅ Image versions match Issue requirements
  • ✅ 3 health checks configured
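As an illustration of the image-version check above, a small script of this shape could verify the pinned tags against the Issue #6 requirements (the embedded compose snippet is a stand-in; the full `stacks/ai/docker-compose.yml` is not shown here):

```python
# Verify that pinned image tags in a compose file match the versions
# required by Issue #6. The compose text below is a placeholder for
# stacks/ai/docker-compose.yml, whose full contents are not in this PR body.

REQUIRED_IMAGES = {
    "ollama/ollama:0.3.12",
    "ghcr.io/open-webui/open-webui:0.3.32",
    "universalis/local-server-ai:stable-diffusion-webui-1.10.2",
}

compose_text = """\
services:
  ollama:
    image: ollama/ollama:0.3.12
  open-webui:
    image: ghcr.io/open-webui/open-webui:0.3.32
  stable-diffusion:
    image: universalis/local-server-ai:stable-diffusion-webui-1.10.2
"""

def pinned_images(text: str) -> set[str]:
    """Collect every 'image:' value from the compose text."""
    return {
        line.split("image:", 1)[1].strip()
        for line in text.splitlines()
        if line.strip().startswith("image:")
    }

missing = REQUIRED_IMAGES - pinned_images(compose_text)
print("OK" if not missing else f"missing: {missing}")  # prints "OK"
```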

Closes #6

- Docker Compose with exact versions per Issue illbnm#6:
  - ollama/ollama:0.3.12
  - ghcr.io/open-webui/open-webui:0.3.32
  - universalis/local-server-ai:stable-diffusion-webui-1.10.2

- Services:
  - Ollama: Local LLM inference (llama, mistral, codellama)
  - Open WebUI: Chat interface with RAG, voice input
  - Stable Diffusion WebUI: Image generation

- Features:
  - GPU support (NVIDIA), commented out; enable when a GPU is available
  - Health checks for all services
  - Traefik reverse proxy with HTTPS
  - API endpoints for integration
  - Open WebUI connects to Ollama
  - Disabled public signup by default

- Requirements:
  - GPU recommended for performance
  - 16GB+ RAM (32GB+ recommended)
  - 50GB+ disk for models
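A minimal sketch of how the three services could be wired together. Service names, ports, healthcheck commands, and volume names here are illustrative assumptions, not necessarily those in `stacks/ai/docker-compose.yml` (the `OLLAMA_BASE_URL` and `ENABLE_SIGNUP` variables are real Open WebUI settings; 11434 is Ollama's default port):

```yaml
services:
  ollama:
    image: ollama/ollama:0.3.12
    volumes:
      - ollama-models:/root/.ollama
    healthcheck:
      test: ["CMD", "ollama", "list"]   # assumed healthcheck command
      interval: 30s
      timeout: 10s
      retries: 3
    # GPU support (NVIDIA) - uncomment when a GPU is available:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:0.3.32
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # WebUI -> Ollama connection
      - ENABLE_SIGNUP=false                  # public signup disabled
    depends_on:
      - ollama

  stable-diffusion:
    image: universalis/local-server-ai:stable-diffusion-webui-1.10.2
    volumes:
      - sd-models:/app/models              # assumed model path

volumes:
  ollama-models:
  sd-models:
```

Routing the three web UIs through Traefik with HTTPS would then be a matter of per-service labels on each container.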

Closes illbnm#6
@ljapptest-art
Author

✅ Test Results

Validation

| Test | Status |
| --- | --- |
| YAML syntax | ✅ Pass |
| Health checks | ✅ 3 services |
| GPU config | ✅ (commented) |

Image Versions (per Issue #6)

| Service | Required | Actual | Status |
| --- | --- | --- | --- |
| Ollama | `ollama/ollama:0.3.12` | `ollama/ollama:0.3.12` | ✅ |
| Open WebUI | `ghcr.io/open-webui/open-webui:0.3.32` | `ghcr.io/open-webui/open-webui:0.3.32` | ✅ |
| Stable Diffusion | `universalis/local-server-ai:stable-diffusion-webui-1.10.2` | `universalis/local-server-ai:stable-diffusion-webui-1.10.2` | ✅ |

Acceptance Criteria

| Criteria | Status |
| --- | --- |
| 3 services configured | ✅ |
| Health checks | ✅ (3 services) |
| GPU support | ✅ (NVIDIA config included) |
| API endpoints | ✅ |
| WebUI connection | ✅ |

Files

  • stacks/ai/docker-compose.yml (189 lines)
  • stacks/ai/.env.example (19 lines)
  • stacks/ai/README.md (284 lines)
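The contents of `stacks/ai/.env.example` are not shown in this PR; a sketch of what such a file might contain (variable names for Open WebUI and Ollama are real; the values and the Stable Diffusion port variable are illustrative assumptions):

```env
# stacks/ai/.env.example - illustrative sketch, not the actual file
OLLAMA_BASE_URL=http://ollama:11434   # Open WebUI -> Ollama API endpoint
WEBUI_SECRET_KEY=change-me            # session signing key for Open WebUI
ENABLE_SIGNUP=false                   # public signup disabled by default
SD_WEBUI_PORT=7860                    # default Stable Diffusion WebUI port
```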



Development

Successfully merging this pull request may close these issues.

[BOUNTY $220] AI Stack — Ollama + Open WebUI + Stable Diffusion