Commit 22a76b1 (parent: de96aaf)
Author: Android-16

feat: add AnythingLLM extension for Wave 2 (Light-Heart-Labs#12) - RAG document chat

3 files changed, 188 additions & 0 deletions

Lines changed: 96 additions & 0 deletions
# AnythingLLM Extension

All-in-one AI productivity tool with RAG for Dream Server.

## What It Is

AnythingLLM lets you chat with your documents using AI:

- Upload PDFs, Word docs, text files, code
- Automatic chunking and embedding
- Built-in vector database (LanceDB)
- Multiple LLM provider support
- Fully local, privacy-first

## Features

- **Document chat**: Upload and chat with any document
- **Multi-LLM**: Use Ollama, OpenAI, Anthropic, or local models
- **Built-in embeddings**: Automatic document vectorization
- **Workspaces**: Organize documents into projects
- **Agent support**: Automated workflows and tasks
- **Web browsing**: Optional web search integration
- **Multi-user**: Built-in authentication

## Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `ANYTHINGLLM_PORT` | External port | `3001` |
| `ANYTHINGLLM_JWT_SECRET` | JWT signing secret | insecure placeholder; set 32+ random chars |
| `ANYTHINGLLM_LLM_PROVIDER` | LLM backend | `ollama` |
| `OLLAMA_BASE_PATH` | Ollama API URL | `http://ollama:11434` |
| `OLLAMA_MODEL_PREF` | Default model | `llama3.2` |
| `ANYTHINGLLM_EMBEDDING_ENGINE` | Embedding provider | `ollama` |
| `EMBEDDING_MODEL_PREF` | Embedding model | `nomic-embed-text:latest` |
| `ANYTHINGLLM_VECTOR_DB` | Vector database | `lancedb` |
### LLM Providers

Set `ANYTHINGLLM_LLM_PROVIDER` to one of:

- `ollama` - Local models via Ollama
- `openai` - OpenAI API
- `anthropic` - Claude API
- `azure` - Azure OpenAI
- `localai` - LocalAI endpoint
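For example, a minimal `.env` fragment selecting the default Ollama backend — a sketch using the variable names from the table above, with the documented default values:

```bash
# Hypothetical .env fragment for the AnythingLLM extension.
# Values mirror the documented defaults; change
# ANYTHINGLLM_LLM_PROVIDER to switch backends.
ANYTHINGLLM_PORT=3001
ANYTHINGLLM_LLM_PROVIDER=ollama
OLLAMA_BASE_PATH=http://ollama:11434
OLLAMA_MODEL_PREF=llama3.2
ANYTHINGLLM_EMBEDDING_ENGINE=ollama
EMBEDDING_MODEL_PREF=nomic-embed-text:latest
ANYTHINGLLM_VECTOR_DB=lancedb
```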
## Usage

```bash
# Enable the extension
dream extensions enable anythingllm

# Start the service
docker compose up -d anythingllm

# Access at http://localhost:3001
# First run: Create admin account
```
## Setup Steps

1. **Enable**: `dream extensions enable anythingllm`
2. **Start**: `docker compose up -d anythingllm`
3. **Open**: Visit http://localhost:3001
4. **Create workspace**: Click "New Workspace"
5. **Upload documents**: Drag & drop files
6. **Chat**: Ask questions about your documents
## Data Persistence

All data is stored in:

- `./data/anythingllm/` - Documents, embeddings, settings
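Because everything lives under one directory, a plain archive is enough for backups — a sketch, assuming the service is stopped first and the `anythingllm-backup-*` file name is just a convention:

```bash
# Back up the AnythingLLM data directory (documents, embeddings, settings).
# mkdir -p makes the sketch safe to run even before the first startup.
BACKUP="anythingllm-backup-$(date +%Y%m%d).tar.gz"
mkdir -p ./data/anythingllm
tar czf "$BACKUP" ./data/anythingllm
echo "wrote $BACKUP"
```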
## Integration with Dream Server

By default, AnythingLLM uses Dream Server's Ollama extension:

- Set `OLLAMA_BASE_PATH=http://ollama:11434`
- Models are auto-detected from Ollama

To use llama-server instead:

1. Set `ANYTHINGLLM_LLM_PROVIDER=openai`
2. Set the custom endpoint in the UI to `${LLM_API_URL}`
## Security Note

⚠️ **Change the JWT secret before production use:**

```bash
# In your .env
ANYTHINGLLM_JWT_SECRET=your-64-character-random-string-here-please-change-me
```
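One way to generate a suitably random value (assuming `openssl` is installed):

```bash
# Emit a 64-character hex string suitable for ANYTHINGLLM_JWT_SECRET
openssl rand -hex 32
```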
## Resources

- [AnythingLLM Docs](https://docs.anythingllm.com/)
- [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm)
Lines changed: 49 additions & 0 deletions
services:
  anythingllm:
    image: mintplexlabs/anythingllm@sha256:8a1b5bfe6299a0c9481b3187eb84d1ab7830d578c056f4d6b6a84e0a5e75e585
    container_name: dream-anythingllm
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    cap_add:
      - SYS_ADMIN
    environment:
      - STORAGE_DIR=/app/server/storage
      - JWT_SECRET=${ANYTHINGLLM_JWT_SECRET:-change-me-to-a-random-string-32-chars-min}
      - LLM_PROVIDER=${ANYTHINGLLM_LLM_PROVIDER:-ollama}
      - OLLAMA_BASE_PATH=${OLLAMA_BASE_PATH:-http://ollama:11434}
      - OLLAMA_MODEL_PREF=${OLLAMA_MODEL_PREF:-llama3.2}
      - OLLAMA_MODEL_TOKEN_LIMIT=${OLLAMA_MODEL_TOKEN_LIMIT:-4096}
      - EMBEDDING_ENGINE=${ANYTHINGLLM_EMBEDDING_ENGINE:-ollama}
      - EMBEDDING_BASE_PATH=${EMBEDDING_BASE_PATH:-http://ollama:11434}
      - EMBEDDING_MODEL_PREF=${EMBEDDING_MODEL_PREF:-nomic-embed-text:latest}
      - EMBEDDING_MODEL_MAX_CHUNK_LENGTH=${EMBEDDING_MODEL_MAX_CHUNK_LENGTH:-8192}
      - VECTOR_DB=${ANYTHINGLLM_VECTOR_DB:-lancedb}
      - WHISPER_PROVIDER=${ANYTHINGLLM_WHISPER_PROVIDER:-local}
      - TTS_PROVIDER=${ANYTHINGLLM_TTS_PROVIDER:-native}
      - PASSWORDMINCHAR=${ANYTHINGLLM_PASSWORD_MIN:-8}
      - AUTH_TOKEN=${ANYTHINGLLM_AUTH_TOKEN:-}
    volumes:
      - ./data/anythingllm:/app/server/storage:rw
    ports:
      - "${ANYTHINGLLM_PORT:-3001}:3001"
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:3001/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - dream-network
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '0.5'
          memory: 1G

networks:
  dream-network:
    external: true
Lines changed: 43 additions & 0 deletions
schema_version: dream.services.v1

service:
  id: anythingllm
  name: AnythingLLM
  aliases: [anything, allm]
  container_name: dream-anythingllm
  host_env: ANYTHINGLLM_HOST
  default_host: anythingllm
  port: 3001
  external_port_env: ANYTHINGLLM_PORT
  external_port_default: 3001
  health: /api/health
  type: docker
  gpu_backends: [nvidia, amd]
  category: optional
  depends_on: [ollama]
  description: |
    All-in-one AI productivity tool for RAG chat with documents.
    Built-in vector database, supports multiple LLM providers,
    and runs entirely on-device for privacy.

features:
  rag:
    description: RAG chat with uploaded documents
    vram_required_mb: 0
  chat:
    description: Chat interface with multiple LLM providers
    vram_required_mb: 0
  embed:
    description: Built-in embedding and vector search
    vram_required_mb: 0
  agents:
    description: AI agents for automated workflows
    vram_required_mb: 2048

tags:
  - chat
  - rag
  - documents
  - vector-db
  - privacy
  - local
