
Commit bed78f1

Merge pull request #44 from donvito/feature/docker-guide

feat: update Docker configuration and README for development environment

2 parents: 5a46ca3 + d205312

File tree

2 files changed: +28 −6 lines changed


README.md

Lines changed: 20 additions & 3 deletions
````diff
@@ -85,20 +85,37 @@ OLLAMA_BASE_URL=http://host.docker.internal:11434
 If deploying to production, set this in your .env file:
 ```env
 NODE_ENV=production
+DEFAULT_ACCESS_TOKEN=your-secret-api-key
 OPENAI_API_KEY=your-openai-api-key
 ANTHROPIC_API_KEY=your-anthropic-api-key
 OPENROUTER_API_KEY=your-openrouter-api-key
 ```
+You need to configure at least one provider API key; otherwise, the app will not start.
 
-### Using Docker Compose (experimental)
+### Using Docker Compose
 This will run the AI Backends API server and Ollama containers using Docker
 - Ensure you have a .env configured as described in "Set up environment variables" below. You must set DEFAULT_ACCESS_TOKEN and at least one provider credential (or enable a local provider such as Ollama).
 - Start all services:
 ```bash
 docker compose --env-file .env up -d --build
 ```
 
-- Useful commands:
+### Adding more models to Ollama container
+To add more models, you can edit the ollama service command in docker-compose.yml.
+
+
+For example, to add the gemma3:4b, llama3.2:latest and llama3.2-vision:11b models, add the following to the ollama service command:
+```yml
+command: -c "ollama serve & sleep 5 && ollama pull gemma3:270m && ollama pull gemma3:4b && ollama pull llama3.2:latest && ollama pull llama3.2-vision:11b && wait"
+```
+You might need to adjust the healthcheck timeout to give the models enough time to be pulled.
+
+```yml
+healthcheck:
+  timeout: 120s # increase this if you're adding more models
+```
+
+Useful commands:
 - View logs: docker compose logs -f app
 - Stop/remove: docker compose down
 
@@ -183,7 +200,7 @@ curl --location 'http://localhost:3000/api/v1/summarize' \
   },
   "config": {
     "provider": "ollama",
-    "model": "gemma3:4b",
+    "model": "gemma3:270m",
     "temperature": 0
   }
 }'
````
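The README change above requires DEFAULT_ACCESS_TOKEN plus at least one provider API key before the app will start. A minimal pre-flight sketch of that check, assuming a POSIX shell; the file path and placeholder values are hypothetical, while the variable names come from the .env example above:

```shell
# Write a sample .env like the one in the README (placeholder values only).
cat > /tmp/sample.env <<'EOF'
NODE_ENV=production
DEFAULT_ACCESS_TOKEN=your-secret-api-key
OPENAI_API_KEY=your-openai-api-key
EOF

# Export everything the file defines, then check the two conditions the
# README states: DEFAULT_ACCESS_TOKEN must be set, and at least one
# provider key must be non-empty.
set -a
. /tmp/sample.env
set +a

[ -n "$DEFAULT_ACCESS_TOKEN" ] || { echo "missing DEFAULT_ACCESS_TOKEN" >&2; exit 1; }

if [ -n "${OPENAI_API_KEY}${ANTHROPIC_API_KEY}${OPENROUTER_API_KEY}" ]; then
  echo "env ok"    # prints "env ok" for the sample file above
else
  echo "set at least one provider API key (or enable a local provider such as Ollama)" >&2
  exit 1
fi
```

Running a check like this before `docker compose --env-file .env up` turns a silent startup failure into an immediate, readable error.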

docker-compose.yml

Lines changed: 8 additions & 3 deletions
```diff
@@ -6,8 +6,8 @@ services:
     ports:
       - "3000:3000"
     environment:
-      - OLLAMA_URL=http://ollama:11434/v1/
-      - NODE_ENV=production
+      - OLLAMA_BASE_URL=http://ollama:11434
+      - NODE_ENV=development
       - OLLAMA_ENABLED=true
     depends_on:
       ollama:
@@ -25,7 +25,12 @@ services:
     networks:
       - ollama-net
     entrypoint: /bin/sh
-    command: -c "ollama serve & sleep 5 && ollama pull gemma3:4b && wait"
+    healthcheck:
+      test: ["CMD", "ollama", "list"]
+      interval: 30s
+      timeout: 120s
+      retries: 3
+    command: -c "ollama serve & sleep 5 && ollama pull gemma3:270m && wait"
 
 volumes:
   ollama-data:
```
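Taken together, the ollama service after this commit looks roughly like the sketch below, assembled from the hunks above. Fields not visible in the diff (such as the image and volume mounts) are omitted, and the exact indentation and key ordering are assumptions:

```yml
ollama:
  # ...image, ports, and volume fields not shown in this diff...
  networks:
    - ollama-net
  entrypoint: /bin/sh
  # The healthcheck passes once the Ollama CLI can reach the server.
  healthcheck:
    test: ["CMD", "ollama", "list"]
    interval: 30s
    timeout: 120s   # the README suggests raising this when pulling more models
    retries: 3
  # Start the server, wait briefly, pull the default model, then keep serving.
  command: -c "ollama serve & sleep 5 && ollama pull gemma3:270m && wait"
```

Note that each `ollama pull` added to `command` lengthens startup, which is why the README pairs the model list with the healthcheck timing.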
