If you are deploying to production, set these variables in your .env file:
```env
NODE_ENV=production
DEFAULT_ACCESS_TOKEN=your-secret-api-key
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
OPENROUTER_API_KEY=your-openrouter-api-key
```
You need to configure at least one provider API key; otherwise, the app will not start.
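For example, a minimal production .env that enables only the OpenAI provider would look like this:

```env
NODE_ENV=production
DEFAULT_ACCESS_TOKEN=your-secret-api-key
OPENAI_API_KEY=your-openai-api-key
```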
### Using Docker Compose
This will run the AI Backends API server and Ollama containers using Docker Compose.
- Ensure you have a .env configured as described in "Set up environment variables" below. You must set DEFAULT_ACCESS_TOKEN and at least one provider credential (or enable a local provider such as Ollama).
- Start all services:
```bash
docker compose --env-file .env up -d --build
```
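Once the stack is up, standard Docker Compose subcommands can be used to check on it or shut it down, for example:

```bash
docker compose ps        # list the running services and their status
docker compose logs -f   # follow logs from all containers
docker compose down      # stop and remove the containers
```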
### Adding more models to the Ollama container
To add more models, you can edit the ollama service command in docker-compose.yml.
For example, to add the gemma3:4b, llama3.2:latest, and llama3.2-vision:11b models, you can add the following to the ollama service command:
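The exact service definition depends on your docker-compose.yml; as a sketch, assuming the ollama service is given a shell entrypoint (the entrypoint override and the sleep delay below are assumptions, not the project's verbatim config), it could pull the models on startup like this:

```yaml
ollama:
  image: ollama/ollama
  # Assumed override: run a small shell script instead of the default entrypoint.
  entrypoint: ["/bin/sh", "-c"]
  command:
    - |
      # Start the server in the background, give it a moment to come up,
      # pull each model, then keep the container attached to the server.
      ollama serve &
      sleep 5
      ollama pull gemma3:4b
      ollama pull llama3.2:latest
      ollama pull llama3.2-vision:11b
      wait
```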