cctvQL ships with a Dockerfile and a Docker Compose stack that bundles the application with a local Ollama instance.
```bash
# 1. Copy and edit the config
cp config/example.yaml config/config.yaml

# 2. Start services
docker compose up -d
```

This brings up two containers:
| Service | Port  | Description       |
|---------|-------|-------------------|
| cctvql  | 8000  | REST API server   |
| ollama  | 11434 | Local LLM backend |
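The two services are wired together in `docker-compose.yml`. As a rough sketch of how such a stack fits together (the image tag, volume names, and `depends_on` wiring here are illustrative assumptions, not the shipped file):

```yaml
services:
  cctvql:
    build: .                      # or: image: cctvql:<tag>
    ports:
      - "8000:8000"
    volumes:
      - ./config/config.yaml:/app/config/config.yaml:ro
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:0.3.12   # pin a tag rather than latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama # persist downloaded model weights

volumes:
  ollama-data:
```

Once the stack is up, `curl http://localhost:11434/api/tags` should return the models available in the Ollama container.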
Mount your config file as a read-only volume:
```yaml
volumes:
  - ./config/config.yaml:/app/config/config.yaml:ro
```

For cloud LLM backends, set API keys via environment variables in `docker-compose.yml`:
```yaml
environment:
  OPENAI_API_KEY: "sk-..."
  # or
  ANTHROPIC_API_KEY: "sk-ant-..."
```

To enable GPU acceleration for Ollama, uncomment the `deploy` section in `docker-compose.yml`:
```yaml
ollama:
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities: [gpu]
```

You can also build and run the image directly, without Compose:

```bash
# Build only
docker build -t cctvql .

# Run standalone (without Ollama)
docker run -p 8000:8000 \
  -v "$(pwd)/config/config.yaml":/app/config/config.yaml:ro \
  cctvql
```

- Pin the Ollama image tag instead of using `latest` for reproducible builds.
- Use Docker secrets or an external secret manager for API keys instead of plain-text environment variables.
- Add a reverse proxy (Nginx, Traefik) in front of cctvQL for TLS termination.
- Set resource limits on containers to prevent runaway memory usage.
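The secrets and resource-limit recommendations can be sketched in Compose terms. Treat this as an illustrative fragment under assumptions: the `openai_api_key` secret name and the memory values are examples, and the application would need to read the key from the secret file rather than an environment variable.

```yaml
services:
  cctvql:
    secrets:
      - openai_api_key        # mounted at /run/secrets/openai_api_key
    deploy:
      resources:
        limits:
          memory: 1g

  ollama:
    deploy:
      resources:
        limits:
          memory: 8g          # LLM weights are memory-hungry; size to your model

secrets:
  openai_api_key:
    file: ./secrets/openai_api_key.txt
```

Docker Compose v2 honors `deploy.resources.limits` outside Swarm mode, so the same file works for both `docker compose up` and Swarm deployments.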