This directory contains Docker configurations for running PraisonAI services in containerized environments. The setup addresses directory management issues and provides comprehensive multi-service deployment options.
- **UI Service** (port 8082) - Chainlit-based web interface
- **Chat Service** (port 8083) - Dedicated chat interface
- **API Service** (port 8080) - REST API endpoint
- **Agents Service** - Standalone PraisonAI Agents runtime
- `Dockerfile` - Basic API service
- `Dockerfile.ui` - UI service with web interface
- `Dockerfile.chat` - Chat-focused service
- `Dockerfile.dev` - Development environment with tools
- `Dockerfile.praisonaiagents` - Standalone agents framework
- `docker-compose.yml` - Multi-service orchestration
```bash
# Run UI service
docker run -p 8082:8082 -e OPENAI_API_KEY=your_key ghcr.io/mervinpraison/praisonai:ui

# Run Chat service
docker run -p 8083:8083 -e OPENAI_API_KEY=your_key ghcr.io/mervinpraison/praisonai:chat

# Run API service
docker run -p 8080:8080 -e OPENAI_API_KEY=your_key ghcr.io/mervinpraison/praisonai:api
```

```bash
# Create environment file
cat > .env << EOF
OPENAI_API_KEY=your_openai_api_key_here
CHAINLIT_AUTH_SECRET=your_secret_here
EOF

# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down
```

The original issue was that files like `chainlit.md`, the `.chainlit` directory, and the `public` folder were cluttering the root directory.
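Since the `.env` file holds the API key and auth secret, it is worth creating it with restricted permissions. A minimal sketch (the `umask` value is one common choice, not something the shipped setup mandates):

```shell
# Restrict permissions before writing secrets: umask 177 makes new files 0600
umask 177
cat > .env << 'EOF'
OPENAI_API_KEY=your_openai_api_key_here
CHAINLIT_AUTH_SECRET=your_secret_here
EOF
ls -l .env   # on Linux this shows -rw------- (owner read/write only)
```

Setting the umask before the heredoc runs means the file is never readable by other users, even briefly.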
All PraisonAI configuration and runtime files are now stored in `~/.praison/`:
```
~/.praison/
├── database.sqlite   # Chainlit database
├── chainlit.md       # Chainlit configuration
├── .chainlit/        # Chainlit runtime files
└── config/           # PraisonAI configuration
```

The containers point at this directory through environment variables:

```bash
PRAISON_CONFIG_DIR=/root/.praison    # Main config directory
CHAINLIT_CONFIG_DIR=/root/.praison   # Chainlit config location
CHAINLIT_DB_DIR=/root/.praison       # Database location
```

UI service (port 8082):

```yaml
environment:
  - CHAINLIT_PORT=8082
  - CHAINLIT_HOST=0.0.0.0
  - OPENAI_API_KEY=${OPENAI_API_KEY}
  - CHAINLIT_AUTH_SECRET=${CHAINLIT_AUTH_SECRET}
```

Chat service (port 8083):

```yaml
environment:
  - CHAINLIT_PORT=8083
  - CHAINLIT_HOST=0.0.0.0
  - OPENAI_API_KEY=${OPENAI_API_KEY}
```

API service (port 8080):

```yaml
environment:
  - OPENAI_API_KEY=${OPENAI_API_KEY}
```

| Service | Port | Endpoint | Description |
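Before the first run, it can help to pre-create this host-side layout, so the directory is owned by your user rather than created by the container (a minimal sketch; the paths come from the tree above, and docker-compose would otherwise create them on first start, possibly root-owned):

```shell
# Pre-create the host-side ~/.praison layout before bind-mounting it
mkdir -p "$HOME/.praison/.chainlit" "$HOME/.praison/config"
touch "$HOME/.praison/chainlit.md"
ls -a "$HOME/.praison"
```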
|---|---|---|---|
| UI | 8082 | http://localhost:8082 | Web interface |
| Chat | 8083 | http://localhost:8083 | Chat interface |
| API | 8080 | http://localhost:8080 | REST API |
| API Health | 8080 | http://localhost:8080/health | Health check |
All services include health checks:

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:PORT"]
  interval: 10s
  timeout: 5s
  retries: 3
```

All Docker images use consistent, up-to-date versions:

- PraisonAI: `>=2.2.27`
- PraisonAI Agents: `>=0.0.92`
- Python: `3.11-slim`
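The same poll-with-retries behaviour can also be scripted with curl outside Compose, for example to wait for a service from a deploy script (a sketch; `wait_healthy`, its arguments, and the example URL are assumptions, not part of the shipped setup):

```shell
# Poll a URL until it responds or retries run out, mirroring the
# interval/timeout/retries semantics of the Compose healthcheck above.
wait_healthy() {
  url=$1; retries=${2:-3}; interval=${3:-10}
  i=1
  while [ "$i" -le "$retries" ]; do
    if curl -fsS --max-time 5 "$url" > /dev/null 2>&1; then
      echo healthy; return 0
    fi
    sleep "$interval"
    i=$((i + 1))
  done
  echo unhealthy; return 1
}

# Example (hypothetical): wait_healthy http://localhost:8080/health 3 10
```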
- Non-root user execution where possible
- Minimal base image (python:3.11-slim)
- No unnecessary packages installed
- Environment variable-based configuration
- Volume mounting for persistent data
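The non-root and minimal-image points can be sketched in a Dockerfile roughly like this (illustrative only; the user name `praison` and paths are assumptions, not taken from the shipped Dockerfiles):

```dockerfile
FROM python:3.11-slim

# Create an unprivileged user and drop root before running the service
RUN useradd --create-home --shell /usr/sbin/nologin praison
USER praison
WORKDIR /home/praison/app
```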
```bash
# Use development Dockerfile with additional tools
docker build -f Dockerfile.dev -t praisonai:dev .
docker run -it -v $(pwd):/app praisonai:dev bash
```

```bash
# Mount custom config directory
docker run -v ~/.praison:/root/.praison praisonai:ui
```

```bash
# View service status
docker-compose ps

# View resource usage
docker-compose top

# View logs for specific service
docker-compose logs ui
docker-compose logs chat
docker-compose logs api
```

- **Port conflicts**

  ```bash
  # Check port usage
  netstat -tlnp | grep :8082

  # Use different ports
  docker run -p 9082:8082 praisonai:ui
  ```

- **Environment variables not loading**

  ```bash
  # Verify .env file
  cat .env

  # Set variables directly
  docker run -e OPENAI_API_KEY=your_key praisonai:ui
  ```

- **Permission issues**

  ```bash
  # Check volume permissions
  ls -la ~/.praison/

  # Fix permissions
  sudo chown -R $(id -u):$(id -g) ~/.praison/
  ```

- **Service won't start**

  ```bash
  # Check logs
  docker-compose logs service_name

  # Restart service
  docker-compose restart service_name
  ```
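For the port-conflict case, a small bash helper can check whether a host port is already taken before choosing a mapping (a sketch; it uses bash's `/dev/tcp` so it does not depend on `netstat` being installed):

```shell
# Check whether a localhost TCP port is already in use (bash-only:
# /dev/tcp connect succeeds only if something is listening)
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 8082; then
  echo "8082 busy - remap, e.g.: docker run -p 9082:8082 praisonai:ui"
else
  echo "8082 free"
fi
```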
```bash
# Pull latest images
docker-compose pull

# Restart with new images
docker-compose up -d
```

To use specific versions, update the Dockerfile:

```dockerfile
RUN pip install "praisonai==2.2.27" "praisonaiagents==0.0.92"
```

```yaml
# docker-compose.prod.yml
version: '3.8'

services:
  ui:
    image: ghcr.io/mervinpraison/praisonai:ui
    restart: unless-stopped
    environment:
      - CHAINLIT_HOST=0.0.0.0
      - CHAINLIT_PORT=8082
    volumes:
      - praison_data:/root/.praison
    networks:
      - praison_network
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M

volumes:
  praison_data:

networks:
  praison_network:
```

For production environments, consider using nginx or a similar reverse proxy:
```nginx
upstream praisonai {
    server localhost:8082;
    server localhost:8083;
}

server {
    listen 80;

    location / {
        proxy_pass http://praisonai;
    }
}
```

This Docker setup provides a clean, organized, and scalable way to deploy PraisonAI services while solving the directory management issues described above.