This guide covers testing procedures for Stream Daemon in both local and Docker environments.
- Prerequisites
- Local Environment Testing
- Docker Environment Testing
- Ollama Integration Testing
- Continuous Integration
## Prerequisites

- Git
- Access to streaming platform APIs (Twitch, YouTube, Kick)
- Access to social platform APIs (Bluesky, Mastodon, Discord, Matrix)
- Python 3.10+
- pip or pip3
- Virtual environment (recommended)
- Docker Engine 20.10+
- docker-compose (optional)
- 2GB+ free disk space
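For the recommended virtual environment, `python3 -m venv .venv` is the usual route; as an illustration, the stdlib `venv` module does the same thing programmatically (the `.venv` path name is an arbitrary choice):

```python
# Sketch: create a project virtual environment with the stdlib venv
# module (equivalent to `python3 -m venv .venv`).
import venv
from pathlib import Path

def create_env(path: str = ".venv", with_pip: bool = True) -> Path:
    """Create a virtual environment and return its root directory."""
    root = Path(path)
    # with_pip=True also bootstraps pip inside the new environment
    venv.create(root, with_pip=with_pip)
    return root

if __name__ == "__main__":
    env = create_env()
    print(f"Created {env}; activate with: source {env}/bin/activate")
```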
## Local Environment Testing

Run the comprehensive installation test:
```bash
python3 test_local_install.py
```

This test validates:
- ✅ Python version (3.10+)
- ✅ Core dependencies (TwitchAPI, Mastodon, Bluesky, Discord, Matrix)
- ✅ AI/LLM dependencies (Gemini, Ollama)
- ✅ Security providers (AWS Secrets, Vault, Doppler)
- ✅ Stream Daemon module imports
- ✅ CVE-affected package versions
Expected Output:
```
✅ SUCCESS: 7/7 tests passed
```
If you prefer to check manually:
```bash
# Check Python version
python3 --version

# Install dependencies
pip3 install -r requirements.txt

# Test imports
python3 -c "import twitchAPI; import mastodon; import atproto; print('Core deps OK')"
python3 -c "import google.genai; import ollama; print('AI deps OK')"
```

Test that all Stream Daemon modules load correctly:
```bash
python3 -c "from stream_daemon import messaging, publisher; print('Modules OK')"
python3 -c "from stream_daemon.ai import generator; print('AI module OK')"
python3 -c "from stream_daemon.platforms.streaming import twitch, youtube, kick; print('Streaming OK')"
python3 -c "from stream_daemon.platforms.social import bluesky, mastodon, discord, matrix; print('Social OK')"
```

Verify critical packages meet minimum versions:
```bash
pip3 list | grep -E "requests|urllib3|protobuf"
```

Required versions:

- requests >= 2.32.5
- urllib3 >= 2.5.0
- protobuf >= 6.33.1
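As a sketch of what an automated check might look like, the stdlib `importlib.metadata` can compare installed versions against these minimums (the naive tuple comparison below assumes plain numeric versions; use `packaging.version` for full PEP 440 handling):

```python
# Sketch: verify installed package versions meet the minimums above.
# Naive dotted-number comparison; not a full PEP 440 implementation.
from importlib import metadata

MINIMUMS = {"requests": "2.32.5", "urllib3": "2.5.0", "protobuf": "6.33.1"}

def as_tuple(version: str) -> tuple:
    """Convert '2.32.5' -> (2, 32, 5) for comparison."""
    return tuple(int(part) for part in version.split("."))

def check_versions(minimums: dict) -> list:
    """Return a list of (package, installed, required) violations."""
    problems = []
    for pkg, required in minimums.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            problems.append((pkg, "not installed", required))
            continue
        if as_tuple(installed) < as_tuple(required):
            problems.append((pkg, installed, required))
    return problems

if __name__ == "__main__":
    for pkg, installed, required in check_versions(MINIMUMS):
        print(f"✗ {pkg}: have {installed}, need >= {required}")
```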
## Docker Environment Testing

Run the comprehensive Docker test:
```bash
./test_docker_build.sh
```

This test validates:
- ✅ Docker installation
- ✅ Docker daemon running
- ✅ Image builds successfully
- ✅ Python works in container
- ✅ All dependencies install
- ✅ docker-compose configuration (if present)
Expected Output:
```
✅ SUCCESS - Docker build and tests passed!
```
Build the image manually:
```bash
cd /path/to/twitch-and-toot
docker build -t stream-daemon:test -f Docker/Dockerfile .
```

Troubleshooting build failures:
- Check Dockerfile syntax
- Verify requirements.txt is present
- Ensure base image is accessible
- Check Docker daemon logs: `docker system events`
Start a test container:
```bash
# Create test .env
cp .env.example .env
# Edit .env with your credentials

# Run container
docker run --rm --env-file .env stream-daemon:test python3 --version

# Test with actual daemon
docker run --rm --env-file .env stream-daemon:test python3 stream-daemon.py
```

If using docker-compose:
```bash
# Validate configuration
docker-compose -f Docker/docker-compose.yml config

# Start services
docker-compose -f Docker/docker-compose.yml up -d

# Check logs
docker-compose -f Docker/docker-compose.yml logs -f

# Stop services
docker-compose -f Docker/docker-compose.yml down
```

## Ollama Integration Testing

```bash
./test_ollama_quick.sh
```

This tests basic connectivity to your Ollama server.
```bash
# Ensure .env is configured with Ollama settings
python3 test_ollama.py
```

Expected output:
```
✅ Ollama connection initialized
✅ Connected to: http://YOUR_IP:11434
✅ Model: gemma3:4b
Generated Bluesky message (202 chars):
[AI-generated message content]
Generated Mastodon message (286 chars):
[AI-generated message content]
Generated stream end message (208 chars):
[AI-generated message content]
✅ SUCCESS: All Ollama tests passed!
```
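The character counts in the output matter because Bluesky caps posts at 300 characters and Mastodon at 500 by default. Below is a minimal sketch of generating a message through Ollama's REST `/api/generate` endpoint and trimming it to a platform limit; the host, model, and prompt are placeholders, not Stream Daemon's actual internals:

```python
# Sketch: generate a stream announcement via Ollama's REST API and
# trim it to a platform's character limit. Host/model/prompt are
# illustrative placeholders.
import json
import urllib.request

LIMITS = {"bluesky": 300, "mastodon": 500}  # default platform caps

def trim(text: str, platform: str) -> str:
    """Trim text to the platform limit, adding an ellipsis if cut.
    Note: len() counts code points; Bluesky's limit is technically
    graphemes, so this is an approximation."""
    limit = LIMITS[platform]
    if len(text) <= limit:
        return text
    return text[: limit - 1].rstrip() + "…"

def generate(host: str, model: str, prompt: str) -> str:
    """Call Ollama's /api/generate endpoint (non-streaming)."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    msg = generate("http://192.168.1.100:11434", "gemma3:4b",
                   "Write a short, upbeat 'stream is live' announcement.")
    print(trim(msg, "bluesky"))
```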
Configure `.env` with Ollama settings:

```
LLM_ENABLE=True
LLM_PROVIDER=ollama
LLM_OLLAMA_HOST=http://192.168.1.100
LLM_OLLAMA_PORT=11434
LLM_MODEL=gemma3:4b
```

Test Ollama from within the Docker container:
```bash
docker run --rm --env-file .env \
  -e LLM_ENABLE=True \
  -e LLM_PROVIDER=ollama \
  -e LLM_OLLAMA_HOST=http://YOUR_OLLAMA_IP:11434 \
  -e LLM_MODEL=gemma3:4b \
  stream-daemon:test python3 test_ollama.py
```

Note: Ensure the Docker container can reach your Ollama server's IP address. Do not use `localhost`; use the actual IP or a Docker network name.
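The same reachability check can be scripted; this sketch queries Ollama's `/api/tags` endpoint (the host below is a placeholder):

```python
# Sketch: check that an Ollama server is reachable by listing its
# models via GET /api/tags. The host is a placeholder.
import json
import urllib.error
import urllib.request

def ollama_url(host: str, port: int = 11434) -> str:
    """Build the /api/tags URL, tolerating a bare IP or an http:// host.
    (Does not handle a host string that already embeds a port.)"""
    if not host.startswith("http"):
        host = f"http://{host}"
    return f"{host}:{port}/api/tags"

def check_ollama(host: str, port: int = 11434, timeout: float = 5.0):
    """Return the list of model names, or None if unreachable."""
    try:
        with urllib.request.urlopen(ollama_url(host, port), timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    models = check_ollama("192.168.1.100")
    print("Unreachable" if models is None else f"Reachable, models: {models}")
```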
If Docker can't reach Ollama:

```bash
# Test connectivity from container
docker run --rm stream-daemon:test ping -c 3 YOUR_OLLAMA_IP

# Check Ollama is listening on all interfaces
curl http://YOUR_OLLAMA_IP:11434/api/tags

# Verify firewall rules allow Docker bridge network
sudo iptables -L | grep 11434
```

## Continuous Integration

Before committing code:
```bash
# Run all tests
python3 tests/run_all_tests.py

# Check for broken or conflicting dependencies
pip3 check

# Validate syntax
python3 -m py_compile stream_daemon/**/*.py
```

Test matrix:

| Test Type | Local | Docker | CI/CD |
|---|---|---|---|
| Installation | ✅ | ✅ | ✅ |
| Unit Tests | ✅ | ✅ | ✅ |
| Integration | ✅ | ✅ | ⚠️ |
| Ollama | ✅ | ✅ | ❌ |
| Gemini | ✅ | ✅ | ⚠️ |
Legend:
- ✅ Fully supported
- ⚠️ Requires credentials/configuration
- ❌ Not available (requires local server)
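The `python3 -m py_compile stream_daemon/**/*.py` step above depends on the shell expanding `**`; a portable alternative sketch uses the stdlib `compileall` module:

```python
# Sketch: byte-compile every .py file under a package directory to
# catch syntax errors, as a portable alternative to shell globstar.
import compileall
import sys

def syntax_check(package_dir: str = "stream_daemon") -> bool:
    """Return True if every .py file under package_dir compiles."""
    # quiet=1 suppresses the per-file listing but still prints errors
    return bool(compileall.compile_dir(package_dir, quiet=1))

if __name__ == "__main__":
    sys.exit(0 if syntax_check() else 1)
```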
## Troubleshooting

Error: `Python 3.10+ required, found 3.9.x`

Solution:

```bash
# Ubuntu/Debian
sudo apt install python3.10 python3.10-venv

# Update alternatives
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
```

Error: `ModuleNotFoundError: No module named 'X'`
Solution:
```bash
# Reinstall dependencies
pip3 install -r requirements.txt --force-reinstall

# Or install specific package
pip3 install X
```

Error: `ERROR: failed to solve`
Solution:
```bash
# Clear Docker cache
docker builder prune -a

# Rebuild without cache
docker build --no-cache -t stream-daemon:test -f Docker/Dockerfile .
```

Error: `Connection refused` or `Failed to connect`
Solution:
- Verify Ollama is running: `curl http://YOUR_IP:11434/api/tags`
- Check firewall: `sudo ufw allow 11434/tcp`
- Use an IP address, not `localhost`, from Docker
- Verify `.env` has correct `LLM_OLLAMA_HOST` and `LLM_OLLAMA_PORT`
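That last `.env` check can be scripted; a sketch assuming simple `KEY=VALUE` lines (helper names are illustrative):

```python
# Sketch: read LLM_OLLAMA_HOST / LLM_OLLAMA_PORT from a simple
# KEY=VALUE .env file and flag values that won't work from Docker.
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def ollama_env_problems(env: dict) -> list:
    """Return human-readable problems with the Ollama settings."""
    problems = []
    host = env.get("LLM_OLLAMA_HOST", "")
    if not host:
        problems.append("LLM_OLLAMA_HOST is not set")
    elif "localhost" in host or "127.0.0.1" in host:
        problems.append("LLM_OLLAMA_HOST points at localhost, "
                        "which is not reachable from inside Docker")
    if not env.get("LLM_OLLAMA_PORT", "").isdigit():
        problems.append("LLM_OLLAMA_PORT is missing or not a number")
    return problems

if __name__ == "__main__":
    with open(".env") as f:
        for problem in ollama_env_problems(parse_env(f.read())):
            print("✗", problem)
```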
## Next Steps

After successful testing:

- Configure platforms: set up API credentials in `.env`
- Start monitoring: run `python3 stream-daemon.py` or use Docker
- Monitor logs: check for successful connections and message posts
- Production deployment: follow systemd-service.md for a persistent service
## Related Documentation

- Running Tests - Detailed test suite documentation
- AI Messages - AI/LLM setup guide
- Ollama Migration - Ollama-specific setup
- Quick Reference - Helper scripts