Quick reference for all testing scripts in the Stream Daemon project.
| Script | Purpose | Environment | Run Time |
|---|---|---|---|
| `tests/test_connection.py` | Full production integration test | Local | 30s-1min |
| `tests/test_local_install.py` | Validate local Python installation | Local | 5-10s |
| `tests/test_ollama.py` | Test Ollama AI integration | Local/Docker | 30s |
| `test_ollama_quick.sh` | Quick Ollama connectivity check | Local | 5s |
| `test_docker_build.sh` | Build and test Docker image | Docker | 2-5min |
| `tests/run_all_tests.py` | Full test suite | Local | 1-2min |
```bash
# Run comprehensive production test with real .env data
python3 tests/test_connection.py
```

Tests:
- ✅ Streaming platform authentication (Twitch, YouTube, Kick)
- ✅ Social platform authentication (Mastodon, Bluesky, Discord, Matrix)
- ✅ LLM provider authentication (Ollama or Gemini)
- ✅ Live stream detection
- ✅ AI message generation with real stream data
- ✅ Optional: Post AI message to all social platforms
- ✅ Production readiness validation
```bash
# Run comprehensive local environment test
python3 tests/test_local_install.py
```

Tests:
- ✅ Python 3.10+ version
- ✅ Core dependencies (Twitch, Mastodon, Bluesky, Discord, Matrix)
- ✅ AI dependencies (Gemini, Ollama) - optional
- ✅ Security providers (AWS, Vault, Doppler) - optional
- ✅ Stream Daemon module imports
- ✅ CVE security patches
```bash
# Run comprehensive Docker build test
./test_docker_build.sh
```

Tests:
- ✅ Docker installation
- ✅ Docker daemon status
- ✅ Image build
- ✅ Python in container
- ✅ Dependencies in container
- ✅ docker-compose validation
```bash
# Quick connectivity test
./test_ollama_quick.sh

# Full AI generation test
python3 tests/test_ollama.py
```

Requirements:
- Ollama server running (local or remote)
- `.env` configured with Ollama settings
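The `.env` requirement above boils down to a handful of `KEY=value` lines; as a minimal sketch, here is what parsing them looks like, assuming plain `KEY=value` lines with `#` comment lines (the real scripts may use `python-dotenv` instead):

```python
# Minimal .env parser sketch (illustrative; the project may use python-dotenv).
# Assumes plain KEY=value lines, with '#' starting a comment line.
def parse_env(text: str) -> dict[str, str]:
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env
```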
```bash
# Run all unit and integration tests
python3 tests/run_all_tests.py
```

Purpose: Comprehensive production integration test that validates your entire setup using real .env configuration and live stream data.
Usage:

```bash
python3 tests/test_connection.py
```

What it tests:
- Streaming Platforms - Authenticates and checks live status for all enabled platforms (Twitch, YouTube, Kick)
- Social Platforms - Authenticates all enabled social media platforms (Mastodon, Bluesky, Discord, Matrix)
- LLM Provider - Tests AI message generation using real live stream data
- End-to-End Flow - Optionally posts AI-generated messages to all social platforms
Exit Codes:
- `0` - All tests passed, production ready
- `1` - One or more tests failed
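Because the script signals pass/fail through its exit code, a wrapper or CI step can gate on it directly; a minimal sketch (the `passed` helper is illustrative, not part of the project):

```python
import subprocess
import sys

def passed(cmd: list[str]) -> bool:
    """Run a test script and report whether it exited 0 (all tests passed)."""
    return subprocess.run(cmd).returncode == 0

# e.g. passed([sys.executable, "tests/test_connection.py"])
```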
Output Example:

```
🔬 Stream Daemon - Production Integration Test
================================================
📡 Testing Streaming Platforms...
✅ Twitch - LIVE - 110 viewers
✅ YouTube - LIVE - 245 viewers
✅ Kick - LIVE - 11,002 viewers
💬 Testing Social Platforms...
✅ Mastodon - Authenticated
✅ Bluesky - Authenticated
✅ Discord - Authenticated
✅ Matrix - Authenticated
🤖 Testing LLM Generation...
✅ Generated AI message (258 chars)
Would you like to post this to all social platforms? (yes/no): yes
✅ Mastodon - Posted (ID: 115923811563045709)
✅ Bluesky - Posted (ID: 3mcskic5row2b)
✅ Discord - Posted (ID: 1462916988945825874)
✅ Matrix - Posted (ID: $rpa3cON...)
================================================
✅ PRODUCTION READY - All systems operational!
================================================
```
Purpose: Validates that your local Python environment has all required dependencies and meets security requirements.
Usage:

```bash
python3 tests/test_local_install.py
```

Exit Codes:
- `0` - All tests passed
- `1` - One or more tests failed
Output Example:

```
Stream Daemon - Local Installation Test
========================================
[1/7] Checking Python version...
✅ Python 3.11.2
[2/7] Checking core dependencies...
✅ All core dependencies installed
[3/7] Checking AI/LLM dependencies...
✅ Gemini client available
✅ Ollama client available
[4/7] Checking security dependencies...
✅ AWS Secrets Manager available
⚠️ Vault client not installed (optional)
[5/7] Checking requirements.txt...
✅ requirements.txt found and readable
[6/7] Testing Stream Daemon imports...
✅ All Stream Daemon modules importable
[7/7] Checking CVE-affected packages...
✅ requests >= 2.32.5
✅ urllib3 >= 2.5.0
✅ protobuf >= 6.33.1
====================================
✅ SUCCESS: 7/7 tests passed
====================================
```
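The CVE step above is essentially a version-floor comparison; a rough sketch of the same idea using `importlib.metadata` (floors taken from the output above; helper names are illustrative, not the script's actual internals):

```python
from importlib.metadata import PackageNotFoundError, version

# Minimum safe versions, per the CVE check output above.
MINIMUMS = {"requests": (2, 32, 5), "urllib3": (2, 5, 0), "protobuf": (6, 33, 1)}

def parse_version(v: str) -> tuple[int, ...]:
    """Turn '2.32.5' into (2, 32, 5); non-numeric parts are skipped."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def meets_floor(pkg: str, floor: tuple[int, ...]) -> bool:
    """True when the installed version of pkg is at or above the floor."""
    try:
        return parse_version(version(pkg)) >= floor
    except PackageNotFoundError:
        return False
```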
Purpose: Builds a Docker image and validates the containerized environment.
Usage:

```bash
./test_docker_build.sh
```

What it does:
- Checks Docker installation
- Verifies the Docker daemon is running
- Builds the image from `Docker/Dockerfile`
- Reports image size
- Tests Python in the container
- Tests dependency imports in the container
- Validates `docker-compose.yml` (if present)
Output Example:

```
🐳 Stream Daemon - Docker Build & Test
======================================
[1/6] Checking Docker installation...
✅ Docker version 24.0.5
[2/6] Checking Docker daemon...
✅ Docker daemon is running
[3/6] Building Docker image...
✅ Docker image built successfully
[4/6] Checking image size...
✅ Image size: 1.2GB
[5/6] Testing Python environment...
✅ Python 3.11.2
[6/6] Testing dependencies...
✅ All dependencies working
====================================
✅ SUCCESS - Docker build passed!
====================================
```
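Steps 1-2 above (CLI present, daemon reachable) can be replicated from Python if you need the same gate in another script; a hedged sketch, relying on `docker info` exiting non-zero when the daemon is unreachable:

```python
import shutil
import subprocess

def docker_available() -> bool:
    """True when the docker CLI exists and the daemon answers `docker info`."""
    if shutil.which("docker") is None:
        return False
    return subprocess.run(["docker", "info"], capture_output=True).returncode == 0
```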
Purpose: Comprehensive test of Ollama AI integration with message generation.
Prerequisites:

```bash
# .env must contain:
LLM_ENABLE=True
LLM_PROVIDER=ollama
LLM_OLLAMA_HOST=http://192.168.1.100
LLM_OLLAMA_PORT=11434
LLM_MODEL=gemma2:2b
```

Usage:

```bash
# Local testing
python3 tests/test_ollama.py

# Docker testing
docker run --rm --env-file .env stream-daemon:test python3 test_ollama.py
```

What it tests:
- ✅ Ollama server connectivity
- ✅ Model availability
- ✅ Bluesky message generation (300 char limit)
- ✅ Mastodon message generation (500 char limit)
- ✅ Stream end message generation
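The length checks above reduce to a per-platform character budget; a trivial sketch (limits from the list above; the helper name is illustrative):

```python
# Per-platform character limits, as exercised by test_ollama.py.
LIMITS = {"bluesky": 300, "mastodon": 500}

def fits(message: str, platform: str) -> bool:
    """True when the generated message fits the platform's limit."""
    return len(message) <= LIMITS[platform]
```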
Output Example:

```
Testing Ollama Integration
===========================
✅ Ollama connection initialized
✅ Connected to: http://192.168.1.100:11434
✅ Model: gemma3:4b
Generated Bluesky message (202 chars):
🎮 Live now! Join us for an epic gaming session...
Generated Mastodon message (286 chars):
Hey everyone! 👋 We're going live right now...
Generated stream end message (208 chars):
That's a wrap! 🎬 Thanks to everyone...
✅ SUCCESS: All Ollama tests passed!
```
Purpose: Quick connectivity check to Ollama server.
Usage:

```bash
./test_ollama_quick.sh
```

What it does:
- Loads Ollama settings from `.env`
- Tests the HTTP connection to the Ollama API
- Lists available models
Output Example:

```
Quick Ollama Connectivity Test
===============================
Testing connection to: http://192.168.1.100:11434
✅ Ollama server is reachable
Available models:
- gemma3:4b
- llama3.2:3b
- mistral:7b
✅ Connection test passed!
```
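The model list in the output above comes from Ollama's `/api/tags` endpoint, which returns JSON of the form `{"models": [{"name": "gemma3:4b"}, ...]}`; a sketch of the same check in Python (the base URL is an example, and the helper names are illustrative):

```python
import json
import urllib.request

def model_names(tags_payload: dict) -> list[str]:
    """Pull model names out of an /api/tags response payload."""
    return [m["name"] for m in tags_payload.get("models", [])]

def list_ollama_models(base_url: str = "http://192.168.1.100:11434") -> list[str]:
    """Fetch and parse the model list from a running Ollama server."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return model_names(json.load(resp))
```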
Common issues:
- Python version too old: `sudo apt install python3.11`
- Missing dependencies: `pip3 install -r requirements.txt`
- CVE warnings: `pip3 install --upgrade requests urllib3 protobuf`
Common issues:
- Docker not installed: install from https://docs.docker.com/get-docker/
- Docker daemon not running: `sudo systemctl start docker`
- Build fails: clear the cache and rebuild:
  ```bash
  docker builder prune -a
  docker build --no-cache -t stream-daemon:test -f Docker/Dockerfile .
  ```
Common issues:
- Connection refused:
  - Check Ollama is running: `curl http://YOUR_IP:11434/api/tags`
  - Verify `.env` has the correct IP/port
  - Check the firewall: `sudo ufw allow 11434/tcp`
- Model not found:
  ```bash
  # List available models
  ollama list
  # Pull the missing model
  ollama pull gemma3:4b
  ```
- From a Docker container:
  - Use the actual IP address, not `localhost`
  - Ensure the Docker network can reach the Ollama server
  - Test: `docker run --rm stream-daemon:test ping YOUR_OLLAMA_IP`
These test scripts can be integrated into continuous integration pipelines:
```yaml
# Example GitHub Actions workflow
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Test installation
        run: python3 tests/test_local_install.py
      - name: Build Docker
        run: ./test_docker_build.sh
      - name: Run test suite
        run: python3 tests/run_all_tests.py
```

- Check docs/development/testing.md for detailed troubleshooting
- Review test output carefully - it shows specific remediation steps
- Open an issue on GitHub with test output if problems persist