A complete rewrite with SvelteKit + Tauri featuring:
- Modern responsive UI with dark/light themes
- Native desktop app (macOS, Windows, Linux)
- MCP Studio for building and testing MCP servers
- Visual multi-agent workflow builder
- Tools debugger with model comparison
- Prompt Lab with A/B testing and version control
Ollama Workbench is a comprehensive, enterprise-grade platform for managing, testing, and utilizing AI models from the Ollama library and external providers. Built with security, scalability, and observability at its core, it provides advanced features for AI agent orchestration, workflow automation, and collaborative AI development.
One-Command Setup:
```bash
git clone https://github.com/marc-shade/Ollama-Workbench.git
cd Ollama-Workbench
python setup_workbench.py
```

Start the Platform:

```bash
./start_workbench.sh    # Unix/Linux/macOS
start_workbench.bat     # Windows
```

Access the Interface:
- Web UI: http://localhost:8501
- Default login: No authentication required initially (configurable)
- 🌟 Key Features
- 🛡️ Security Features
- 📊 Observability Features
- 💬 Chat & AI Interaction
- ⚙️ Advanced Workflows
- 🗄️ Knowledge Management
- 🛠️ Model Management
- 📊 Testing & Evaluation
- 🔧 Installation
- 📚 Documentation
- 🤝 Contributing
- Zero-Trust Security with comprehensive RBAC system
- End-to-End Encryption for data at rest and in transit
- Comprehensive Audit Logging for compliance (GDPR, HIPAA, SOX, ISO27001)
- Advanced Observability with Opik integration and performance monitoring
- Scalable Microservices architecture with FastAPI and Streamlit
- Ollama Models - Local LLM execution with full privacy
- OpenAI API - GPT-3.5, GPT-4, and latest models
- Groq - Ultra-fast inference with Llama and Mixtral
- Mistral AI - Mistral and Mixtral model families
- Unified Interface - Single platform for all providers
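The unified-interface idea can be sketched in a few lines: every backend is registered under a name and exposes the same call signature, so callers never branch on the provider. This is an illustrative sketch, not the project's actual code; the `Provider` registry and the stub backends below are made up (a real Ollama backend would call its HTTP API instead).

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch: every provider exposes the same chat(prompt)
# signature, so the rest of the platform stays backend-agnostic.
@dataclass
class Provider:
    name: str
    chat: Callable[[str], str]

REGISTRY: Dict[str, Provider] = {}

def register(provider: Provider) -> None:
    REGISTRY[provider.name] = provider

def chat(provider_name: str, prompt: str) -> str:
    # Single entry point regardless of whether the backend is
    # Ollama, OpenAI, Groq, or Mistral.
    return REGISTRY[provider_name].chat(prompt)

# Stub backends stand in for real API clients here.
register(Provider("ollama", lambda p: f"[ollama] {p}"))
register(Provider("openai", lambda p: f"[openai] {p}"))

print(chat("ollama", "hello"))  # → [ollama] hello
```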
- Multi-Agent Systems with specialized AI agents
- Visual Workflow Builder (Nodes) for complex pipelines
- Project Management with AI-assisted planning
- Brainstorming Sessions with collaborative AI teams
- Research Automation with multi-source intelligence
- JWT-based Authentication with configurable session management
- Multi-Factor Authentication support (foundation ready)
- Role-Based Access Control (RBAC) with granular permissions
- Account Lockout Protection with rate limiting
- Password Policy Enforcement with complexity requirements
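Account lockout with rate limiting typically means tracking failed attempts inside a sliding window. The sketch below shows one common way to do this; the class name, thresholds, and window are illustrative, not the workbench's actual implementation.

```python
import time
from collections import defaultdict, deque

# Illustrative sketch: lock an account after too many failed
# logins within a sliding time window.
class LockoutGuard:
    def __init__(self, max_failures=5, window_s=300, clock=time.monotonic):
        self.max_failures = max_failures
        self.window_s = window_s
        self.clock = clock
        self.failures = defaultdict(deque)  # user -> failure timestamps

    def record_failure(self, user: str) -> None:
        self.failures[user].append(self.clock())

    def is_locked(self, user: str) -> bool:
        q = self.failures[user]
        now = self.clock()
        while q and now - q[0] > self.window_s:  # drop expired entries
            q.popleft()
        return len(q) >= self.max_failures

guard = LockoutGuard(max_failures=3, window_s=60)
for _ in range(3):
    guard.record_failure("alice")
print(guard.is_locked("alice"))  # → True
```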
- AES-256-GCM Encryption for sensitive data
- RSA Encryption for key exchange and secrets
- Automatic Key Rotation with configurable intervals
- Secure API Key Storage with encryption at rest
- Data Classification and loss prevention
- Comprehensive Audit Trails with tamper-proof logging
- Compliance Templates for GDPR, HIPAA, SOX, ISO27001
- Real-time Security Monitoring with threat detection
- Automated Compliance Reporting with violation alerts
- Data Retention Policies with automated cleanup
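A common way to make audit trails tamper-evident is to chain entries with HMACs: each record's MAC covers the previous MAC, so editing any earlier record breaks every later one. The key and event schema below are hypothetical, shown only to illustrate the technique.

```python
import hmac, hashlib, json

# Sketch of tamper-evident audit logging via an HMAC chain.
# The key and event fields are illustrative, not the project's schema.
KEY = b"audit-signing-key"

def append_entry(log, event: dict) -> None:
    prev_mac = log[-1]["mac"] if log else ""
    payload = json.dumps(event, sort_keys=True) + prev_mac
    mac = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "mac": mac})

def verify_chain(log) -> bool:
    prev_mac = ""
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True) + prev_mac
        expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev_mac = entry["mac"]
    return True

log = []
append_entry(log, {"user": "alice", "action": "login"})
append_entry(log, {"user": "alice", "action": "delete_model"})
print(verify_chain(log))            # → True
log[0]["event"]["action"] = "logout"  # tamper with an old record
print(verify_chain(log))            # → False
```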
- Real-time Metrics with custom dashboards
- Performance Alerts with configurable thresholds
- Token Usage Tracking across all providers
- Response Time Analysis with percentile breakdowns
- Resource Utilization monitoring (CPU, memory, GPU)
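A percentile breakdown of response times can be computed directly from recorded latencies. The sketch below uses the stdlib `statistics.quantiles`; the latency values are synthetic, not real measurements from the platform.

```python
import statistics

# Synthetic latencies; a real deployment would collect these per request.
latencies_ms = [120, 95, 300, 110, 105, 980, 130, 115, 125, 100]

def percentile(data, p):
    # statistics.quantiles with n=100 returns the 1st..99th cut points.
    return statistics.quantiles(sorted(data), n=100)[p - 1]

p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
print(f"p50={p50:.0f}ms p95={p95:.0f}ms")
```

Note how the single 980 ms outlier inflates p95 far above p50, which is exactly why percentile breakdowns beat plain averages for alerting.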
- Opik Integration for comprehensive LLM observability
- Request/Response Tracing with full context capture
- Error Tracking with detailed error analysis
- Cost Analytics across different AI providers
- Usage Pattern Analysis for optimization insights
- Usage Reports with trend analysis
- Model Performance Comparisons across providers
- Security Event Analysis with risk assessment
- Capacity Planning with growth projections
- Export Capabilities for external analysis
- Pre-built Agent Types: Coder, Analyst, Creative Writer, Researcher
- Custom Agent Creation with specialized prompts and behaviors
- Metacognitive Enhancements: Chain of Thought, Visualization of Thought
- Voice & Personality Customization for tailored interactions
- Memory Systems with episodic and semantic memory support
- Text-to-Speech (TTS) with multiple voice options
- Speech Recognition for voice input
- Vision Model Support for image analysis and description
- Document Processing with OCR and extraction
- File Upload & Analysis for various formats
- Dynamic Corpus Integration for contextual enhancement
- Vector Database Support with ChromaDB integration
- Semantic Search with advanced embedding models
- Document Chunking with intelligent splitting strategies
- Real-time Context Injection for improved responses
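Document chunking for retrieval usually slides a fixed-size window with some overlap so context spanning a boundary is not lost. The sizes below are illustrative; the platform's actual splitting strategy may differ.

```python
# Sketch of overlap-based chunking (sizes are illustrative).
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50):
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping shared context
    return chunks

doc = "x" * 450
chunks = chunk_text(doc, chunk_size=200, overlap=50)
print(len(chunks))  # 450 chars at a 150-char stride → 3 chunks
```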
- Brainstorm Mode: Collaborative AI teams for ideation
- Research Workflows: Multi-source intelligence gathering
- Build System: Autonomous software development with specialized agents
- Project Management: AI-assisted task planning and execution
- Quality Assurance: Automated testing and review processes
- Drag-and-Drop Interface for workflow design
- Pre-built Components: AI models, data processors, connectors
- Custom Node Creation with Python scripting support
- Conditional Logic with branching and loops
- Real-time Execution with progress monitoring
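The node-and-edge model with conditional logic can be reduced to a small interpreter: each node transforms the data, and an edge may be a callable that picks the next node from the data. The node names and structure below are hypothetical, not the Nodes builder's real format.

```python
# Minimal sketch of a node workflow with a conditional branch.
def run_workflow(nodes, start, data):
    history, node = [], start
    while node is not None:
        history.append(node)
        spec = nodes[node]
        data = spec["fn"](data)
        nxt = spec.get("next")
        # A callable edge is a conditional: it picks the branch from the data.
        node = nxt(data) if callable(nxt) else nxt
    return data, history

nodes = {
    "load":  {"fn": lambda d: d + [1, 2, 3], "next": "check"},
    "check": {"fn": lambda d: d,
              "next": lambda d: "big" if sum(d) > 5 else "small"},
    "big":   {"fn": lambda d: d + ["big"], "next": None},
    "small": {"fn": lambda d: d + ["small"], "next": None},
}
result, path = run_workflow(nodes, "load", [])
print(result, path)  # → [1, 2, 3, 'big'] ['load', 'check', 'big']
```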
- API Gateway with rate limiting and authentication
- Webhook Support for external integrations
- Database Connectors for various data sources
- File System Integration with secure access controls
- Third-party Tool Integration via standardized protocols
- Text Corpus Creation from various sources
- Web Content Integration with intelligent extraction
- Document Processing with format support (PDF, DOCX, TXT, MD)
- Knowledge Graph Construction for relationship mapping
- Version Control for corpus evolution tracking
- Web Scraping with respect for robots.txt
- API Data Integration from external services
- Database Connectivity for structured data
- File Upload System with virus scanning
- Real-time Data Feeds for dynamic knowledge
- Full-text Search across all knowledge bases
- Semantic Search with embedding-based similarity
- Faceted Search with filters and categories
- Search Analytics for usage optimization
- Personalized Recommendations based on user patterns
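Embedding-based semantic search boils down to ranking documents by vector similarity to the query. The toy vectors below are made up; in the platform, embeddings would come from an embedding model via ChromaDB.

```python
import math

# Toy embedding-similarity ranking; the vectors are fabricated.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.8, 0.2, 0.1],
    "doc_tax":  [0.0, 0.1, 0.95],
}
query = [0.9, 0.1, 0.0]  # "about cats"
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # → ['doc_cats', 'doc_dogs', 'doc_tax']
```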
- Local Model Management: List, download, update, remove Ollama models
- Model Information: Detailed specs, capabilities, and performance metrics
- Health Monitoring: Model availability and performance tracking
- Usage Analytics: Token consumption and cost analysis
- Model Comparison: Side-by-side performance evaluation
- Ollama Server Management: Start, stop, configure, monitor
- Resource Allocation: CPU, memory, and GPU settings
- Concurrency Controls: Request queuing and parallel processing
- Performance Tuning: Optimization for specific workloads
- Backup & Recovery: Model and configuration backup
- API Key Management: Secure storage and rotation
- Rate Limit Monitoring: Usage tracking and alerts
- Cost Management: Budget controls and spending alerts
- Provider Comparison: Performance and cost analysis
- Failover Configuration: Automatic provider switching
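Automatic provider switching is essentially ordered retry: try each provider in priority order and fall through on failure. The provider callables below are stubs; real backends would be API clients with their own error types.

```python
# Sketch of automatic provider failover with stubbed backends.
class ProviderError(Exception):
    pass

def call_with_failover(providers, prompt):
    errors = []
    for name, fn in providers:
        try:
            return name, fn(prompt)
        except ProviderError as exc:
            errors.append((name, str(exc)))  # record and try the next one
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):
    raise ProviderError("rate limited")

providers = [("openai", flaky), ("groq", lambda p: f"groq says: {p}")]
used, reply = call_with_failover(providers, "ping")
print(used, reply)  # → groq groq says: ping
```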
- Feature Testing: JSON handling, function calling, structured output
- Performance Benchmarking: Speed, accuracy, and consistency
- Comparative Analysis: Multi-model evaluation on same tasks
- Regression Testing: Automated testing for model updates
- Custom Test Suites: Domain-specific evaluation frameworks
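A JSON-handling feature test, for example, checks that a model reply parses as JSON and contains the expected keys. The replies below are canned strings, not real model output, and the helper is illustrative.

```python
import json

# Sketch of a structured-output check: does the reply parse as JSON
# and carry the required keys?
def check_json_reply(reply: str, required_keys: set) -> bool:
    try:
        obj = json.loads(reply)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and required_keys <= obj.keys()

good = '{"answer": "42", "confidence": 0.9}'
bad = 'Sure! Here is some JSON: {"answer": "42"}'  # chatty preamble breaks parsing
print(check_json_reply(good, {"answer", "confidence"}))  # → True
print(check_json_reply(bad, {"answer"}))                 # → False
```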
- Response Quality Metrics: Coherence, relevance, factuality
- Contextual Understanding: Multi-turn conversation evaluation
- Bias Detection: Automated bias and fairness testing
- Safety Evaluation: Content safety and harmfulness detection
- Performance Profiling: Resource usage and optimization
- Image Understanding: Object detection and scene analysis
- OCR Capabilities: Text extraction from images
- Visual Question Answering: Image-based reasoning
- Art and Design: Creative image analysis
- Medical Imaging: Specialized vision model testing
- Python: 3.8+ (recommended: 3.11)
- Operating System: Windows, macOS, Linux
- Memory: 8GB+ RAM (16GB+ recommended)
- Storage: 10GB+ free space
- Network: Internet connection for model downloads
```bash
# Clone the repository
git clone https://github.com/marc-shade/Ollama-Workbench.git
cd Ollama-Workbench

# Run automated setup
python setup_workbench.py
```

This script automatically:
- ✅ Creates virtual environment
- ✅ Installs all dependencies
- ✅ Downloads and configures Ollama
- ✅ Initializes security framework
- ✅ Sets up directories and configuration
- ✅ Creates startup scripts
- ✅ Runs basic tests
```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate   # Unix/Linux/macOS
venv\Scripts\activate      # Windows

# Install dependencies
pip install -r requirements.txt

# Install Ollama (if not already installed)
curl -fsSL https://ollama.ai/install.sh | sh   # Unix/Linux/macOS
# For Windows: download from https://ollama.ai/download

# Start Ollama server
ollama serve

# Download a default model
ollama pull llama3.2:1b

# Start the workbench
streamlit run main.py
```

Initial configuration is created automatically, but you can customize:
```json
{
  "OLLAMA_HOST": "http://localhost:11434",
  "WORKBENCH_PORT": 8501,
  "ENABLE_ENHANCED_SECURITY": true,
  "ENABLE_AUTH": false,
  "ENABLE_RBAC": true,
  "ENABLE_AUDIT_LOGGING": true,
  "ENABLE_ENCRYPTION": true,
  "ENABLE_OBSERVABILITY": true
}
```

- Technical Architecture - System design and architecture
- Security & Compliance - Security features and compliance
- API Documentation - Pipeline framework and APIs
- Deployment Guide - Production deployment strategies
- Contributing Guide - Development guidelines
- User Stories - Detailed use cases and workflows
- Implementation Roadmap - Development timeline
- UX Design Guide - Interface design principles
- Observability Specification - Monitoring and tracing
- Glossary - Domain terminology and concepts
- Quick Start Guide - Get up and running in minutes
- First Steps Tutorial - Basic platform usage
- Security Setup Guide - Enable authentication and RBAC
- Workflow Creation Tutorial - Build your first workflow
- API Integration Guide - Connect external services
- Custom Agent Development - Build specialized AI agents
- Security Hardening - Production security setup
- Performance Optimization - Scale and optimize
- Compliance Configuration - Meet regulatory requirements
- Monitoring & Alerting - Comprehensive observability
We welcome contributions from the community! Please see our Contributing Guide for details.
```bash
# Clone and set up the development environment
git clone https://github.com/marc-shade/Ollama-Workbench.git
cd Ollama-Workbench
python setup_workbench.py

# Install development dependencies
pip install -r requirements-dev.txt

# Run tests
python -m pytest tests/

# Run linting
ruff check .
flake8 .

# Run security checks
python -m bandit -r .
```

- 🐛 Bug Reports: Help us identify and fix issues
- 🚀 Feature Requests: Suggest new capabilities
- 📖 Documentation: Improve guides and tutorials
- 🧪 Testing: Add test cases and improve coverage
- 🔒 Security: Enhance security features and audit code
- 🎨 UI/UX: Improve user interface and experience
- 🔌 Integrations: Add support for new AI providers
This project is licensed under the MIT License - see the LICENSE file for details.
- 📧 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📚 Documentation: Complete guides in the `/docs` directory
- 🐛 Bug Reports: Use issue templates for better tracking
- 💡 Feature Requests: Share your ideas for improvements
- Ollama Team - For the excellent local LLM runtime
- Streamlit - For the amazing web app framework
- OpenAI, Groq, Mistral - For their powerful AI APIs
- Opik - For comprehensive LLM observability
- Contributors - For making this project better every day
Made with ❤️ by the Ollama Workbench Community
Transform your AI development with enterprise-grade tools, security, and observability.
