A comprehensive conversational user interface for Ollama with integrated model management, built with PyQt/PySide using the qtpy compatibility layer.
- Clean Chat Interface: Modern chat bubble interface similar to popular messaging apps
- Model Selection: Dropdown to select from available Ollama models
- Streaming Responses: Real-time streaming of AI responses (see the streaming sketch after this feature list)
- Stop Generation: Ability to stop response generation mid-stream
- Chat History: Maintains conversation context
- Clear Chat: Option to clear conversation history
- Enhanced Controls: Temperature and token limit controls
- Service Management: Start and stop Ollama service directly from the UI
- Model Download: Easy download interface with popular model suggestions
- Model Deletion: Safe model removal with confirmation
- Real-time Status: Service status monitoring with visual indicators
- Non-blocking Operations: All operations run in background threads
- Auto-detection: Automatic Ollama installation detection
- Tabbed Interface: Organized chat and management in separate tabs
- Model Synchronization: Models automatically sync between tabs
- Responsive UI: Non-blocking interface with smooth interactions
- Error Handling: Comprehensive error handling and user feedback
- Cross-platform: Windows, macOS, and Linux support
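The streaming and stop-generation features can be illustrated outside the UI with a short script against Ollama's `/api/chat` endpoint. This is a minimal sketch, not the application's actual code; the model name, the `OLLAMA_URL` constant, and the `stop_flag` convention are illustrative assumptions.

```python
import json
from typing import Dict, Optional

import requests

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint


def stream_chat(prompt: str, model: str = "llama2",
                stop_flag: Optional[Dict[str, bool]] = None) -> str:
    """Stream a chat reply from Ollama, printing tokens as they arrive.

    `stop_flag` mimics the Stop button: another thread can set
    stop_flag["stop"] = True to abort generation mid-stream.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }
    parts = []
    with requests.post(f"{OLLAMA_URL}/api/chat", json=payload, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if stop_flag and stop_flag.get("stop"):
                break  # user pressed Stop
            if not line:
                continue
            chunk = json.loads(line)  # Ollama streams one JSON object per line
            token = chunk.get("message", {}).get("content", "")
            print(token, end="", flush=True)
            parts.append(token)
            if chunk.get("done"):
                break
    return "".join(parts)


if __name__ == "__main__":
    stream_chat("Why is the sky blue?")
```

In the application, a loop like this runs on a background thread (see "Non-blocking Operations" above), which is what keeps the UI responsive and lets the Stop button interrupt generation.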
- Python 3.7+
- Ollama server (automatically managed by the UI)
- qtpy (compatibility layer for PyQt5/PySide2)
- requests
- psutil (for service management)
- Install the required Python packages: `pip install -r requirements.txt` (a sample requirements.txt is sketched below)
- The application will help you set up Ollama if it is not already installed
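For reference, a requirements.txt matching the dependencies listed above might look like the sketch below; the project's actual file may pin versions or choose a different Qt binding.

```text
qtpy        # Qt compatibility layer
PyQt5       # or PySide2 -- qtpy needs at least one Qt binding installed
requests    # HTTP client for the Ollama API
psutil      # process checks for service management
```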
- Start the Ollama server if it's not already running
- Run the application: `python main.py`
- Select a model from the dropdown (click "Refresh Models" if needed)
- Start chatting!
- Model Selection: Dropdown to choose which Ollama model to use
- Refresh Models: Button to reload available models from Ollama
- Clear Chat: Button to clear the conversation history
- Chat Area: Scrollable area displaying conversation bubbles
- Input Field: Text field for typing messages
- Send Button: Send the current message
- Stop Button: Stop the current response generation
- Status Bar: Shows current application status
- User Messages: Blue bubbles on the right side
- AI Responses: Gray bubbles on the left side (a minimal qtpy bubble sketch follows this list)
- Auto-scrolling: Automatically scrolls to show latest messages
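As a rough illustration of how such bubbles can be built with qtpy, here is a standalone sketch; the colors, function name, and layout are illustrative and not taken from the project's actual widgets.

```python
from qtpy.QtCore import Qt
from qtpy.QtWidgets import QApplication, QHBoxLayout, QLabel, QVBoxLayout, QWidget


def make_bubble(text: str, is_user: bool) -> QHBoxLayout:
    """Return a row holding one chat bubble: blue and right-aligned for the
    user, gray and left-aligned for the AI (illustrative styling only)."""
    label = QLabel(text)
    label.setWordWrap(True)
    label.setStyleSheet(
        "background: %s; color: %s; border-radius: 8px; padding: 8px;"
        % (("#2e86de", "white") if is_user else ("#d1d5db", "black"))
    )
    row = QHBoxLayout()
    # Alignment pushes the bubble to the right (user) or left (AI).
    row.addWidget(label, 0, Qt.AlignRight if is_user else Qt.AlignLeft)
    return row


if __name__ == "__main__":
    app = QApplication([])
    window = QWidget()
    layout = QVBoxLayout(window)
    layout.addLayout(make_bubble("Hello!", is_user=True))
    layout.addLayout(make_bubble("Hi! How can I help?", is_user=False))
    window.show()
    app.exec_()
```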
The application connects to Ollama at http://localhost:11434 by default. You can modify this in the OllamaClient class initialization if your Ollama server is running on a different host or port.
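For example, pointing the client at a remote host could look like the sketch below. The `base_url` parameter name and the `list_models` method are assumptions for illustration; check the actual OllamaClient definition in the source for its real signature.

```python
import requests


class OllamaClient:
    """Minimal illustrative client; the project's real OllamaClient
    may use different parameter and attribute names."""

    def __init__(self, base_url: str = "http://localhost:11434") -> None:
        self.base_url = base_url.rstrip("/")

    def list_models(self):
        # GET /api/tags returns the locally installed models.
        resp = requests.get(f"{self.base_url}/api/tags", timeout=5)
        resp.raise_for_status()
        return [m["name"] for m in resp.json().get("models", [])]


# Point the client at a non-default host and port:
client = OllamaClient(base_url="http://192.168.1.50:11434")
```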
- Make sure Ollama is running: `ollama serve`
- Verify you have models installed: `ollama list`
- Try pulling a model: `ollama pull llama2`
- Check if Ollama is running on the correct port (see the connectivity check after this list)
- Verify firewall settings aren't blocking the connection
- Try restarting the Ollama service
- Make sure qtpy and a Qt backend (PyQt5/PySide2) are installed
- Try running with a different Qt backend if issues persist (see the QT_API example after this list)
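A quick way to confirm the server is reachable on the expected port is a small check using the `requests` dependency; adjust the URL if you changed the default.

```python
import requests

try:
    # /api/tags lists locally installed models; any 200 response means
    # the Ollama server is up and reachable on this host/port.
    resp = requests.get("http://localhost:11434/api/tags", timeout=3)
    resp.raise_for_status()
    names = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is reachable; installed models:", names)
except requests.RequestException as exc:
    print("Cannot reach Ollama:", exc)
```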
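qtpy chooses its backend from the `QT_API` environment variable, so you can force a specific binding before launching the app. The example below assumes a POSIX shell; on Windows use `set QT_API=pyqt5` instead of `export`.

```bash
# Force a specific Qt binding for qtpy (pyqt5 or pyside2 for the
# dependencies listed above), then start the app.
export QT_API=pyside2
python main.py
```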
Feel free to submit issues and enhancement requests!
This project is open source and available under the MIT License.