# ColAI

[Russian README](README_RU.md)

ColAI is a fully offline platform for collaboration between multiple neural networks. The system lets several AI models communicate with each other, play games (such as Mafia), discuss projects, and work together using local Ollama models.

## ✨ Features
- Collaborative Mode: Up to 8 specialized neural networks work together on any topic
- Mafia Mode: AI players participate in Mafia game with realistic behavior
- Fully Offline: Everything works locally through Ollama, no dependency on external APIs
- Flexible Model Configuration: Choice of any Ollama model at startup
- Multimodality: Support for image and document uploads
- Live Chat: Dynamic communication between networks, with networks taking the initiative and sending short, fragmented messages

## 💻 System Requirements

Minimum:
- OS: Windows 10/11, macOS 10.15+, Linux (Ubuntu 20.04+)
- RAM: 8 GB (16 GB recommended for large models)
- Storage: 20 GB free space (for models)
- CPU: Modern processor with AVX2 support
- GPU: Optional, but NVIDIA GPU with 6+ GB VRAM recommended for better performance
Recommended:

- RAM: 32 GB
- GPU: NVIDIA RTX 3060 or better (12+ GB VRAM)
- Storage: 50+ GB SSD
## 📦 Installation

### Step 1: Install Node.js

- Download Node.js from the official website (https://nodejs.org)
- Install the LTS version (18.x or higher recommended)
- Verify installation:

```bash
node --version
npm --version
```

### Step 2: Install Ollama
Windows:
- Download the installer from ollama.ai
- Run the installer and follow instructions
- Ollama will be automatically added to PATH
macOS:
```bash
brew install ollama
# or download from ollama.ai
```

Linux:

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

### Step 3: Start Ollama
Open terminal and run:
```bash
ollama serve
```

Ollama will be available at http://localhost:11434.

Important: Ollama must be running before using ColAI!
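To confirm the server is reachable, Ollama's /api/tags endpoint lists the locally installed models. A minimal sketch in JavaScript (works in a browser console or Node 18+, both of which ship fetch):

```js
// Minimal sketch: ping the local Ollama server and list installed models.
const res = await fetch('http://localhost:11434/api/tags');
if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
const { models } = await res.json();
console.log(models.map(m => m.name)); // e.g. ["qwen2.5:14b"]
```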
### Step 4: Download Models
Recommended models for ColAI:
```bash
# Main model (recommended)
ollama pull qwen2.5:14b

# Alternative models
ollama pull llama3.2:3b   # Lightweight model for weak PCs
ollama pull deepseek-r1   # For analytical tasks
ollama pull gemma2:2b     # For Mafia games
ollama pull mistral:7b    # Universal model
```

Note: The qwen2.5:14b model requires ~8 GB RAM. For systems with less memory, use qwen2.5:7b or llama3.2:3b.
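Once a model is pulled, a quick sanity check can be run against Ollama's /api/generate endpoint (the prompt here is arbitrary):

```js
// Minimal sketch: one-off, non-streaming generation to verify a pulled model.
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'qwen2.5:14b', // any model listed by `ollama list`
    prompt: 'Reply with a single word: ready?',
    stream: false,        // return one JSON object instead of a stream
  }),
});
console.log((await res.json()).response);
```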
### Step 5: Install ColAI
- Extract the project archive
- Open terminal in the project folder
- Install dependencies (if required):
```bash
npm install
```

Note: ColAI uses native ES modules and can run without npm by opening index.html directly in the browser. However, a local server is recommended for better compatibility, since loading the page via file:// can cause CORS errors when accessing Ollama (see Troubleshooting).
### Step 6: Start Local Server (Optional)
To run via local server:
```bash
# Using Python (if installed)
python -m http.server 8000

# Or using Node.js http-server
npx http-server -p 8000

# Or using PHP
php -S localhost:8000
```

Then open in browser: http://localhost:8000
Alternative: You can open index.html directly in the browser (Chrome, Firefox, Edge), but see the CORS notes in Troubleshooting if requests to Ollama fail.
## 🎮 Usage

### First Launch

- Ensure Ollama is running:
  - Open a terminal and run ollama serve
  - Or check that the Ollama process is already running
- Open ColAI in the browser
- Configure the model:
  - In the "Ollama Model" field, enter the model name (e.g. qwen2.5:14b)
  - Click "Check Connection" to verify Ollama availability (a sketch of such a check follows this list)
  - Ensure the model is downloaded: ollama pull qwen2.5:14b
- Start working:
  - Enter a project name
  - Describe the discussion topic
  - Configure parameters (temperature, tokens, etc.)
  - Click "Start Collaboration"
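The sketch below shows how such a connection check might look; the function name is an illustrative assumption, not ColAI's actual code:

```js
// Hypothetical sketch of a "Check Connection" handler: confirms Ollama is up
// and that the configured model has been pulled. All names are illustrative.
async function checkConnection(modelName) {
  try {
    const res = await fetch('http://localhost:11434/api/tags');
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const { models } = await res.json();
    return models.some(m => m.name === modelName)
      ? 'ok'
      : `model "${modelName}" is not pulled (run: ollama pull ${modelName})`;
  } catch (err) {
    return `Ollama not available: ${err.message}`;
  }
}

console.log(await checkConnection('qwen2.5:14b'));
```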
### Collaborative Mode

- Project Setup:
  - Enter a project name
  - Describe the discussion topic
  - Upload files if needed (images, PDF, text)
- Network Selection:
  - Select which neural networks participate (up to 8)
  - Each network has its own specialization:
    - Analytical Network: Critical analysis
    - Creative Network: Creative thinking
    - Implementation Network: Practical implementation
    - Data Science Network: Data analysis
    - Ethical Network: Ethical issues
    - User Experience Network: User experience
    - Systems Thinking Network: Systems thinking
    - Devil's Advocate Network: Critical validation
- Parameter Configuration:
  - Temperature: Controls response randomness (0.0-2.0)
  - Max Tokens: Maximum response length
  - Top P: Diversity control
  - Iterations: Number of discussion iterations
- Start Discussion:
  - Click "Start Collaboration"
  - Networks discuss the topic in turns
  - After each round, a summary is created
  - Networks vote on whether to accept the summary
  - The process repeats until all iterations are completed (a simplified sketch of this loop follows this list)
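A heavily simplified, self-contained sketch of that round structure (the prompt wording is an assumption, and voting is omitted; ColAI's real loop lives in modules/framework.js and is more involved):

```js
// Simplified sketch of one discussion round: each network takes a turn,
// seeing earlier replies, then a final call drafts the round summary.
async function ask(model, prompt) {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  return (await res.json()).response;
}

async function runRound(model, roles, topic) {
  const replies = [];
  for (const role of roles) {
    replies.push(await ask(
      model,
      `You are the ${role}. Topic: ${topic}\nReplies so far:\n${replies.join('\n')}`,
    ));
  }
  return ask(model, `Summarize these replies in a few sentences:\n${replies.join('\n')}`);
}

console.log(await runRound('qwen2.5:14b',
  ['Analytical Network', 'Creative Network'], 'Offline AI collaboration'));
```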
### Mafia Mode

- Navigate to Mafia mode through the navigation menu
- Configure the game:
  - Number of players (4-8)
  - Number of mafia members
  - Number of discussion rounds
  - Game language (Russian/English)
- Click "Start Game"
- The game proceeds through day and night phases
- You can view each player's "thoughts"
## ⚙️ Configuration

### Model Selection

When starting a project, enter the Ollama model name in the "Ollama Model" field. Examples:

- qwen2.5:14b - Recommended model (14B parameters)
- qwen2.5:7b - Lightweight version (7B parameters)
- llama3.2:3b - Lightweight model for weak PCs
- deepseek-r1 - For analytical tasks
- mistral:7b - Universal model
The model is saved in localStorage and will be used on next launch.
### Model Parameter Configuration

In the "Model Settings" section, you can configure the following (a sketch of how they map onto Ollama's API follows this list):

- System Prompt Template: System prompt template for networks
- Temperature: Generation temperature (0.0-2.0)
- Max Tokens: Maximum tokens in response
- Top P: Diversity parameter (0.0-1.0)
- Presence Penalty: Penalty for topic repetition
- Frequency Penalty: Penalty for phrase repetition
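For reference, these settings correspond directly to Ollama's request options (num_predict is Ollama's name for the max-token limit); how ColAI forwards them internally is not shown here:

```js
// Sketch: passing the settings above as Ollama /api/generate options.
// The option names are Ollama's; the values are just examples.
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'qwen2.5:14b',
    system: 'You are the Analytical Network.', // system prompt template
    prompt: 'Assess the risks of the proposed design.',
    stream: false,
    options: {
      temperature: 0.7,       // Temperature
      top_p: 0.9,             // Top P
      num_predict: 1000,      // Max Tokens
      presence_penalty: 0.5,  // Presence Penalty
      frequency_penalty: 0.5, // Frequency Penalty
    },
  }),
});
console.log((await res.json()).response);
```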
### Configuration for Different Models

For Large Models (14B+):

- Use more RAM (16+ GB)
- Increase max_tokens for longer responses
- Decrease temperature for more deterministic responses

For Small Models (3B-7B):

- Decrease max_tokens (500-1000)
- Increase temperature for more creative responses
- Use fewer networks simultaneously
## 🔧 Troubleshooting

### Ollama Not Connecting
Problem: "Ollama not available"
Solutions:
- Ensure Ollama is running: ollama serve
- Check if Ollama is accessible: open http://localhost:11434/api/tags in browser
- Restart Ollama
- Check if firewall is blocking port 11434
### Model Not Found
Problem: "Model not found"
Solutions:
- Check model list: ollama list
- Download model: ollama pull <model_name>
- Ensure model name is entered correctly (including tag, e.g.: qwen2.5:14b)
### Slow Performance
Problem: Models respond very slowly
Solutions:
- Use a smaller model (e.g., qwen2.5:7b instead of qwen2.5:14b)
- Decrease max_tokens in settings
- Use GPU acceleration (install CUDA for NVIDIA GPU)
- Close other applications, free up RAM
- Use fewer networks simultaneously
### Memory Errors
Problem: "Out of memory" or browser freezes
Solutions:
- Use a smaller model
- Reduce number of active networks
- Close other browser tabs
- Increase virtual memory (Windows) or swap (Linux/macOS)
### CORS Errors
Problem: CORS errors when accessing Ollama
Solutions:
- Ensure you're opening via http://localhost or local server
- Don't open index.html directly via file://
- Use local web server (see Installation Step 6)
## 📚 Additional Information

### Supported File Formats

- Images: JPG, PNG, GIF, WebP (see the encoding sketch below)
- Documents: PDF, DOCX, TXT, CSV
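For context, multimodal Ollama models accept images as base64 strings in the images array of an /api/generate request; below is a browser-side sketch of that encoding (how ColAI handles uploads internally is an assumption):

```js
// Sketch: encode an uploaded image file as base64, the form accepted by the
// "images" array of Ollama's /api/generate for multimodal models.
function fileToBase64(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // reader.result is a data: URL; strip the "data:<mime>;base64," prefix.
    reader.onload = () => resolve(String(reader.result).split(',')[1]);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}
```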
### Interface Languages

The interface supports multiple languages, including:

- Russian
- English
- Español
- Français
- Deutsch
- And many more...
### Data Storage
All settings and selected model are saved in browser's localStorage. Discussions can also be saved locally.
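A minimal illustration of that kind of persistence (the key names are assumptions, not ColAI's actual keys):

```js
// Sketch of localStorage persistence; key names are illustrative only.
localStorage.setItem('colai.model', 'qwen2.5:14b');
localStorage.setItem('colai.settings', JSON.stringify({ temperature: 0.7 }));

const settings = JSON.parse(localStorage.getItem('colai.settings') ?? '{}');
```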
### Result Export

Discussion results can be exported in the following formats (a download sketch follows):

- JSON
- TXT
- HTML
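A plain vanilla-JS way to implement such an export in the browser (the shape of the discussion object is assumed):

```js
// Illustrative sketch: serialize a discussion to JSON and trigger a download.
function exportDiscussion(discussion, filename = 'discussion.json') {
  const blob = new Blob([JSON.stringify(discussion, null, 2)],
                        { type: 'application/json' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}
```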
## 🛠️ Development

### Project Structure

```
ColAI-master/
├── app.js              # Main application file
├── index.html          # HTML interface
├── styles.css          # Styles
├── darkModeManager.js  # Dark theme management
└── modules/
    ├── framework.js       # Main framework
    ├── networkManager.js  # Network management
    ├── ollamaManager.js   # Ollama manager
    ├── mafiaMode.js       # Mafia mode
    ├── mafiaAiClient.js   # AI client for Mafia
    └── ...                # Other modules
```
### Technologies

- Frontend: Vanilla JavaScript (ES6+ modules)
- Backend: Ollama (local)
- Styling: CSS3 with variables
- Markup: HTML5
## 📝 License
See LICENSE file in project root.
## 🤝 Support
If you encounter problems:
- Check the "Troubleshooting" section
- Ensure Ollama is installed and running
- Check that model is downloaded: ollama list
- Check browser logs (F12 → Console)
## 🎯 Performance Recommendations
For Optimal Performance:
- Use GPU: Install CUDA for NVIDIA GPU
- Choose a Suitable Model:
  - For weak PCs: llama3.2:3b or qwen2.5:7b
  - For medium PCs: qwen2.5:14b
  - For powerful PCs: qwen2.5:32b or llama3.1:70b
- Configure Parameters: Decrease max_tokens for faster responses
- Use Fewer Networks: 2-4 networks work faster than 8
## 🔄 Updates
To update ColAI:
- Download new version
- Replace project files
- Update Ollama models: ollama pull <model_name>
To update models:
```bash
ollama pull qwen2.5:14b  # Updates the model to the latest version
```

Enjoy using ColAI! 🚀
If you have questions or suggestions, create an issue in the project repository.