A multi-platform conversational bot (Discord & Telegram) powered by IO Intelligence API with advanced message processing and shared conversation context management.
- 🤖 LLM Integration: Uses `meta-llama/Llama-3.3-70B-Instruct` via IO Intelligence API
- 🔄 Multi-Platform: Discord and Telegram support with shared conversation context
- 💬 Context Awareness: Maintains conversation history per channel/chat
- ⚡ Async Processing: Queue-based message processing with flow control
- 🔧 Flexible Deployment: Run Discord only, Telegram only, or both simultaneously
- 🛡️ Robust: Error handling, rate limiting, and automatic cleanup
```bash
git clone https://github.com/ionet-official/io-chatbot
cd io-chatbot
```

Copy the example environment file and edit it with your credentials:

```bash
cp .env.example .env
# Edit .env with your tokens
```

Required variables:

```env
# At least one platform token is required
API_KEY=your_intelligence_io_api_key_here

# Platform tokens (provide one or both)
DISCORD_TOKEN=your_discord_bot_token_here
TELEGRAM_TOKEN=your_telegram_bot_token_here
```

Run with Docker:

```bash
docker build -t io-chatbot .
docker run --env-file .env io-chatbot
```

Or run locally:

```bash
pip install -r requirements.txt
python main.py
```
- Go to the Discord Developer Portal
- Create a new application
- Go to the `Bot` section
- Create a bot and copy the token
- Enable `Message Content Intent` in Bot settings
- Invite the bot to your server with appropriate permissions
- Message @BotFather on Telegram
- Send the `/newbot` command
- Follow the instructions to create your bot
- Copy the bot token provided
- Optionally set bot commands with `/setcommands`
- Visit IO Intelligence
- Sign up/login to your account
- Generate an API key
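Once you have a key, you can sanity-check your setup by constructing a chat-completions request. This sketch only builds the payload, it does not send it; the endpoint path and header shape are assumptions based on a typical OpenAI-compatible API, and the real base URL comes from your `API_BASE_URL` setting:

```python
import json
import os

# The endpoint shape below is an assumption (a typical OpenAI-compatible API);
# the real base URL comes from your API_BASE_URL environment variable.
API_BASE_URL = os.getenv("API_BASE_URL", "https://example.invalid/v1")

def build_chat_request(api_key: str, user_message: str) -> dict:
    """Build the URL, headers, and JSON body for a chat-completions call."""
    return {
        "url": f"{API_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": "meta-llama/Llama-3.3-70B-Instruct",
            "messages": [{"role": "user", "content": user_message}],
        },
    }

request = build_chat_request("your_api_key", "Hello!")
print(json.dumps(request["body"], indent=2))
```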
The Discord bot responds to:
- Mentions: `@IO Chat hello there!`
- Direct Messages: Send a DM to the bot
- Replies: Reply to any bot message
Commands:
- `!io help` - Show help information
- `!io status` - Display bot status and uptime
- `!io clear` - Clear conversation context for the current channel
The Telegram bot responds to:
- Private Messages: Send any message directly to the bot in private chat
- Mentions: `@botusername hello there!` in group chats
- Replies: Reply to any bot message in group chats
Commands:
- `/help` - Show help information
- `/status` - Display bot status and uptime
- `/clear` - Clear conversation context for the current chat
- Per-Platform Context: Conversations are managed independently for each platform
- Same LLM Backend: Consistent AI responses across platforms
- Unified Processing: Same message processing pipeline for both platforms
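Conceptually, per-chat context can be kept as a bounded message list keyed by platform and chat ID. The following is an illustrative sketch; the class and method names are hypothetical, not the project's actual `ConversationContext` API:

```python
from collections import deque

class ChannelContext:
    """Bounded per-channel conversation history (hypothetical sketch)."""

    def __init__(self, max_messages: int = 20):
        self.max_messages = max_messages
        # (platform, chat_id) -> deque of {"role": ..., "content": ...}
        self._history: dict[tuple[str, str], deque] = {}

    def add(self, platform: str, chat_id: str, role: str, content: str) -> None:
        key = (platform, chat_id)
        if key not in self._history:
            # deque(maxlen=...) silently evicts the oldest message when full
            self._history[key] = deque(maxlen=self.max_messages)
        self._history[key].append({"role": role, "content": content})

    def messages(self, platform: str, chat_id: str) -> list[dict]:
        return list(self._history.get((platform, chat_id), []))

ctx = ChannelContext(max_messages=3)
for i in range(5):
    ctx.add("discord", "123", "user", f"message {i}")
print(len(ctx.messages("discord", "123")))  # oldest two evicted -> 3
```

Because the key includes the platform, a Discord channel and a Telegram chat with the same ID never share history.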
You can customize bot behavior by adding these to your .env file:
```env
# Context and Processing
MAX_CONTEXT_MESSAGES=20      # Messages to keep in context
MESSAGE_BATCH_SIZE=5         # Messages processed per batch
PROCESSING_TIMEOUT=25.0      # Max seconds to wait for a response
RATE_LIMIT_DELAY=0.5         # Delay between API calls
MAX_RESPONSE_LENGTH=2000     # Max characters in a bot response

# Cleanup
CONTEXT_CLEANUP_INTERVAL=300 # Seconds between context cleanup runs

# Bot Behavior
SYSTEM_PROMPT="Your custom system prompt here" # Override the default system prompt

# Logging
LOG_LEVEL=DEBUG              # Logging level: DEBUG, INFO, WARNING, ERROR, CRITICAL
```

You can customize the bot's behavior by setting a custom system prompt in your `.env` file:
```env
SYSTEM_PROMPT="Your custom system prompt here"
```

For long prompts, use escaped strings with `\n` for line breaks:

```env
SYSTEM_PROMPT="You are a helpful assistant.\nYou should be friendly and professional.\n\nAlways provide accurate information."
```

- LLMClient: Handles API communication with Intelligence.io
- MessageProcessor: Manages queues, locks, and batch processing (shared between platforms)
- ConversationContext: Maintains chat history per channel/chat
- DiscordBot: Discord bot implementation with command handling
- TelegramBot: Telegram bot implementation with command handling
- BotManager: Coordinates both platforms with shared components
- User sends message → Bot detects mention/DM/reply
- Message added to channel-specific queue
- Batch processor collects messages
- Context prepared with conversation history
- LLM generates response
- Response sent to Discord channel
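The queue-and-batch steps above can be sketched with `asyncio` (names are hypothetical; the real implementation lives in `app/message_processor.py`):

```python
import asyncio

BATCH_SIZE = 5  # mirrors the MESSAGE_BATCH_SIZE setting

async def drain_batch(queue: asyncio.Queue, batch_size: int) -> list[str]:
    """Collect up to batch_size queued messages, blocking only for the first."""
    batch = [await queue.get()]  # wait for at least one message
    while len(batch) < batch_size:
        try:
            batch.append(queue.get_nowait())
        except asyncio.QueueEmpty:
            break  # queue drained before the batch filled
    return batch

async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(7):
        queue.put_nowait(f"msg {i}")
    first = await drain_batch(queue, BATCH_SIZE)   # 5 messages
    second = await drain_batch(queue, BATCH_SIZE)  # remaining 2
    return first + second

messages = asyncio.run(main())
print(messages)
```

Batching this way keeps the bot responsive under bursts: one LLM call can see several queued messages instead of paying rate-limit delays per message.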
```
io-chat-bot/
├── main.py                       # Main entry point
├── app/                          # Application modules
│   ├── __init__.py               # Package initialization
│   ├── config.py                 # Configuration and environment variables
│   ├── models.py                 # Data models (Message, ConversationContext)
│   ├── llm_client.py             # LLM API client
│   ├── message_processor.py      # Message processing logic
│   ├── discord.py                # Discord bot implementation
│   └── telegram.py               # Telegram bot implementation
├── tests/                        # Unit tests
│   ├── __init__.py               # Test package initialization
│   ├── conftest.py               # Pytest configuration and fixtures
│   ├── test_models.py            # Tests for models module
│   ├── test_config.py            # Tests for config module
│   ├── test_llm_client.py        # Tests for LLM client
│   ├── test_message_processor.py # Tests for message processor
│   ├── test_discord.py           # Tests for Discord bot
│   └── test_telegram.py          # Tests for Telegram bot
├── requirements.txt              # Python dependencies
├── pytest.ini                    # Pytest configuration
├── Dockerfile                    # Docker container config
├── .dockerignore                 # Docker ignore rules
├── .env.example                  # Environment template
├── .env                          # Your configuration (create this)
└── README.md                     # This file
```
The bot is designed for easy extension:
- Tool Integration: Add tool calling in `app/llm_client.py`
- New Platforms: Create new bot implementations following the `app/discord.py` or `app/telegram.py` patterns
- Custom Commands: Add methods with the `@commands.command()` decorator in the respective bot files
- New Features: Extend `MessageProcessor` in `app/message_processor.py`
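As an illustration of the tool-integration point, a minimal tool registry might look like the following. This is a hedged sketch, not the project's actual `LLMClient` interface; the decorator and dispatcher names are hypothetical:

```python
from typing import Callable

# Registry mapping tool names to plain Python callables (hypothetical design)
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function as a callable tool under the given name."""
    def decorator(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("uptime")
def uptime_tool() -> str:
    return "up 42 minutes"  # placeholder value for the sketch

def dispatch(name: str, **kwargs) -> str:
    """Invoke a registered tool, e.g. when the LLM requests a tool call."""
    if name not in TOOLS:
        return f"unknown tool: {name}"
    return TOOLS[name](**kwargs)

print(dispatch("uptime"))
```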
The project includes comprehensive unit tests following Python best practices:
```bash
# Run all tests
pytest

# Run tests with coverage report
pytest --cov=app --cov-report=html

# Run a specific test file
pytest tests/test_models.py

# Run tests matching a pattern
pytest -k "test_message"

# Run tests with verbose output
pytest -v

# Run tests and generate a terminal coverage report
pytest --cov=app --cov-report=term-missing
```

Test Coverage:
- Models: Data classes and business logic
- Config: Environment variable handling
- LLM Client: API communication and error handling
- Message Processor: Queue management and message flow
- Discord Bot: Command handling and message processing
- Telegram Bot: Update handling and response generation
Test Features:
- Async Testing: Full support for async/await patterns
- Mocking: Comprehensive mocking of external dependencies
- Fixtures: Reusable test data and setup
- Coverage: 80%+ code coverage requirement
- CI Ready: Configured for continuous integration
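An async unit test in this style might look like the following. This is illustrative only: the tested coroutine is a hypothetical stand-in, and the real suite drives the event loop through pytest's async support rather than calling the test directly:

```python
import asyncio

async def generate_reply(prompt: str) -> str:
    """Stand-in for an async LLM call, used only to demonstrate the pattern."""
    await asyncio.sleep(0)  # simulate awaiting I/O
    return f"echo: {prompt}"

def test_generate_reply() -> None:
    # pytest-asyncio would normally provide the event loop;
    # plain asyncio.run works for a self-contained example
    reply = asyncio.run(generate_reply("hello"))
    assert reply == "echo: hello"

test_generate_reply()
print("test passed")
```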
Logs are written to console. Configure log level via environment variable:
```env
# In your .env file
LOG_LEVEL=INFO  # Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
```

Bot doesn't respond:
- Check Discord permissions (`Read Messages`, `Send Messages`)
- Verify `Message Content Intent` is enabled
- Check logs for API errors
API errors:
- Verify `API_KEY` is correct
- Check the `API_BASE_URL` endpoint
- Ensure sufficient API credits
Installation issues:
- Python 3.8+ required
- Install with: `pip install -r requirements.txt`
Docker issues:
- Ensure Docker is installed
- Check that the `.env` file exists and has correct tokens
- View logs: `docker logs <container_id>`
Set logging to DEBUG for detailed information about message processing:
```env
# In your .env file
LOG_LEVEL=DEBUG
```

Then restart the bot to apply the new log level.
Debug logs include:
- Message queue operations and sizes
- Processing task lifecycle
- LLM API requests and responses
- Context management and cleanup
- Message flow and timing
- Response generation details
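Internally, the log level can be applied at startup roughly like this. This is a sketch using the standard-library `logging` module; the project's exact setup may differ:

```python
import logging
import os

def resolve_level(name: str) -> int:
    """Map a LOG_LEVEL string to a logging constant, defaulting to INFO."""
    return getattr(logging, name.upper(), logging.INFO)

# Apply the level from the environment once at startup
logging.basicConfig(
    level=resolve_level(os.getenv("LOG_LEVEL", "INFO")),
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

logger = logging.getLogger("io-chatbot")
logger.debug("message queue size: %d", 0)  # emitted only at DEBUG level
```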
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT License - see LICENSE file for details.
For issues and questions:
- Check the troubleshooting section
- Open an issue on GitHub
Powered by IO Intelligence 🚀