Learn how to integrate nano-claw with various chat platforms and services.
Set up nano-claw as a Telegram bot to chat with your AI assistant on the go.
- A Telegram account
- nano-claw installed and configured
- Basic nano-claw configuration (API key set up)
- Open Telegram and search for @BotFather
- Start a chat and send `/newbot`
- Follow the prompts to choose a name and username for your bot
- BotFather will provide you with a token like:
123456789:ABCdefGHIjklMNOpqrsTUVwxyz
Example conversation with BotFather:
You: /newbot
BotFather: Alright, a new bot. How are we going to call it? Please choose a name for your bot.
You: My nano-claw Bot
BotFather: Good. Now let's choose a username for your bot. It must end in `bot`.
You: my_nanoclaw_bot
BotFather: Done! Congratulations on your new bot. You will find it at t.me/my_nanoclaw_bot.
You can now add a description...
Here is your token: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz
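Telegram bot tokens have the shape `<numeric bot ID>:<secret>`. If you want to sanity-check a token before pasting it into your config, here is a minimal sketch (the helper name is illustrative, not part of nano-claw):

```python
import re


def looks_like_bot_token(token: str) -> bool:
    """Rough sanity check for a Telegram bot token: a numeric bot ID,
    a colon, then an alphanumeric secret (dashes/underscores allowed)."""
    return re.fullmatch(r"\d+:[A-Za-z0-9_-]{20,}", token) is not None
```

This only catches obvious copy-paste mistakes (truncated tokens, stray whitespace); it does not prove the token is live.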
Edit ~/.nano-claw/config.json and add the Telegram configuration:
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-YOUR_KEY_HERE"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "temperature": 0.7,
      "maxTokens": 4096
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "123456789:ABCdefGHIjklMNOpqrsTUVwxyz",
      "allowFrom": ["your_telegram_username"]
    }
  }
}
```

Configuration Options:

- `enabled`: Set to `true` to activate the Telegram channel
- `token`: Your bot token from BotFather
- `allowFrom`: (Optional) Array of Telegram usernames that can use the bot. Leave empty for public access.
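The `allowFrom` rule above (empty list means public access) boils down to a one-line check. A sketch of the logic, not nano-claw's actual implementation:

```python
def is_allowed(username: str, allow_from: list[str]) -> bool:
    """Empty allowFrom means the bot is public; otherwise only
    listed Telegram usernames may use it."""
    return not allow_from or username in allow_from
```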
The gateway server manages all chat channel connections:
```bash
nano-claw gateway
```

Expected Output:

```
🚀 Starting nano-claw Gateway Server...
✓ Telegram channel initialized
Bot Username: @my_nanoclaw_bot
Allowed Users: your_telegram_username
🌐 Gateway server running
Press Ctrl+C to stop
```
- Open Telegram
- Search for your bot by username (e.g., `@my_nanoclaw_bot`)
- Start a conversation with `/start`
- Send any message to chat with nano-claw!
Example Conversation:
You: /start
Bot: Hello! I'm your nano-claw assistant. How can I help you today?
You: What's the weather like today?
Bot: I don't have real-time internet access, but I can help you find weather
information. What's your location?
You: Can you write a Python function for me?
Bot: Of course! What should the function do?
You: Calculate the sum of a list of numbers
Bot: Here's a Python function that calculates the sum of a list:
```python
def sum_list(numbers: list[float]) -> float:
    """Calculate the sum of numbers in a list."""
    return sum(numbers)


# Example usage:
my_numbers = [1, 2, 3, 4, 5]
result = sum_list(my_numbers)
print(result)  # Output: 15
```
- Use `allowFrom`: Restrict bot access to specific users
- Keep Token Secret: Never commit your bot token to version control
- Use Environment Variables: Store the token in an environment variable:

```bash
export TELEGRAM_BOT_TOKEN="123456789:ABCdefGHIjklMNOpqrsTUVwxyz"
nano-claw gateway
```

Connect nano-claw to Discord to create a server assistant.
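If the gateway doesn't pick the variable up on its own, a small launcher script can copy it into the config file at startup, so the token never has to live in version control. This is a hypothetical sketch: the helper and the config layout assume the Telegram example above, and none of it is nano-claw's own API.

```python
import json
import os
from pathlib import Path


def inject_token(config_path: Path, token: str) -> None:
    """Write a Telegram bot token from the environment into the
    nano-claw config file, creating the channel section if needed."""
    config = json.loads(config_path.read_text())
    config.setdefault("channels", {}).setdefault("telegram", {})["token"] = token
    config_path.write_text(json.dumps(config, indent=2))


# Usage (fails loudly if the variable is unset):
# inject_token(Path.home() / ".nano-claw" / "config.json",
#              os.environ["TELEGRAM_BOT_TOKEN"])
```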
- A Discord account
- Administrator access to a Discord server
- nano-claw installed and configured
- Go to Discord Developer Portal
- Click "New Application"
- Give it a name (e.g., "nano-claw Bot")
- Click "Create"
- In your application, go to the "Bot" section
- Click "Add Bot"
- Confirm by clicking "Yes, do it!"
- Under "Token", click "Copy" to copy your bot token
- Enable these Privileged Gateway Intents:
- Presence Intent
- Server Members Intent
- Message Content Intent
- Go to "OAuth2" → "URL Generator"
- Select scopes:
- ✅ `bot`
- ✅ `applications.commands`
- Select bot permissions:
- ✅ Read Messages/View Channels
- ✅ Send Messages
- ✅ Read Message History
- Copy the generated URL and open it in your browser
- Select your server and authorize the bot
Edit ~/.nano-claw/config.json:
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-YOUR_KEY_HERE"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_DISCORD_BOT_TOKEN",
      "allowFrom": ["USER_ID_1", "USER_ID_2"]
    }
  }
}
```

Getting User IDs:
- Enable Developer Mode in Discord (User Settings → Advanced → Developer Mode)
- Right-click on a user and select "Copy ID"
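Discord IDs are snowflakes: 64-bit integers whose top 42 bits encode a creation timestamp in milliseconds since the Discord epoch (2015-01-01 UTC). A quick way to sanity-check a copied ID is to extract that timestamp:

```python
DISCORD_EPOCH_MS = 1_420_070_400_000  # 2015-01-01T00:00:00 UTC


def snowflake_timestamp_ms(snowflake: int) -> int:
    """Return the Unix timestamp in milliseconds embedded in a
    Discord snowflake (the top 42 bits are ms since the Discord epoch)."""
    return (snowflake >> 22) + DISCORD_EPOCH_MS
```

An implausible result (e.g., a date before 2015) means the value you pasted into `allowFrom` is not a valid Discord ID.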
```bash
nano-claw gateway
```

Expected Output:

```
🚀 Starting nano-claw Gateway Server...
✓ Discord channel initialized
Bot: nano-claw Bot#1234
Servers: 1
🌐 Gateway server running
Press Ctrl+C to stop
```
In your Discord server:
- Mention the bot: `@nano-claw help me with something`
- Direct message: Send a DM to the bot
Example Interaction:
User: @nano-claw What is TypeScript?
nano-claw Bot: TypeScript is a strongly-typed programming language
that builds on JavaScript. It adds optional static typing, which
helps catch errors during development and provides better IDE support.
User: @nano-claw Can you create a simple Express server example?
nano-claw Bot: Sure! Here's a simple Express server in TypeScript:
...
Use multiple LLM providers for redundancy and flexibility.
Configure multiple providers in ~/.nano-claw/config.json:
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-YOUR_KEY_HERE"
    },
    "anthropic": {
      "apiKey": "sk-ant-YOUR_KEY_HERE"
    },
    "openai": {
      "apiKey": "sk-YOUR_KEY_HERE"
    },
    "groq": {
      "apiKey": "gsk_YOUR_KEY_HERE"
    },
    "deepseek": {
      "apiKey": "YOUR_KEY_HERE"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "temperature": 0.7,
      "maxTokens": 4096
    }
  }
}
```

nano-claw automatically selects the correct provider based on the model name:
```bash
# Uses OpenRouter
nano-claw agent -m "Hello" --model "anthropic/claude-opus-4-5"

# Uses Anthropic directly
nano-claw agent -m "Hello" --model "claude-opus-4-5"

# Uses OpenAI
nano-claw agent -m "Hello" --model "gpt-4-turbo"

# Uses Groq
nano-claw agent -m "Hello" --model "mixtral-8x7b-32768"
```

| Model Name Pattern | Provider Used |
|---|---|
| `anthropic/*` | OpenRouter |
| `openai/*` | OpenRouter |
| `claude-*` | Anthropic |
| `gpt-*` | OpenAI |
| `mixtral-*` | Groq |
| `deepseek-*` | DeepSeek |
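The table above amounts to a prefix match on the model name. A sketch of that routing logic (illustrative only, not nano-claw's actual code):

```python
def route_provider(model: str) -> str:
    """Map a model name to a provider, mirroring the routing table."""
    if "/" in model:
        # Namespaced names like "anthropic/claude-opus-4-5" go via OpenRouter
        return "openrouter"
    prefixes = {
        "claude-": "anthropic",
        "gpt-": "openai",
        "mixtral-": "groq",
        "deepseek-": "deepseek",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"No provider configured for model: {model}")
```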
- Redundancy: Fallback if one provider is down
- Cost Optimization: Use cheaper models for simple tasks
- Feature Access: Access provider-specific models
- Testing: Compare different models easily
Run nano-claw with local LLM models using vLLM for complete privacy and control.
- Python 3.8+
- CUDA-compatible GPU (recommended)
- Sufficient VRAM (depends on model size)
- Docker (optional, for easier setup)
```bash
docker run --gpus all \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-7B-Instruct-v0.2
```

For larger models:

```bash
# Llama 3 8B
docker run --gpus all \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model meta-llama/Meta-Llama-3-8B-Instruct

# Mixtral 8x7B (requires ~90GB VRAM)
docker run --gpus all \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model mistralai/Mixtral-8x7B-Instruct-v0.1
```

Alternatively, install vLLM with pip and start the server directly:

```bash
pip install vllm

python -m vllm.entrypoints.openai.api_server \
  --model mistralai/Mistral-7B-Instruct-v0.2 \
  --port 8000
```

Edit ~/.nano-claw/config.json:
```json
{
  "providers": {
    "vllm": {
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "mistralai/Mistral-7B-Instruct-v0.2",
      "temperature": 0.7,
      "maxTokens": 2048
    }
  }
}
```

```bash
# Check if vLLM is running
curl http://localhost:8000/v1/models

# Use with nano-claw
nano-claw agent -m "Hello! Can you introduce yourself?"
```

Expected Output:
```
🤖 Agent: Hello! I'm an AI assistant running on your local machine using the
Mistral-7B model. I can help you with various tasks while keeping all your data
private and secure. How can I assist you today?
```
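Since vLLM speaks the OpenAI chat-completions protocol, you can also hit the endpoint directly from Python while debugging, using only the standard library. A minimal sketch, assuming the server above is listening on localhost:8000 (the helper names are illustrative):

```python
import json
import urllib.request


def build_chat_request(
    prompt: str, base_url: str = "http://localhost:8000/v1"
) -> urllib.request.Request:
    """Build a single chat-completion request for a vLLM
    OpenAI-compatible server."""
    payload = {
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


def chat(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

This is handy for confirming the server works before pointing nano-claw at it.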
- GPU Selection: Use specific GPUs if you have multiple:

  ```bash
  CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server ...
  ```

- Batch Size: Adjust for better throughput:

  ```bash
  python -m vllm.entrypoints.openai.api_server \
    --model mistralai/Mistral-7B-Instruct-v0.2 \
    --max-num-seqs 256
  ```

- Quantization: Use quantized models for less VRAM:

  ```bash
  python -m vllm.entrypoints.openai.api_server \
    --model TheBloke/Mistral-7B-Instruct-v0.2-GPTQ \
    --quantization gptq
  ```
| Model | Size | VRAM | Best For |
|---|---|---|---|
| Mistral-7B | 7B | ~16GB | General use |
| Llama-3-8B | 8B | ~18GB | Reasoning |
| Mixtral-8x7B | 47B | ~90GB | Complex tasks |
| CodeLlama-7B | 7B | ~16GB | Coding |
| Phi-2 | 2.7B | ~8GB | Fast responses |
- ✅ Complete Privacy: All data stays on your machine
- ✅ No API Costs: Free inference after initial setup
- ✅ No Rate Limits: Use as much as you want
- ✅ Offline Operation: Works without internet
- ✅ Customization: Fine-tune models for your needs
Out of Memory Error:
- Use a smaller model
- Enable quantization
- Reduce max sequence length
- Close other GPU applications
Slow Response:
- Check GPU utilization: `nvidia-smi`
- Increase batch size
- Use a smaller model
- Consider using CPU offloading
Use everything together for maximum flexibility:
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    },
    "anthropic": {
      "apiKey": "sk-ant-xxx"
    },
    "vllm": {
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "temperature": 0.7,
      "maxTokens": 4096
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "TELEGRAM_TOKEN"
    },
    "discord": {
      "enabled": true,
      "token": "DISCORD_TOKEN"
    }
  }
}
```

This setup allows you to:
- Chat via Telegram and Discord
- Use cloud models for complex tasks
- Fall back to local models for privacy-sensitive tasks
- Have full redundancy and flexibility
- Advanced Features - Custom skills and automation
- Use Case Scenarios - Real-world examples
- Configuration Guide - Complete reference
- Basic Usage - Getting started guide