Sync your fork regularly — the lab gets updated.
Build a Telegram bot that lets users interact with the LMS backend through chat. Users should be able to check system health, browse labs and scores, and ask questions in plain language. The bot should use an LLM to understand what the user wants and fetch the right data. Deploy it alongside the existing backend on the VM.
This is what a customer might tell you. Your job is to turn it into a working product using an AI coding agent (Qwen Code) as your development partner.
┌──────────────────────────────────────────────────────────────┐
│ │
│ ┌──────────────┐ ┌──────────────────────────────────┐ │
│ │ Telegram │────▶│ Your Bot │ │
│ │ User │◀────│ (aiogram / python-telegram-bot) │ │
│ └──────────────┘ └──────┬───────────────────────────┘ │
│ │ │
│ │ slash commands + plain text │
│ ├───────▶ /start, /help │
│ ├───────▶ /health, /labs │
│ ├───────▶ intent router ──▶ LLM │
│ │ │ │
│ │ ▼ │
│ ┌──────────────┐ ┌──────┴───────┐ tools/actions │
│ │ Docker │ │ LMS Backend │◀───── GET /items │
│ │ Compose │ │ (FastAPI) │◀───── GET /analytics │
│ │ │ │ + PostgreSQL│◀───── POST /sync │
│ └──────────────┘ └──────────────┘ │
└──────────────────────────────────────────────────────────────┘
- Testable handler architecture — handlers work without Telegram
- CLI test mode: `cd bot && uv run bot.py --test "/command"` prints the response to stdout
- Slash commands:
  - `/start` — welcome message
  - `/help` — lists all available commands
  - `/health` — calls backend, reports up/down status
  - `/labs` — lists available labs
  - `/scores <lab>` — per-task pass rates
- Error handling — backend down produces a friendly message, not a crash
- Natural language intent routing — plain text interpreted by LLM
- All 9 backend endpoints wrapped as LLM tools
- Inline keyboard buttons for common actions
- Multi-step reasoning (LLM chains multiple API calls)
- Rich formatting (tables, charts as images)
- Response caching
- Conversation context (multi-turn)
- Bot containerized with Dockerfile
- Added as a service in `docker-compose.yml`
- Deployed and running on the VM
- README documents deployment
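The "testable handler architecture" and CLI test mode requirements above can be sketched as handlers that take and return plain strings, so they run without Telegram at all. This is a minimal illustration, not the required design: the names (`dispatch`, `handle_start`, `ROUTES`) and the stubbed responses are hypothetical.

```python
# Sketch: Telegram-agnostic handlers plus a --test CLI entry point.
# All names here (handle_start, handle_health, ROUTES, dispatch) are
# illustrative; the real bot would call the backend instead of stubbing.
import argparse
import asyncio


async def handle_start(_: str) -> str:
    return "Welcome to the LMS bot! Try /help for a list of commands."


async def handle_health(_: str) -> str:
    # The real handler would query the backend; stubbed for the sketch.
    return "backend: up"


ROUTES = {"/start": handle_start, "/health": handle_health}


async def dispatch(text: str) -> str:
    """Route a message to a handler; unknown input gets a friendly reply."""
    command = text.split()[0] if text else ""
    handler = ROUTES.get(command)
    if handler is None:
        return f"Unknown command: {command!r}. Try /help."
    return await handler(text)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--test", metavar="TEXT", help="run one command and exit")
    args, _unknown = parser.parse_known_args()
    if args.test:
        print(asyncio.run(dispatch(args.test)))
```

Because `dispatch` never touches the Telegram API, the same function can back both the aiogram message handlers and the `uv run bot.py --test "/command"` mode.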
Notice the progression above: product brief (vague customer ask) → prioritized requirements (structured) → task specifications (precise deliverables + acceptance criteria). This is how engineering work flows.
You are not following step-by-step instructions — you are building a product with an AI coding agent. The learning comes from planning, building, testing, and debugging iteratively.
By the end of this lab, you should be able to say:
- I turned a vague product brief into a working Telegram bot.
- I can ask it questions in plain language and it fetches the right data.
- I used an AI coding agent to plan and build the whole thing.
- Complete the lab setup
Note: First time in this course? Do the full setup instead.
- Plan and Scaffold — P0: project structure + `--test` mode
- Backend Integration — P0: slash commands + real data
- Intent-Based Natural Language Routing — P1: LLM tool use
- Containerize and Document — P3: containerize + deploy
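For the P1 milestone, one common pattern is to describe each backend endpoint as a tool spec for an OpenAI-compatible chat-completions API and map the model's tool calls back onto HTTP requests. The sketch below assumes that format; the tool names (`list_labs`, `get_analytics`) and their stubbed results are illustrative, not the real backend contract.

```python
# Sketch: backend endpoints exposed as LLM tool specs (OpenAI-compatible
# "function" tools). Tool names and return payloads are hypothetical.
import json

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "list_labs",
            "description": "List available labs (GET /items).",
            "parameters": {"type": "object", "properties": {}},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_analytics",
            "description": "Per-task pass rates for one lab (GET /analytics).",
            "parameters": {
                "type": "object",
                "properties": {"lab": {"type": "string"}},
                "required": ["lab"],
            },
        },
    },
]


def dispatch_tool(name: str, arguments: str) -> str:
    """Map a model-issued tool call onto a backend request (stubbed here)."""
    args = json.loads(arguments or "{}")
    if name == "list_labs":
        return json.dumps(["lab-1", "lab-2"])  # stand-in for GET /items
    if name == "get_analytics":
        return json.dumps({"lab": args["lab"], "pass_rate": 0.83})
    return json.dumps({"error": f"unknown tool {name}"})
```

In the real bot, the results of `dispatch_tool` would be fed back to the LLM as tool messages, which is also what enables the P2 multi-step reasoning (the model chains several calls before answering).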
Before deploying, ensure you have:
- `.env.docker.secret` with backend configuration
- `.env.bot.secret` with bot credentials (`BOT_TOKEN`, `LMS_API_KEY`, `LLM_API_KEY`, `LLM_API_BASE_URL`, `LLM_API_MODEL`)
The bot requires these environment variables (set in `.env.docker.secret` for Docker):

| Variable | Description | Example |
|---|---|---|
| `BOT_TOKEN` | Telegram bot token from @BotFather | `123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11` |
| `LMS_API_BASE_URL` | Backend URL (Docker uses service name) | `http://backend:8000` |
| `LMS_API_KEY` | LMS API authentication key | `my-secret-api-key` |
| `LLM_API_BASE_URL` | LLM API endpoint | `http://host.docker.internal:42005/v1` |
| `LLM_API_KEY` | LLM API authentication key | `my-secret-api-key` |
| `LLM_API_MODEL` | LLM model name | `coder-model` |
Note on Docker networking: Inside Docker, `localhost` refers to the container itself. The bot must use `http://backend:8000` to reach the backend (Docker service name), and `http://host.docker.internal:42005` to reach the LLM proxy running on the host.
- Stop any running bot process (from the previous `nohup` deployment):

  ```shell
  cd ~/se-toolkit-lab-7
  pkill -f "bot.py" 2>/dev/null
  ```
- Build and start all services:

  ```shell
  docker compose --env-file .env.docker.secret up --build -d
  ```
- Verify services are running:

  ```shell
  docker compose --env-file .env.docker.secret ps
  ```

  You should see `bot`, `backend`, `postgres`, `caddy`, and `pgadmin`, all with status "running".
- Check bot logs:

  ```shell
  docker compose --env-file .env.docker.secret logs bot --tail 30
  ```

  Look for:

  - "Starting Telegram bot..." — bot started
  - "HTTP Request: POST .../getUpdates" — bot is polling
  - No Python tracebacks
- Backend health check:

  ```shell
  curl -sf http://localhost:42002/docs
  ```

  Should return HTML (Swagger UI).
- Test in Telegram:

  - Send `/start` — should receive a welcome message with inline keyboard buttons
  - Send `/health` — should show backend status
  - Send "what labs are available?" — should list labs (LLM-powered)
  - Send "which lab has the lowest pass rate?" — should compare all labs
| Problem | Solution |
|---|---|
| Bot container restarting | Check logs: `docker compose logs bot`. Usually a missing env var or an import error. |
| `/health` fails | Ensure `LMS_API_BASE_URL=http://backend:8000` (not `localhost`). |
| LLM queries fail | Use `host.docker.internal` in `LLM_API_BASE_URL` to reach the host network. |
| "BOT_TOKEN is required" | Add `BOT_TOKEN` to `.env.docker.secret`. |
| Build fails at `uv sync` | Ensure `uv.lock` is copied in the Dockerfile. |
After pulling new code:
```shell
cd ~/se-toolkit-lab-7
git pull
docker compose --env-file .env.docker.secret up --build -d
```