sweetlife999/se-toolkit-lab-7
Lab 7 — Build a Client with an AI Coding Agent

Sync your fork regularly — the lab gets updated.

Product brief

Build a Telegram bot that lets users interact with the LMS backend through chat. Users should be able to check system health, browse labs and scores, and ask questions in plain language. The bot should use an LLM to understand what the user wants and fetch the right data. Deploy it alongside the existing backend on the VM.

This is what a customer might tell you. Your job is to turn it into a working product using an AI coding agent (Qwen Code) as your development partner.

┌──────────────────────────────────────────────────────────────┐
│                                                              │
│  ┌──────────────┐     ┌──────────────────────────────────┐   │
│  │  Telegram    │────▶│  Your Bot                        │   │
│  │  User        │◀────│  (aiogram / python-telegram-bot) │   │
│  └──────────────┘     └──────┬───────────────────────────┘   │
│                              │                               │
│                              │ slash commands + plain text    │
│                              ├───────▶ /start, /help         │
│                              ├───────▶ /health, /labs        │
│                              ├───────▶ intent router ──▶ LLM │
│                              │                    │          │
│                              │                    ▼          │
│  ┌──────────────┐     ┌──────┴───────┐    tools/actions      │
│  │  Docker      │     │  LMS Backend │◀───── GET /items      │
│  │  Compose     │     │  (FastAPI)   │◀───── GET /analytics  │
│  │              │     │  + PostgreSQL│◀───── POST /sync      │
│  └──────────────┘     └──────────────┘                       │
└──────────────────────────────────────────────────────────────┘

Requirements

P0 — Must have

  1. Testable handler architecture — handlers work without Telegram
  2. CLI test mode: cd bot && uv run bot.py --test "/command" prints response to stdout
  3. /start — welcome message
  4. /help — lists all available commands
  5. /health — calls backend, reports up/down status
  6. /labs — lists available labs
  7. /scores <lab> — per-task pass rates
  8. Error handling — backend down produces a friendly message, not a crash
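The first two P0 items (handlers that work without Telegram, plus a `--test` CLI mode) can be sketched roughly as below. This is an illustrative assumption about how you might structure it, not the lab's actual code; `handle_command` and the canned replies are made up:

```python
# Illustrative sketch only: a handler layer that never imports Telegram,
# so it can be unit-tested and driven from the CLI. All names are assumptions.
import argparse


def handle_command(text: str) -> str:
    """Route one slash command to a plain-string reply."""
    if text == "/start":
        return "Welcome to the LMS bot!"
    if text == "/help":
        return "Commands: /start /help /health /labs /scores <lab>"
    if text == "/health":
        # A real handler would call the backend here and catch connection
        # errors, so a downed backend yields a friendly message, not a crash.
        return "Backend: up"
    return "Unknown command. Try /help."


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--test", metavar="CMD",
                        help="run one command, print the reply, exit")
    args = parser.parse_args()
    if args.test:
        print(handle_command(args.test))
        return
    # Otherwise: build the aiogram Dispatcher here and start polling, with
    # each Telegram handler delegating to handle_command().


if __name__ == "__main__":
    main()
```

With this split, `uv run bot.py --test "/help"` exercises the same code path as a live Telegram message.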

P1 — Should have

  1. Natural language intent routing — plain text interpreted by LLM
  2. All 9 backend endpoints wrapped as LLM tools
  3. Inline keyboard buttons for common actions
  4. Multi-step reasoning (LLM chains multiple API calls)
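One common way to approach P1 items 1–2 is to describe each backend endpoint as an OpenAI-style tool definition and dispatch the model's tool calls back onto HTTP requests. A rough sketch, where the tool name, endpoint path, and stubbed response are all assumptions:

```python
# Sketch: expose one backend endpoint as an LLM tool. Names and paths are
# illustrative; the real bot would wrap all nine endpoints this way.
GET_LABS_TOOL = {
    "type": "function",
    "function": {
        "name": "get_labs",
        "description": "List available labs from the LMS backend.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}


def dispatch_tool_call(name: str, arguments: dict) -> str:
    """Map a tool call chosen by the LLM back onto a backend request."""
    if name == "get_labs":
        # Real code would issue the HTTP call, e.g. with httpx, and return
        # the response body for the LLM to summarize.
        return '["lab-1", "lab-2", "lab-7"]'
    raise ValueError(f"unknown tool: {name}")
```

Multi-step reasoning (P1 item 4) then falls out of a loop: feed each tool result back to the model and repeat until it stops requesting tools.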

P2 — Nice to have

  1. Rich formatting (tables, charts as images)
  2. Response caching
  3. Conversation context (multi-turn)

P3 — Deployment

  1. Bot containerized with Dockerfile
  2. Added as service in docker-compose.yml
  3. Deployed and running on VM
  4. README documents deployment
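A minimal Dockerfile for the bot service might look like the sketch below. This is a hypothetical layout, not the lab's canonical file; the base-image copy line follows uv's documented install pattern. Note that `uv.lock` is copied before `uv sync`, which is the build failure mentioned in Troubleshooting:

```dockerfile
# Hypothetical sketch only; adapt paths to your project layout.
FROM python:3.12-slim
# uv's documented pattern: copy the static binaries from its official image.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
# Copy the lockfile alongside pyproject.toml so `uv sync` can resolve
# pinned dependencies (a missing uv.lock here fails the build).
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev
COPY . .
CMD ["uv", "run", "bot.py"]
```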

Learning advice

Notice the progression above: product brief (vague customer ask) → prioritized requirements (structured) → task specifications (precise deliverables + acceptance criteria). This is how engineering work flows.

You are not following step-by-step instructions — you are building a product with an AI coding agent. The learning comes from planning, building, testing, and debugging iteratively.

Learning outcomes

By the end of this lab, you should be able to say:

  1. I turned a vague product brief into a working Telegram bot.
  2. I can ask it questions in plain language and it fetches the right data.
  3. I used an AI coding agent to plan and build the whole thing.

Tasks

Prerequisites

  1. Complete the lab setup

Note: First time in this course? Do the full setup instead.

Required

  1. Plan and Scaffold — P0: project structure + --test mode
  2. Backend Integration — P0: slash commands + real data
  3. Intent-Based Natural Language Routing — P1: LLM tool use
  4. Containerize and Document — P3: containerize + deploy

Optional

  1. Flutter Web Chatbot

Deploy

Prerequisites

Before deploying, ensure you have:

  • .env.docker.secret with backend configuration
  • .env.bot.secret with bot credentials (BOT_TOKEN, LMS_API_KEY, LLM_API_KEY, LLM_API_BASE_URL, LLM_API_MODEL)

Environment variables

The bot requires these environment variables (set in .env.docker.secret for Docker):

Variable          Description                              Example
BOT_TOKEN         Telegram bot token from @BotFather       123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11
LMS_API_BASE_URL  Backend URL (Docker uses service name)   http://backend:8000
LMS_API_KEY       LMS API authentication key               my-secret-api-key
LLM_API_BASE_URL  LLM API endpoint                         http://host.docker.internal:42005/v1
LLM_API_KEY       LLM API authentication key               my-secret-api-key
LLM_API_MODEL     LLM model name                           coder-model

Note on Docker networking: Inside Docker, localhost refers to the container itself. The bot must use http://backend:8000 to reach the backend (Docker service name), and http://host.docker.internal:42005 to reach the LLM proxy running on the host.
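In docker-compose.yml terms, the networking note above usually translates into something like the fragment below. The service layout and the `host-gateway` alias are assumptions about your compose file, not prescribed by the lab:

```yaml
# Illustrative fragment only; merge into your existing docker-compose.yml.
services:
  bot:
    build: ./bot
    env_file: .env.bot.secret
    environment:
      LMS_API_BASE_URL: http://backend:8000   # service name, never localhost
    extra_hosts:
      - "host.docker.internal:host-gateway"   # needed on Linux hosts
    depends_on:
      - backend
```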

Deploy commands

  1. Stop any running bot process (from previous nohup deployment):

    cd ~/se-toolkit-lab-7
    pkill -f "bot.py" 2>/dev/null
  2. Build and start all services:

    docker compose --env-file .env.docker.secret up --build -d
  3. Verify services are running:

    docker compose --env-file .env.docker.secret ps

    You should see bot, backend, postgres, caddy, and pgadmin all with status "running".

  4. Check bot logs:

    docker compose --env-file .env.docker.secret logs bot --tail 30

    Look for:

    • "Starting Telegram bot..." — bot started
    • "HTTP Request: POST .../getUpdates" — bot is polling
    • No Python tracebacks

Verify deployment

  1. Backend health check:

    curl -sf http://localhost:42002/docs

    Should return HTML (Swagger UI).

  2. Test in Telegram:

    • Send /start — should receive welcome message with inline keyboard buttons
    • Send /health — should show backend status
    • Send "what labs are available?" — should list labs (LLM-powered)
    • Send "which lab has the lowest pass rate?" — should compare all labs

Troubleshooting

Problem                    Solution
Bot container restarting   Check logs: docker compose logs bot. Usually a missing env var or an import error.
/health fails              Ensure LMS_API_BASE_URL=http://backend:8000 (not localhost).
LLM queries fail           Use host.docker.internal in LLM_API_BASE_URL to reach the host network.
"BOT_TOKEN is required"    Add BOT_TOKEN to .env.docker.secret.
Build fails at uv sync     Ensure uv.lock is copied in the Dockerfile.

Update deployment

After pulling new code:

cd ~/se-toolkit-lab-7
git pull
docker compose --env-file .env.docker.secret up --build -d
