This project is the foundation of an AI-powered chatbot built using a Large Language Model (LLM) API.
By Day 7, it will evolve into a command-line chatbot with:
- Token usage tracking
- Cost awareness
- Structured JSON outputs
- Optimized prompt handling
- Environment-based configuration
The goal is to deeply understand how LLMs work in real-world applications, including API interaction, context management, and cost control.
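Cost control, for instance, comes down to simple arithmetic over token counts. A minimal sketch, using made-up per-1K-token prices (real prices vary by provider and model, so treat `PRICES` as a placeholder):

```python
# Hypothetical per-1K-token prices in USD; real prices vary by provider/model.
PRICES = {"example-model": {"input": 0.00015, "output": 0.0006}}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]
```

With these illustrative prices, a request with 2,000 input and 500 output tokens costs roughly $0.0006.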
The current provider is Google Gemini.
(The provider may later change to OpenAI or Anthropic for comparison and experimentation.)
- Python 3.10+
- OpenAI / Gemini SDK (provider-dependent)
- python-dotenv for environment variables
- Command-line interface (CLI)
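Environment-based configuration with python-dotenv typically amounts to loading `.env` into `os.environ` and reading keys from there. A minimal sketch (the dotenv call is shown as a comment so the snippet stays dependency-free; the variable names match the `.env` entries below):

```python
import os

# python-dotenv (listed in the dependencies) would normally populate
# os.environ from the .env file at startup:
#   from dotenv import load_dotenv
#   load_dotenv()

ENV_VARS = {"openai": "OPENAI_API_KEY", "gemini": "GOOGLE_API_KEY"}

def get_api_key(provider: str = "openai") -> str:
    """Read the provider's API key from the environment, failing loudly."""
    var = ENV_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; add it to your .env file")
    return key
```

Failing loudly at startup beats a confusing authentication error deep inside an SDK call.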
Clone the repository and set up a virtual environment:

```bash
git clone https://github.com/wandilemawelela/ai-engineer-project-1-llm-chatbot.git
cd ai-engineer-project-1-llm-chatbot
python -m venv venv
source venv/bin/activate  # Linux / macOS
pip install -r requirements.txt
cp .env.example .env
```

Open `.env` and add your API key:

```
OPENAI_API_KEY=your_api_key_here
```

or (if using Gemini later):

```
GOOGLE_API_KEY=your_api_key_here
```

- Do NOT commit `.env` to GitHub
- Ensure `.env` is listed in `.gitignore`
- API keys should never be hardcoded in source files

Example `.gitignore` entry:

```
.env
venv/
```

Run the chatbot:

```bash
python main.py
```

By the end of this project, the chatbot will include:
- LLM API integration
- Token usage tracking
- Cost estimation per request
- Structured JSON responses
- Optimized system and user prompts
- Clean CLI interface
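Structured JSON responses are easiest to keep robust when the system prompt pins down a schema and the client tolerates code-fenced replies, since models often wrap JSON in Markdown fences. A sketch (the schema and fence-stripping are illustrative assumptions, not the project's actual format):

```python
import json

# Hypothetical system prompt pinning the model to a fixed JSON schema.
SYSTEM_PROMPT = (
    "Reply ONLY with a JSON object of the form "
    '{"answer": str, "confidence": float}.'
)

def parse_reply(raw: str) -> dict:
    """Parse the model's reply, tolerating stray ```json fences."""
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(cleaned)
```

A production version would also validate the parsed object against the expected schema and retry on malformed output.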
This project focuses on:
- Understanding how LLM APIs work
- Managing context windows and tokens
- Secure API key handling
- Designing scalable AI-powered applications
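Managing the context window usually means trimming old messages to fit a token budget before each request. A sketch using a crude characters-per-token heuristic (a real implementation would use the provider's tokenizer instead):

```python
def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the most recent messages whose combined size fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = rough_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Walking the history newest-first ensures the most recent turns survive when the budget is tight.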
Possible future extensions include:
- Provider switching (OpenAI ↔ Gemini ↔ Anthropic)
- Conversation memory
- Streaming responses
- Logging & analytics
- Web interface
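Provider switching is commonly done behind a small interface so the CLI never imports a vendor SDK directly. A hypothetical sketch, where `EchoProvider` stands in for the real OpenAI/Gemini wrappers:

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """Common interface so the CLI never talks to a vendor SDK directly."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(ChatProvider):
    # Stand-in for hypothetical OpenAIProvider / GeminiProvider classes,
    # which would wrap the respective SDK calls behind complete().
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def make_provider(name: str) -> ChatProvider:
    """Look up a provider by name, e.g. from an environment variable."""
    providers = {"echo": EchoProvider}
    return providers[name]()
```

Adding Anthropic later then means one new subclass and one new registry entry, with no changes to the CLI loop.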