A lightweight multi-agent system in Python that demonstrates collaboration among agents, each powered by a different Large Language Model (LLM) client. This project showcases how multiple AI agents can interact, delegate tasks, and generate insights through inter-agent communication.
- 🧩 Modular agent architecture
- 🔁 Multi-agent communication and collaboration
- 🤖 Integration with multiple LLM clients (e.g., OpenAI, Claude, Gemini)
- 📦 Easily extendable for additional agents or tasks
```
multi-agent-app/
├── agent.py        # Defines Agent class and its behavior
├── llmclients.py   # Abstractions for connecting to various LLMs
├── main.py         # Orchestrates agent creation and execution
└── README.md
```
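As a rough illustration, `agent.py` might define something along these lines; the class name, fields, and the `respond`/`complete` method names are assumptions, not the project's actual code:

```python
# Hypothetical sketch of agent.py -- names and signatures are assumptions.
from dataclasses import dataclass, field


@dataclass
class Agent:
    """An agent backed by a single LLM client."""
    name: str
    llm_client: object            # any object exposing complete(prompt) -> str
    role: str = "generalist"      # short role description injected into prompts
    history: list = field(default_factory=list)

    def respond(self, task: str, context: str = "") -> str:
        """Build a prompt from the task (and any peer context), then query the LLM."""
        prompt = f"You are {self.name}, a {self.role}.\n"
        if context:
            prompt += f"Other agents said:\n{context}\n"
        prompt += f"Task: {task}"
        answer = self.llm_client.complete(prompt)
        self.history.append((task, answer))
        return answer
```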
Each agent is initialized with a specific LLM client (e.g., GPT-4, Claude, or Gemini). Agents can be given individual tasks or collaborate on a shared problem by exchanging responses and building on each other’s outputs. A typical workflow (a code sketch follows the steps below):
- Create multiple agents using different LLM APIs.
- Assign a shared goal or problem.
- Let agents respond, reference each other’s answers, and refine outputs in a loop.
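A minimal sketch of that refinement loop, reusing the hypothetical `respond` method from the agent sketch above (the real orchestration in `main.py` may differ):

```python
# Hypothetical orchestration loop -- a sketch, not the project's actual main.py.
def collaborate(agents, goal, rounds=2):
    """Each round, every agent answers the goal with its peers' latest answers as context."""
    answers = {}  # agent name -> latest answer
    for _ in range(rounds):
        for agent in agents:
            context = "\n".join(
                f"{name}: {text}"
                for name, text in answers.items()
                if name != agent.name
            )
            answers[agent.name] = agent.respond(goal, context=context)
    return answers
```

Each round, every agent sees its peers' latest answers as context, so the outputs can converge or build on one another.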
- Clone the repository
```bash
git clone https://github.com/Tripathiaman2511/multi-agent-app.git
cd multi-agent-app
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Add API Keys

```
OPENAI_API_KEY=your_openai_key
CLAUDE_API_KEY=your_claude_key
GEMINI_API_KEY=your_gemini_key
```

- Run the application

```bash
python main.py
```
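How the keys are picked up depends on the implementation; a minimal sketch, assuming they are read from environment variables (the variable names match the ones above, but the startup check itself is an assumption):

```python
# Hypothetical startup check -- assumes the keys are set as environment variables.
import os

REQUIRED_KEYS = ("OPENAI_API_KEY", "CLAUDE_API_KEY", "GEMINI_API_KEY")

missing = [key for key in REQUIRED_KEYS if not os.environ.get(key)]
if missing:
    raise SystemExit(f"Missing API keys: {', '.join(missing)}")
```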
You can easily add more agents or LLMs by modifying:
- llmclients.py: Add new client wrappers (see the sketch below)
- main.py: Instantiate additional agents with different roles
- agent.py: Add memory, tool use, or agent capabilities
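As an illustration of the first point, a new provider could be wrapped so it fits the interface the agents already expect; the `LLMClient` base class, the `complete` method, and `NewProviderClient` below are all assumptions about that interface rather than the project's actual code:

```python
# Hypothetical wrapper pattern for llmclients.py -- interface names are assumptions.
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Common interface that every provider wrapper is assumed to implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the provider and return the generated text."""


class NewProviderClient(LLMClient):
    """Skeleton for an additional provider; replace the body with real SDK calls."""

    def __init__(self, api_key: str, model: str = "example-model"):
        self.api_key = api_key
        self.model = model

    def complete(self, prompt: str) -> str:
        # Call the provider's SDK or HTTP API here and return its text response.
        raise NotImplementedError("Wire this up to the provider's API.")
```

Keeping every wrapper behind one interface means new agents can switch providers without any changes to the agent or orchestration code.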
- 🧠 AI task orchestration and brainstorming
- 📊 Collaborative summarization or report generation
- 🛠️ Auto-documentation or code review bots
- 🕵️ Competitive analysis from multiple LLMs