An intelligent agentic system that takes a technology stack description and generates comprehensive IPython notebook demos using ADK orchestration and MCP servers.
- Intelligent Parsing: Uses an LLM to parse and categorize technologies from natural language descriptions
- Multi-Source Discovery: Discovers documentation from Context7, DeepWiki, and Tavily MCP servers in parallel
- LLM-as-a-Judge: Evaluates documentation quality and determines whether more retrieval is needed
- Notebook Generation: Creates comprehensive IPython notebooks with explanations and working code
- ADK Orchestration: Uses the Anthropic ADK framework for workflow management
- Rich CLI: Polished terminal interface with interactive feedback
- Flexible LLM Support: Works with both Anthropic Claude and Google Gemini
```
User Query -> Parser Agent -> User Approval -> Discovery Loop -> Notebook Generator
                                                   ^   |
                                                   |   v
                                                LLM Judge
```
- Technology Parsing: Breaks down the user's tech stack description into individual technologies
- User Approval: Presents parsed technologies for user review and feedback
- Parallel Discovery: Queries multiple MCP servers simultaneously for each technology:
  - Context7: Library documentation and API references
  - DeepWiki: GitHub repository documentation and wikis
  - Tavily: Web search for additional context (optional)
- LLM-as-a-Judge: Evaluates if discovered data is sufficient (with retry loop)
- Notebook Generation: Creates integrated IPython notebook demo
- User Feedback: Collects satisfaction feedback
- Python 3.11+
- Cursor with MCP servers configured
- API keys for your chosen LLM provider
- Clone the repository:

```bash
cd /home/csaba/repos/AIML/WeaveHacks2/wandb_test
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Configure environment:

```bash
cp .env.example .env
# Edit .env with your API keys
```

- Ensure MCP servers are configured in `~/.cursor/mcp.json`:

```json
{
"mcpServers": {
"context7": {
"command": "npx",
"args": ["-y", "@upstash/context7-mcp"]
},
"deepwiki": {
"url": "https://mcp.deepwiki.com/sse"
}
}
}
```

Example `.env` settings:

```
# LLM Provider ("anthropic" or "google")
LLM_PROVIDER=anthropic
# API Keys
ANTHROPIC_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here
# Model Names
ANTHROPIC_MODEL=claude-sonnet-4-20250514
GOOGLE_MODEL=gemini-2.0-flash-exp
# MCP Server Configuration
CONTEXT7_ENABLED=true
DEEPWIKI_ENABLED=true
TAVILY_ENABLED=false
TAVILY_API_KEY=your_tavily_key_here
# Workflow Configuration
MAX_RETRIEVAL_ITERATIONS=3
PARALLEL_DISCOVERY=true
```

To start the CLI, run:

```bash
python run.py
```

The CLI will guide you through:
- Describing your technology stack
- Reviewing parsed technologies
- Approving or providing feedback
- Watching the discovery and generation process
- Receiving your generated notebook

Example tech stack descriptions:

- "I want to build a web app with React, FastAPI, and PostgreSQL"
- "Create a machine learning pipeline with PyTorch, Pandas, and MLflow"
- "A microservices architecture with Docker, Kubernetes, and MongoDB"
- "Build a data dashboard using Streamlit, Plotly, and SQLite"
Generated notebooks are saved to the `output/` directory with timestamps:

```
output/tech_demo_20241011_143022.ipynb
```

Open with Jupyter or VS Code:

```bash
jupyter notebook output/tech_demo_*.ipynb
# or
code output/tech_demo_*.ipynb
```

Project structure:

```
wandb_test/
├── src/
│   ├── agents/                   # Agent implementations
│   │   ├── parser.py             # Tech stack parser
│   │   ├── discovery.py          # Documentation discovery
│   │   ├── judge.py              # LLM-as-a-Judge evaluator
│   │   └── notebook_generator.py # Notebook creator
│   ├── models/                   # Data models
│   │   └── state.py              # Shared agent state
│   ├── services/                 # Service layer
│   │   ├── llm_service.py        # LLM integration
│   │   └── mcp_service.py        # MCP server integration
│   ├── utils/                    # Utilities
│   │   ├── config.py             # Configuration management
│   │   └── cli.py                # CLI utilities
│   ├── workflow.py               # ADK workflow orchestration
│   └── main.py                   # CLI entry point
├── output/                       # Generated notebooks
├── run.py                        # Convenience runner
├── requirements.txt              # Dependencies
├── .env.example                  # Environment template
└── README.md                     # This file
```
- AgentState: Shared state across workflow stages
- Technology: Individual technology representation
- TechnologyList: Structured output from parser
- DiscoveryResult: Documentation discovery results
- JudgementResult: LLM judge evaluation
- NotebookMetadata: Generated notebook metadata
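For illustration, a minimal sketch of how a few of these models could be declared with Pydantic. The field names below are assumptions chosen for readability, not the exact schema in `src/models/state.py`:

```python
from typing import Optional
from pydantic import BaseModel, Field


class Technology(BaseModel):
    """One technology parsed from the user's description."""
    name: str
    category: Optional[str] = None   # e.g. "frontend", "database" (hypothetical field)
    repo_name: Optional[str] = None  # e.g. "facebook/react", used for DeepWiki lookups


class TechnologyList(BaseModel):
    """Structured output returned by the parser agent."""
    technologies: list[Technology] = Field(default_factory=list)


class JudgementResult(BaseModel):
    """LLM-as-a-Judge verdict on the discovered documentation."""
    is_sufficient: bool
    reasoning: str
    missing_topics: list[str] = Field(default_factory=list)
```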
All agents follow the ReAct pattern where applicable:
- Parser Agent: Analyzes natural language → structured technologies
- Discovery Agent: Parallel MCP queries for documentation
- Judge Agent: Evaluates data quality and sufficiency
- Notebook Generator: Synthesizes notebook from discoveries
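As a sketch of how the judge step can lean on structured output (the function name and prompt wording here are illustrative, not the contents of `judge.py`; `JudgementResult` is the sketched model above):

```python
def judge_discoveries(llm, technology: str, discoveries: dict) -> "JudgementResult":
    """Ask the LLM whether the gathered docs are enough to write a demo notebook."""
    prompt = (
        f"Technology: {technology}\n"
        f"Discovered documentation:\n{discoveries}\n\n"
        "Is this sufficient to generate a working demo notebook? "
        "List any missing topics."
    )
    # generate_structured validates the model's JSON against the Pydantic schema
    return llm.generate_structured(
        prompt=prompt,
        response_model=JudgementResult,
        system_prompt="You are a strict documentation-quality judge.",
    )
```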
- LLMService: Unified interface for Anthropic/Google LLMs
  - Text generation
  - Structured output (JSON mode)
  - Context-aware generation
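A rough sketch of what a unified wrapper over the two providers could look like, using the official `anthropic` and `google-genai` SDKs; the class shape in `llm_service.py` may well differ:

```python
import os

import anthropic
from google import genai


class LLMService:
    """Minimal provider-switching text generation (sketch, not the real implementation)."""

    def __init__(self) -> None:
        self.provider = os.getenv("LLM_PROVIDER", "anthropic")
        if self.provider == "anthropic":
            self.client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
            self.model = os.getenv("ANTHROPIC_MODEL", "claude-sonnet-4-20250514")
        else:
            self.client = genai.Client()         # reads GOOGLE_API_KEY
            self.model = os.getenv("GOOGLE_MODEL", "gemini-2.0-flash-exp")

    def generate(self, prompt: str, system_prompt: str = "") -> str:
        if self.provider == "anthropic":
            msg = self.client.messages.create(
                model=self.model,
                max_tokens=4096,
                system=system_prompt,
                messages=[{"role": "user", "content": prompt}],
            )
            return msg.content[0].text
        # Gemini path; system prompt handling omitted for brevity in this sketch
        response = self.client.models.generate_content(
            model=self.model,
            contents=prompt,
        )
        return response.text
```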
- MCPService: Interface to MCP servers
  - Context7: Library documentation
  - DeepWiki: GitHub repository docs
  - Tavily: Web search
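The parallel fan-out can be expressed with `asyncio.gather`; the per-server query functions below are placeholders standing in for the real MCP calls:

```python
import asyncio


async def query_context7(technology: str, topic: str | None) -> str:
    ...  # placeholder: call the Context7 MCP server here


async def query_deepwiki(repo_name: str) -> str:
    ...  # placeholder: call the DeepWiki MCP server here


async def discover_all(technology: str, topic: str | None = None,
                       repo_name: str | None = None) -> dict:
    """Query the enabled MCP servers concurrently and keep whatever succeeds."""
    tasks = {
        "context7": query_context7(technology, topic),
        "deepwiki": query_deepwiki(repo_name or technology),
    }
    results = await asyncio.gather(*tasks.values(), return_exceptions=True)
    # Drop any server that raised (disabled or unreachable) instead of failing the step
    return {name: r for name, r in zip(tasks, results) if not isinstance(r, Exception)}
```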
The ADK workflow orchestrates all agents with:
- Sequential execution with state passing
- Retry loops for discovery
- User interaction points
- Error handling
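Stripped of ADK specifics, the control flow amounts to sequential stages over shared state with a judge-gated retry loop; the sketch below uses plain Python callables rather than the actual ADK primitives:

```python
from dataclasses import dataclass
from typing import Any, Awaitable, Callable


@dataclass
class WorkflowSketch:
    """Illustrative orchestration: parse -> approve -> (discover/judge loop) -> generate."""
    parse: Callable[[str], Any]
    approve: Callable[[Any], Any]
    discover: Callable[[Any], Awaitable[dict]]
    judge: Callable[[dict], bool]
    generate: Callable[[dict], str]
    max_iterations: int = 3  # mirrors MAX_RETRIEVAL_ITERATIONS

    async def run(self, user_query: str) -> str:
        technologies = self.approve(self.parse(user_query))
        discoveries: dict = {}
        for _ in range(self.max_iterations):
            discoveries = await self.discover(technologies)
            if self.judge(discoveries):      # LLM-as-a-Judge says the data is sufficient
                break
        return self.generate(discoveries)    # returns the path to the generated notebook
```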
Uses Pydantic models for type-safe, validated outputs from LLMs:
```python
result = llm.generate_structured(
    prompt=prompt,
    response_model=TechnologyList,
    system_prompt=system_prompt,
)
```

Discovers from multiple MCP servers simultaneously:
```python
discoveries = await mcp_service.discover_all(
    technology="React",
    topic="hooks",
    repo_name="facebook/react",
)
```

Automatically retries discovery if data is insufficient:
```python
iterations = 0
sufficient_data = False
while not sufficient_data and iterations < max_iterations:
    discoveries = discover()              # re-query the MCP servers
    sufficient_data = judge(discoveries)  # LLM-as-a-Judge verdict
    iterations += 1
```

Current limitations:

- MCP integration is structured but needs actual MCP SDK calls
- User feedback in approval step doesn't trigger re-parsing
- Limited error recovery in generation
- No notebook execution validation
Planned enhancements:

- Full MCP SDK integration
- Iterative refinement based on user feedback
- Notebook execution and testing
- Multi-turn conversation for clarification
- Support for more MCP servers
- Template-based notebook generation
- Version control integration
- Export to multiple formats (HTML, PDF)
Ensure your `.env` file has the correct API key:

```
ANTHROPIC_API_KEY=sk-ant-...
```

If MCP discovery fails:

- Check MCP servers are configured in `~/.cursor/mcp.json`
- Verify MCP servers are enabled in `.env`
- Check network connectivity

If notebook generation fails:

- Try with a simpler tech stack
- Check LLM token limits
- Review error messages in console
This is an MVP implementation. Contributions welcome for:
- Full MCP SDK integration
- Additional MCP servers
- Enhanced notebook templates
- Better error handling
- Test coverage
MIT License - see LICENSE file for details
- Built with Anthropic ADK
- Uses Context7 MCP
- Integrates DeepWiki MCP
- Rich CLI via Rich