Grasper is a production-ready AI data analysis system built with PydanticAI, Google Gemini 2.0 Flash, and functional programming principles. It provides a complete workflow from questions to executable code with memory management, error recovery, and real-time monitoring.
- Questions → Tasks: Breaks down complex questions into executable tasks
- Code Generation: Uses Google Gemini 2.0 Flash via PydanticAI to generate Python code
- Execution: Safely executes code using subprocess with timeout and error handling
- Error Recovery: Automatically detects and fixes code errors in a loop until successful
- Memory Management: Maintains workflow state and results throughout the process
- Real-time Monitoring: Logfire integration for observability and debugging
- Concurrent Processing: Parallel task execution with throttling and rate limiting
- Type Safety: Full type hints with Pydantic models
- Error Resilience: Comprehensive error handling and recovery mechanisms
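The execute-and-recover loop described above can be sketched with the standard library alone. This is a minimal illustration, not Grasper's actual implementation: `run_code` and the `fix_code` callback (which in the real system would be an LLM call that patches the failing script) are hypothetical names.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_code(code: str, timeout: float = 30.0) -> tuple[bool, str]:
    """Write generated code to a temp file and run it in a subprocess."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        ok = result.returncode == 0
        return ok, result.stdout if ok else result.stderr
    except subprocess.TimeoutExpired:
        return False, "execution timed out"
    finally:
        Path(path).unlink(missing_ok=True)

def execute_with_recovery(code, fix_code, max_attempts: int = 3) -> str:
    """Run the code; on failure, ask fix_code to repair it and retry."""
    error = ""
    for _ in range(max_attempts):
        ok, output = run_code(code)
        if ok:
            return output
        error = output
        code = fix_code(code, error)  # hypothetical LLM-backed repair step
    raise RuntimeError(f"still failing after {max_attempts} attempts: {error}")
```

The subprocess boundary is what makes the timeout and error capture in the feature list possible: a crash or hang in generated code never takes down the workflow process.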
```bash
# Install dependencies with uv
uv sync

# Create environment file from template
cp .env.example .env
```

Edit the `.env` file:
```bash
# Required: Google API Key for Gemini
GOOGLE_API_KEY=your-google-api-key-here

# Optional: Logfire for monitoring
LOGFIRE_TOKEN=your-logfire-token

# Optional: Environment settings
ENVIRONMENT=development
DEBUG=true
```

Get a Google API Key:
- Go to Google AI Studio
- Create a new API key
- Add it to your `.env` file
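For illustration, loading these settings at startup might look like the stdlib-only sketch below. The function name `load_settings` is hypothetical; the real project may use `pydantic-settings` or similar instead.

```python
import os

def load_settings() -> dict:
    """Read required and optional Grasper settings from the environment."""
    api_key = os.environ.get("GOOGLE_API_KEY")
    if not api_key:
        # Fail fast with a pointer to the setup instructions
        raise RuntimeError("GOOGLE_API_KEY is required; see .env.example")
    return {
        "google_api_key": api_key,
        "logfire_token": os.environ.get("LOGFIRE_TOKEN"),  # optional
        "environment": os.environ.get("ENVIRONMENT", "development"),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }
```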
```bash
# Development server with hot reload
uv run uvicorn api.complete_api:app --host 0.0.0.0 --port 8000 --reload
```

Add your data analyst input to `data_analyst_input.txt`, then:
```bash
curl "https://app.example.com/api/" \
  -F "data_analyst_input.txt=@data_analyst_input.txt" \
  -F "image.png=@image.png" \
  -F "data.csv=@data.csv"
```
Data Analyst Input → Task Breakdown → Code Generation → Execution → Error Recovery → Results
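Since the project is built on functional programming principles, the stages above can be pictured as left-to-right function composition. This is a conceptual sketch with hypothetical stage names, not the engine's real API:

```python
from functools import reduce
from typing import Callable

def pipeline(*steps: Callable) -> Callable:
    """Compose workflow stages so each stage's output feeds the next."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

# Hypothetical stand-ins for the stages in the diagram above
breakdown = lambda question: [f"task: {question}"]   # Task Breakdown
generate = lambda tasks: [t.upper() for t in tasks]  # Code Generation

workflow = pipeline(breakdown, generate)
```

Composing small pure stages this way keeps each step independently testable, which is what makes the error-recovery stage easy to slot into the chain.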
Core Components:
- `agents/`: AI agents using PydanticAI with Google Gemini
- `core/`: Workflow engine, functional utilities, type definitions
- `api/`: FastAPI endpoints with real-time monitoring
- `monitoring/`: Logfire integration and observability
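The concurrent processing with throttling mentioned in the feature list is commonly done with an `asyncio.Semaphore`. A minimal sketch (the `run_tasks` helper is an assumption for illustration, not the workflow engine's actual interface):

```python
import asyncio

async def run_tasks(tasks, worker, limit: int = 5):
    """Run worker(task) for every task, at most `limit` concurrently."""
    sem = asyncio.Semaphore(limit)

    async def guarded(task):
        async with sem:  # throttle: only `limit` workers run at once
            return await worker(task)

    # gather preserves input order in its results
    return await asyncio.gather(*(guarded(t) for t in tasks))
```

Bounding concurrency this way is also how rate limiting against the Gemini API quota can be enforced: the semaphore caps in-flight model calls regardless of how many tasks the breakdown stage produces.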
- Logfire Integration: Real-time logs, traces, and metrics
- Performance Metrics: Request rates, response times, error rates
- AI Analytics: Token usage, model performance, success rates
Access metrics at: http://localhost:8000/metrics
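In production, Logfire collects these metrics automatically. Purely to illustrate the kind of bookkeeping behind request rates and response times, here is a stdlib-only sketch of a timing decorator; it is not part of Grasper's monitoring code:

```python
import time
from functools import wraps

def timed(metrics: dict):
    """Decorator that records call counts and cumulative latency per function."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                stats = metrics.setdefault(fn.__name__, {"calls": 0, "seconds": 0.0})
                stats["calls"] += 1
                stats["seconds"] += time.perf_counter() - start
        return wrapper
    return decorator
```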
Built with ❤️ using PydanticAI