A minimalistic agentic framework for building AI agents with tool-calling capabilities. Why another agentic framework? Other frameworks felt too complex and carried too many dependencies, so I wanted one that is easy to understand, easy to use, and easy to extend. The main goal here isn't production scenarios (yet) but understanding and prototyping agentic systems. Hence the name "AgentWerkstatt", a play on "Agent" and the German word "Werkstatt" (workshop).
AgentWerkstatt is a lightweight, extensible framework for creating AI agents. It is currently powered by Claude (Anthropic), with support for more LLMs planned. Its modular architecture with pluggable LLM providers and tools makes it easy to build conversational agents with access to external capabilities such as web search.
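To illustrate the pluggable-provider idea, here is a hypothetical sketch of what an LLM backend interface might look like. The names `LLMProvider`, `complete`, and `run_agent` are assumptions for illustration, not the framework's actual API:

```python
from typing import Protocol


class LLMProvider(Protocol):
    """Hypothetical interface a pluggable LLM backend might satisfy."""

    def complete(self, prompt: str) -> str:
        """Return the model's response for a prompt."""
        ...


class EchoProvider:
    """Toy provider used only to show how a backend plugs in."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def run_agent(provider: LLMProvider, prompt: str) -> str:
    # The agent core depends only on the protocol, so any conforming
    # backend (Claude, a local model, ...) can be swapped in.
    return provider.complete(prompt)
```

Because the agent core only sees the protocol, adding a new LLM means writing one small adapter class.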
- **Modular LLM Support** - Built on an extensible LLM abstraction (currently supports Claude)
- **Tool System** - Pluggable tool architecture with automatic tool discovery
- **Conversation Management** - Built-in conversation history and context management
- **Persistent Memory** - Optional mem0 integration for long-term memory and context retention
- **Web Search** - Integrated Tavily API for real-time web information retrieval
- **Observability** - Optional Langfuse integration for comprehensive tracing and analytics
- **CLI Interface** - Ready-to-use command-line interface
- **3rd Party Services** - Docker Compose stack with PostgreSQL, Neo4j, and other services
- **Lightweight** - Minimal dependencies and clean architecture
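To make the pluggable-tool idea concrete, here is a hypothetical sketch of what a tool plugin might look like. `BaseTool`, its attributes, and the registry standing in for automatic discovery are all assumptions, not the framework's real API:

```python
from abc import ABC, abstractmethod


class BaseTool(ABC):
    """Hypothetical base class a discovered tool might subclass."""

    name: str
    description: str

    @abstractmethod
    def execute(self, **kwargs) -> str:
        ...


class WordCountTool(BaseTool):
    """Example tool: counts the words in a piece of text."""

    name = "word_count"
    description = "Counts the words in a piece of text."

    def execute(self, text: str = "") -> str:
        return str(len(text.split()))


# A minimal registry standing in for automatic tool discovery,
# which would normally scan the tools directory for subclasses.
TOOLS = {tool.name: tool for tool in [WordCountTool()]}
```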
- Python 3.10 or higher
- An Anthropic API key for Claude
- (Optional) A Tavily API key for web search
- (Optional) An OpenAI API key for mem0 memory system
1. Clone the repository:

   ```bash
   git clone https://github.com/hanneshapke/agentwerkstatt.git
   cd agentwerkstatt
   ```

2. Install dependencies:

   ```bash
   # Basic installation
   uv sync

   # With optional features
   uv sync --extra tracing   # Langfuse tracing support
   uv sync --extra memory    # mem0 memory support
   uv sync --all-extras      # All optional features
   ```

3. Set up environment variables:

   ```bash
   # Create a .env file
   echo "ANTHROPIC_API_KEY=your_anthropic_api_key_here" >> .env
   echo "TAVILY_API_KEY=your_tavily_api_key_here" >> .env    # Optional, for web search
   echo "OPENAI_API_KEY=your_openai_api_key_here" >> .env    # Optional, for mem0 memory
   ```
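Frameworks like this typically load the `.env` file through a helper such as python-dotenv. Purely to illustrate the file format, here is a minimal stdlib parser; this is a sketch, not how AgentWerkstatt actually loads its environment:

```python
from pathlib import Path


def parse_env(path: str) -> dict[str, str]:
    """Read KEY=VALUE pairs from a .env-style file, skipping blanks and comments."""
    env: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```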
AgentWerkstatt includes a Docker Compose stack with integrated services:
- **mem0** - AI memory management system for persistent context
- **Langfuse** - Observability and tracing platform
- **PostgreSQL** - Database with pgvector for embeddings
- **Neo4j** - Graph database for memory relationships
- **Redis** - Caching and session storage
- **MinIO** - S3-compatible object storage
To start the services:
```bash
# Start all services
docker compose -f third_party/docker-compose.yaml up -d

# Or start specific services
docker compose -f third_party/docker-compose.yaml up -d mem0 neo4j postgres
```

For detailed setup instructions, see:
- MEM0_SETUP.md - Memory system setup
- LANGFUSE_SETUP.md - Observability setup
- LANGFUSE_INTEGRATION.md - Integration guide
- Sign up at console.anthropic.com
- Generate an API key
- Add it to your `.env` file as `ANTHROPIC_API_KEY`
- Sign up at app.tavily.com
- Get your API key (1,000 free searches/month)
- Add it to your `.env` file as `TAVILY_API_KEY`
- Sign up at platform.openai.com
- Generate an API key
- Add it to your `.env` file as `OPENAI_API_KEY`
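A startup check for these keys can be as simple as the following sketch. The required/optional split mirrors the lists above; the function name and structure are ours, not the framework's:

```python
import os

REQUIRED_KEYS = ["ANTHROPIC_API_KEY"]
OPTIONAL_KEYS = ["TAVILY_API_KEY", "OPENAI_API_KEY"]  # web search / mem0 memory


def missing_required_keys() -> list[str]:
    """Return the names of required API keys that are not set."""
    return [key for key in REQUIRED_KEYS if not os.environ.get(key)]
```

Calling this before starting the agent gives a clear error message instead of a failed API call later.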
Run the interactive CLI:
```bash
# Using default configuration (config.yaml)
agentwerkstatt

# Using a custom configuration file
agentwerkstatt --config /path/to/your/config.yaml
```

Example conversation:
```text
AgentWerkstatt
==================================================
Loading config from: config.yaml
I'm an example AgentWerkstatt assistant with web search capabilities!
Ask me to search the web for information.
Commands: 'quit'/'exit' to quit, 'clear' to reset, 'status' to check conversation state.

You: What's the latest news about AI developments?
Agent is thinking...
Agent: I'll search for the latest AI developments for you.
[Search results and AI summary will be displayed here]

You: clear          # Clears conversation history
Conversation history cleared!

You: quit
Goodbye!
```
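The CLI commands above ('quit'/'exit', 'clear', 'status') suggest a simple dispatch loop. As a hypothetical sketch of how such input handling might work (the function and return values are our invention):

```python
def handle_command(user_input: str, history: list[str]) -> str:
    """Map special CLI commands to actions; anything else is a prompt."""
    command = user_input.strip().lower()
    if command in ("quit", "exit"):
        return "exit"
    if command == "clear":
        history.clear()
        return "cleared"
    if command == "status":
        return f"{len(history)} messages in history"
    history.append(user_input)
    return "prompt"
```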
```python
from agentwerkstatt import Agent, AgentConfig

# Initialize with default config
config = AgentConfig.from_yaml("config.yaml")
agent = Agent(config)

# Or customize the configuration
config = AgentConfig(
    llm={"provider": "claude", "model": "claude-sonnet-4-20250514"},
    tools_dir="./tools",
    verbose=True,
)
agent = Agent(config)

# Process a request
response = agent.process_request("Search for recent Python releases")
print(response)

# Clear conversation history
agent.llm.clear_history()
```

The CLI supports the following command-line arguments:
- `--config` - Path to the agent configuration file (default: `config.yaml`)
- `--help` - Show help message and available options
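These flags map directly onto a small argparse parser. As a sketch of how the entry point might define them (the parser construction here is an assumption, not the framework's actual code):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Build a CLI parser matching the documented flags."""
    parser = argparse.ArgumentParser(prog="agentwerkstatt")
    parser.add_argument(
        "--config",
        default="config.yaml",
        help="Path to the agent configuration file",
    )
    # argparse provides --help automatically.
    return parser
```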
Examples:
```bash
# Use default configuration
agentwerkstatt

# Use custom configuration file
agentwerkstatt --config my_custom_config.yaml

# Show help
agentwerkstatt --help
```

For comprehensive documentation, please visit our documentation directory:
- **Complete Documentation** - Main documentation hub
- **Getting Started** - Installation and quick start guide
- **Architecture** - Framework design and components
- **Configuration** - Environment setup and configuration options
- **Development Guide** - Contributing and extending the framework
- **API Reference** - Detailed API documentation
Basic configuration in `config.yaml`:

```yaml
# LLM Model Configuration
llm:
  provider: "claude"
  model: "claude-sonnet-4-20250514"

# Tools Configuration
tools_dir: "./tools"

# Logging Configuration
verbose: true

# Persona Configuration
personas:
  - id: databot
    name: "DataBot"
    description: "A persona for data analysis and visualization."
    file: "personas/databot.md"
default_persona: "databot"
```

Environment Variables:
- `ANTHROPIC_API_KEY` - Required for Claude API access
- `TAVILY_API_KEY` - Optional, for web search functionality
For complete configuration options, see the Configuration Guide.
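The YAML above maps naturally onto a small config object. As a hedged sketch (the real `AgentConfig` fields and behavior may differ), a dataclass with matching defaults might look like:

```python
from dataclasses import dataclass, field


@dataclass
class AgentConfigSketch:
    """Illustrative stand-in for AgentConfig; field names mirror config.yaml."""

    llm: dict = field(
        default_factory=lambda: {
            "provider": "claude",
            "model": "claude-sonnet-4-20250514",
        }
    )
    tools_dir: str = "./tools"
    verbose: bool = False

    @classmethod
    def from_dict(cls, raw: dict) -> "AgentConfigSketch":
        # Keep only keys the config knows about, falling back to defaults.
        known = {k: v for k, v in raw.items() if k in cls.__dataclass_fields__}
        return cls(**known)
```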
Check out our ROADMAP.md to see what's planned for future releases, including:
- **Multi-LLM Support** - OpenAI, Google AI, and local model integration
- **Memory & Persistence** - mem0 integration (✅ COMPLETED)
- **3rd Party Integrations** - Observability tools and database services (✅ COMPLETED)
- **Advanced Tools** - API discovery, file operations, and code execution
- **Agent Intelligence** - Self-reflection, planning, and reasoning capabilities
We welcome feedback and contributions to help shape the project's direction!
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run the quality checks:

   ```bash
   uv run ruff check --fix
   uv run ruff format
   uv run mypy .
   uv run pytest
   ```

5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
The license is still being finalized.
- Anthropic for the Claude API
- Tavily for web search capabilities
- mem0 for AI memory management
- Langfuse for observability and tracing
- The open-source community for inspiration and tools
- Documentation
- Bug Reports
- Discussions
- MEM0 Setup Guide
- Langfuse Integration Guide
AgentWerkstatt - Building intelligent agents, one tool at a time.
