Intelligent knowledge management system for teams. Built with Go, Claude Sonnet 4.5, PostgreSQL+pgvector, Redis, and Slack.
✨ Key Differentiator: Unlike traditional chatbots, Knowledge Agent intelligently decides when to search, save, or just respond - no commands needed.
- 🎭 Personalizable: Give your agent a custom name (Anton, Ghost, Cortex, etc.) via config
- 🧠 Auto-learning: Analyzes conversations and saves valuable information without explicit commands
- 🔍 Semantic Search: Find past conversations using natural language (pgvector-powered)
- 🖼️ Image Analysis: Understands technical diagrams, error screenshots, architecture diagrams
- 🌐 URL Fetching: Analyzes documentation and web content automatically
- 🌍 Multilingual: Responds in Spanish, English, or any language the user writes in
- 📅 Temporal Context: Automatically adds dates to memories ("esta semana" → actual date)
- 🔐 Permission Control: Restrict who can save to memory (by Slack user or service)
- 🔑 A2A Authentication: API key-based auth for external agents
- 🛡️ Two-tier Auth: Internal (Slack Bridge) + External (A2A) authentication
- 📊 LLM Observability: Track costs, tokens, and performance with Langfuse integration
- 🔌 MCP Support: Extend with Model Context Protocol servers (filesystem, GitHub, etc.)
- 🐳 Production-Ready: Docker Compose, Kubernetes/Helm support, auto-migrations
- 🧹 Response Cleaner: Automatically removes internal narration from responses using Claude Haiku
- 📦 Context Summarizer: Compresses long conversation threads to prevent context overflow
- ⏳ Async Sub-Agents: Launch long-running agent tasks (5-15 min) without blocking
# 1. Clone repository
git clone https://github.com/freepik-company/knowledge-agent.git
cd knowledge-agent
# 2. Configure
cp config-example.yaml config.yaml
# Edit config.yaml:
# - Add your API keys (Anthropic, Slack, etc.)
# - Personalize agent_name (optional)
# 3. Start full stack (Postgres, Redis, Ollama, Agent)
make docker-stack
# 4. Pull embedding model (first time only)
docker exec knowledge-agent-ollama ollama pull nomic-embed-text
# 5. Agent is ready!
# Access via Slack or API at http://localhost:8081

# 1. Start infrastructure only
make docker-up
# 2. Configure (copy and edit config)
cp config-example.yaml config.yaml
# Edit config.yaml with your API keys
# 3. Run agent locally
make dev
# Socket Mode (no ngrok needed) is default
# For Webhook Mode, see docs/CONFIGURATION.md

Prerequisites:
- Go 1.24+
- Docker & Docker Compose
- Slack workspace with bot configured (Setup Guide)
@bot how do we deploy? # Ask questions
@bot remember deployments are on Tuesdays # Save information
[upload diagram] @bot this is our arch # Analyze images
@bot check this doc https://... # Fetch URLs
knowledge-agent
│
┌─────────────────────┴─────────────────────┐
│ │
Port 8080 Port 8081
(Slack Bridge) (Agent Server)
│ │
Slack Events /agent/run (auth, ADK)
Socket/Webhook /agent/run_sse (SSE, ADK)
/a2a/invoke (auth)
/.well-known/agent-card.json
/health, /metrics
│ │
└─────────────────────┬─────────────────────┘
│
ADK LLM Agent
(Claude + SubAgents)
│
┌──────────┴──────────┐
│ │
PostgreSQL+pgvector Redis
(memories) (sessions)
Unified Server Design:
- Port 8080: Slack Bridge (events, webhooks)
- Port 8081: Agent Server with ADK REST handler, authentication, rate limiting, and A2A protocol
make dev # Run both agent and slack bridge locally
make dev-agent # Run only agent (no Slack)
make dev-slack # Run only Slack bridge
make test # Run tests
make build # Build binaries

make docker-stack # Start full stack (recommended)
make docker-up # Start infrastructure only
make docker-down # Stop infrastructure
make docker-rebuild # Rebuild and restart agent
make docker-health # Check service health

make db-shell # PostgreSQL shell
make redis-shell # Redis shell

Give your agent a custom identity:
# config.yaml
agent_name: Anton # Your team's unique name for the agent

Examples: Anton, Ghost, Cortex, Sage, Echo, Lore
The agent will introduce itself with this name and your team can build a unique identity around it.
Socket Mode (development):
SLACK_MODE=socket
SLACK_APP_TOKEN=xapp-...
Webhook Mode (production):
SLACK_MODE=webhook
SLACK_SIGNING_SECRET=...
See deployments/.env.example for all environment options, or config-example.yaml for YAML configuration.
Track LLM costs, performance, and usage with Langfuse:
# config.yaml
langfuse:
enabled: true
public_key: ${LANGFUSE_PUBLIC_KEY}
secret_key: ${LANGFUSE_SECRET_KEY}
host: https://cloud.langfuse.com
input_cost_per_1m: 3.0 # Claude Sonnet 4.5 pricing
output_cost_per_1m: 15.0

What you get:
- ✅ Token usage and cost tracking per query
- ✅ LLM generations with full prompt/response
- ✅ Tool call tracing (search_memory, save_to_memory, fetch_url)
- ✅ Per-user cost analytics
- ✅ Performance monitoring and debugging
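As a sketch of how per-query cost tracking follows from the config above, cost is just token counts times the per-million-token rates (input_cost_per_1m / output_cost_per_1m). The costUSD helper is illustrative, not the Langfuse SDK API.

```go
package main

import "fmt"

// costUSD derives request cost from token counts and per-million-token
// rates, matching the input_cost_per_1m / output_cost_per_1m config keys.
func costUSD(inputTokens, outputTokens int, inPer1M, outPer1M float64) float64 {
	return float64(inputTokens)/1e6*inPer1M + float64(outputTokens)/1e6*outPer1M
}

func main() {
	// Claude Sonnet 4.5 rates from the config above: $3/M input, $15/M output.
	fmt.Printf("$%.4f\n", costUSD(2000, 800, 3.0, 15.0)) // → $0.0180
}
```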
SDK: Uses github.com/git-hulk/langfuse-go (community-maintained, feature-complete)
See docs/OPERATIONS.md for complete guide.
The Knowledge Agent implements a two-tier authentication model:
A shared secret token secures communication between the Slack Bridge and agent services.
# Generate token
INTERNAL_AUTH_TOKEN=$(openssl rand -hex 32)
# Set in both services
export INTERNAL_AUTH_TOKEN=<your-token>

API key authentication for direct API access from external services.
# Knowledge Agent .env
# Format: {"secret_key":{"caller_id":"name","role":"write|read"}}
API_KEYS='{"ka_secret_abc123":{"caller_id":"root-agent","role":"write"},"ka_secret_def456":{"caller_id":"external-service","role":"read"}}'
# Legacy format (assumes role="write"):
# API_KEYS='{"ka_secret_abc123":"root-agent","ka_secret_def456":"external-service"}'

Requests authenticated by an upstream API Gateway can pass a JWT token. Email and groups are extracted for permission checks.
curl -X POST http://localhost:8081/agent/run \
-H "Content-Type: application/json" \
-H "Authorization: Bearer eyJhbGciOiJSUzI1NiIs..." \
-d '{"appName":"knowledge-agent","userId":"test","newMessage":{"role":"user","parts":[{"text":"How do we deploy?"}]}}'

Authentication Modes:
- Production (recommended): Set both INTERNAL_AUTH_TOKEN and API_KEYS
- Development (open access): Leave both empty
Roles:
- write: Full access (can use all tools including save_to_memory)
- read: Read-only (cannot use save_to_memory)
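A sketch of how role-based tool gating might look, using the tool names and role semantics from this README; the filtering function itself is illustrative, not the agent's actual implementation.

```go
package main

import "fmt"

// allowedTools filters the agent's tool set by caller role: "read"
// callers lose save_to_memory, "write" callers keep everything.
func allowedTools(role string) []string {
	all := []string{"search_memory", "save_to_memory", "fetch_url"}
	if role == "write" {
		return all
	}
	// Read-only callers: strip the mutating tool.
	out := make([]string, 0, len(all))
	for _, t := range all {
		if t != "save_to_memory" {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	fmt.Println(allowedTools("read"))  // save_to_memory filtered out
	fmt.Println(allowedTools("write")) // full tool set
}
```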
Example A2A Usage:
# External service accessing agent (use the API key secret)
curl -X POST http://localhost:8081/agent/run \
-H "Content-Type: application/json" \
-H "X-API-Key: ka_secret_abc123" \
-d '{"appName":"knowledge-agent","userId":"external","newMessage":{"role":"user","parts":[{"text":"How do we deploy?"}]}}'

See docs/A2A_TOOLS.md for complete integration guide and docs/SECURITY_GUIDE.md for detailed security configuration.
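The two API_KEYS formats above (current object form and legacy string form) can be parsed with a small helper; this sketch assumes the documented default of role="write" for legacy entries and is not the agent's actual parser.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// KeyInfo mirrors one API_KEYS value: {"caller_id":"name","role":"write|read"}.
type KeyInfo struct {
	CallerID string `json:"caller_id"`
	Role     string `json:"role"`
}

// parseAPIKeys accepts both formats: the current object form, and the
// legacy form where the value is just the caller_id and role defaults
// to "write".
func parseAPIKeys(raw string) (map[string]KeyInfo, error) {
	// Try the current object form first.
	var current map[string]KeyInfo
	if err := json.Unmarshal([]byte(raw), &current); err == nil {
		return current, nil
	}
	// Fall back to the legacy string form.
	var legacy map[string]string
	if err := json.Unmarshal([]byte(raw), &legacy); err != nil {
		return nil, err
	}
	keys := make(map[string]KeyInfo, len(legacy))
	for k, caller := range legacy {
		keys[k] = KeyInfo{CallerID: caller, Role: "write"}
	}
	return keys, nil
}

func main() {
	raw := `{"ka_secret_abc123":{"caller_id":"root-agent","role":"write"},` +
		`"ka_secret_def456":{"caller_id":"external-service","role":"read"}}`
	keys, err := parseAPIKeys(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(keys["ka_secret_def456"].Role) // → read
}
```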
Knowledge Agent exposes the standard A2A (Agent-to-Agent) protocol on the unified server (port 8081):
# config.yaml
a2a:
enabled: true
agent_url: http://knowledge-agent:8081 # For agent card discovery

Endpoints (all on port 8081):
- ✅ POST /a2a/invoke - A2A protocol invocation (authenticated)
- ✅ GET /.well-known/agent-card.json - Agent discovery (public)
- ✅ Compatible with other ADK agents (metrics-agent, logs-agent, etc.)
Sub-agents - Call other ADK agents as sub-agents:
a2a:
enabled: true
sub_agents:
- name: metrics_agent
description: "Query Prometheus metrics"
endpoint: http://metrics-agent:9000

See docs/A2A_TOOLS.md for complete A2A integration guide.
Extend the agent with MCP servers for filesystem, GitHub, databases, and more.
# config.yaml
mcp:
enabled: true
servers:
- name: "filesystem"
transport_type: "command"
command:
path: "npx"
args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]

Automatic npm Package Installation:
- Docker: Packages auto-detected from config.yaml and installed at startup
- Local: npm install -g @modelcontextprotocol/server-filesystem
See docs/MCP_INTEGRATION.md for complete guide and examples.
- 📖 Usage Guide - End-user guide for interacting with the agent
- ⚙️ Configuration - Complete configuration reference
- 🚀 Deployment - Docker, Kubernetes, and production setup
- 🔌 MCP Integration - Extend with external data sources
- 🔐 Security - Authentication and permissions
- 🤖 A2A Integration - Agent-to-Agent API integration
- 📊 Observability - Langfuse integration and monitoring
- 📈 Prometheus Metrics - Prometheus metrics and ServiceMonitors
- 🛠️ Claude Code Guide - Development with Claude Code
- 🗄️ Production PostgreSQL - pgvector setup for cloud providers
- 📝 Implementation Summary - Architecture overview
We welcome contributions! Please see our Contributing Guide (coming soon).
Key areas for contribution:
- 🐛 Bug fixes and improvements
- 📚 Documentation enhancements
- 🔌 New MCP server integrations
- 🌍 Translations
- ✨ Feature requests (open an issue first)
- 🐛 Report a Bug
- 💡 Request a Feature
- 💬 Discussions
- 📧 Email: sre@freepik.com
MIT License - Feel free to use in your projects!
Built with:
- Google ADK - Agent Development Kit
- ADK Utils Go - ADK Utils Library in Go made by @achetronic
- Anthropic Claude - LLM provider
- pgvector - Vector similarity search
- Model Context Protocol - Tool integration standard
Made with ❤️ by the Freepik Technology Team