
Knowledge Agent


Intelligent knowledge management system for teams. Built with Go, Claude Sonnet 4.5, PostgreSQL+pgvector, Redis, and Slack.

✨ Key Differentiator: Unlike traditional chatbots, Knowledge Agent decides on its own when to search, save, or simply respond; no commands are needed.

Features

Core Capabilities

  • 🎭 Personalizable: Give your agent a custom name (Anton, Ghost, Cortex, etc.) via config
  • 🧠 Auto-learning: Analyzes conversations and saves valuable information without explicit commands
  • 🔍 Semantic Search: Find past conversations using natural language (pgvector-powered)
  • 🖼️ Image Analysis: Understands technical diagrams, error screenshots, architecture diagrams
  • 🌐 URL Fetching: Analyzes documentation and web content automatically
  • 🌍 Multilingual: Responds in Spanish, English, or any language the user writes in
  • 📅 Temporal Context: Automatically resolves relative dates in memories ("esta semana", Spanish for "this week", becomes the actual date)

Security & Control

  • 🔐 Permission Control: Restrict who can save to memory (by Slack user or service)
  • 🔑 A2A Authentication: API key-based auth for external agents
  • 🛡️ Two-tier Auth: Internal (Slack Bridge) + External (A2A) authentication

Observability & Integration

  • 📊 LLM Observability: Track costs, tokens, and performance with Langfuse integration
  • 🔌 MCP Support: Extend with Model Context Protocol servers (filesystem, GitHub, etc.)
  • 🐳 Production-Ready: Docker Compose, Kubernetes/Helm support, auto-migrations

Advanced Features

  • 🧹 Response Cleaner: Automatically removes internal narration from responses using Claude Haiku
  • 📦 Context Summarizer: Compresses long conversation threads to prevent context overflow
  • ⚡ Async Sub-Agents: Launch long-running agent tasks (5-15 min) without blocking

Quick Start

Docker Compose (Recommended)

# 1. Clone repository
git clone https://github.com/freepik-company/knowledge-agent.git
cd knowledge-agent

# 2. Configure
cp config-example.yaml config.yaml
# Edit config.yaml:
#   - Add your API keys (Anthropic, Slack, etc.)
#   - Personalize agent_name (optional)

# 3. Start full stack (Postgres, Redis, Ollama, Agent)
make docker-stack

# 4. Pull embedding model (first time only)
docker exec knowledge-agent-ollama ollama pull nomic-embed-text

# 5. Agent is ready!
# Access via Slack or API at http://localhost:8081

Local Development

# 1. Start infrastructure only
make docker-up

# 2. Configure (copy and edit config)
cp config-example.yaml config.yaml
# Edit config.yaml with your API keys

# 3. Run agent locally
make dev

# Socket Mode (no ngrok needed) is default
# For Webhook Mode, see docs/CONFIGURATION.md

Prerequisites:

  • Go 1.24+
  • Docker & Docker Compose
  • Slack workspace with bot configured (Setup Guide)

Usage

@bot how do we deploy?                    # Ask questions
@bot remember deployments are on Tuesdays # Save information
[upload diagram] @bot this is our arch    # Analyze images
@bot check this doc https://...           # Fetch URLs

Architecture

                         knowledge-agent
                               │
         ┌─────────────────────┴─────────────────────┐
         │                                           │
    Port 8080                                   Port 8081
  (Slack Bridge)                            (Agent Server)
         │                                           │
    Slack Events                            /agent/run (auth, ADK)
    Socket/Webhook                         /agent/run_sse (SSE, ADK)
                                           /a2a/invoke (auth)
                                           /.well-known/agent-card.json
                                           /health, /metrics
         │                                           │
         └─────────────────────┬─────────────────────┘
                               │
                         ADK LLM Agent
                      (Claude + SubAgents)
                               │
                    ┌──────────┴──────────┐
                    │                     │
              PostgreSQL+pgvector      Redis
                (memories)           (sessions)

Unified Server Design:

  • Port 8080: Slack Bridge (events, webhooks)
  • Port 8081: Agent Server with ADK REST handler, authentication, rate limiting, and A2A protocol

Commands

Development

make dev              # Run both agent and slack bridge locally
make dev-agent        # Run only agent (no Slack)
make dev-slack        # Run only Slack bridge
make test             # Run tests
make build            # Build binaries

Docker

make docker-stack     # Start full stack (recommended)
make docker-up        # Start infrastructure only
make docker-down      # Stop infrastructure
make docker-rebuild   # Rebuild and restart agent
make docker-health    # Check service health

Database

make db-shell         # PostgreSQL shell
make redis-shell      # Redis shell

Configuration

Agent Personalization

Give your agent a custom identity:

# config.yaml
agent_name: Anton  # Your team's unique name for the agent

Examples: Anton, Ghost, Cortex, Sage, Echo, Lore

The agent will introduce itself with this name and your team can build a unique identity around it.

Slack Modes

Socket Mode (development):

SLACK_MODE=socket
SLACK_APP_TOKEN=xapp-...

Webhook Mode (production):

SLACK_MODE=webhook
SLACK_SIGNING_SECRET=...

See deployments/.env.example for all environment options, or config-example.yaml for YAML configuration.

Observability (Optional)

Track LLM costs, performance, and usage with Langfuse:

# config.yaml
langfuse:
  enabled: true
  public_key: ${LANGFUSE_PUBLIC_KEY}
  secret_key: ${LANGFUSE_SECRET_KEY}
  host: https://cloud.langfuse.com
  input_cost_per_1m: 3.0    # Claude Sonnet 4.5 pricing
  output_cost_per_1m: 15.0
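The two *_cost_per_1m fields are per-million-token rates, so a query's cost is a straight linear combination. A quick sanity check of the arithmetic (the token counts below are made up for illustration):

```shell
# Per-million-token rates from config.yaml (Claude Sonnet 4.5 list pricing)
INPUT_RATE=3.0
OUTPUT_RATE=15.0

# Hypothetical token counts for a single query
INPUT_TOKENS=2000
OUTPUT_TOKENS=500

# cost = input_tokens * input_rate/1M + output_tokens * output_rate/1M
COST=$(awk -v it="$INPUT_TOKENS" -v ot="$OUTPUT_TOKENS" \
          -v ir="$INPUT_RATE" -v orate="$OUTPUT_RATE" \
          'BEGIN { printf "%.4f", it*ir/1e6 + ot*orate/1e6 }')
echo "$COST"   # 2000*3.0/1M + 500*15.0/1M = 0.0135 (USD)
```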

What you get:

  • ✅ Token usage and cost tracking per query
  • ✅ LLM generations with full prompt/response
  • ✅ Tool call tracing (search_memory, save_to_memory, fetch_url)
  • ✅ Per-user cost analytics
  • ✅ Performance monitoring and debugging

SDK: Uses github.com/git-hulk/langfuse-go (community-maintained, feature-complete)

See docs/OPERATIONS.md for complete guide.

Security & Authentication

The Knowledge Agent implements a two-tier authentication model:

Internal Authentication (Slack Bridge ↔ Agent)

Shared secret token for secure communication between the Slack Bridge and Agent services.

# Generate token
INTERNAL_AUTH_TOKEN=$(openssl rand -hex 32)

# Set in both services
export INTERNAL_AUTH_TOKEN=<your-token>

External A2A Authentication (External Services → Agent)

API key authentication for direct API access from external services.

# Knowledge Agent .env
# Format: {"secret_key":{"caller_id":"name","role":"write|read"}}
API_KEYS='{"ka_secret_abc123":{"caller_id":"root-agent","role":"write"},"ka_secret_def456":{"caller_id":"external-service","role":"read"}}'

# Legacy format (assumes role="write"):
# API_KEYS='{"ka_secret_abc123":"root-agent","ka_secret_def456":"external-service"}'
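Since the value is a JSON object packed into an environment variable, quoting mistakes are easy to make. A small sketch that assembles the value and validates it before use (python3 is used here only as a convenient JSON checker; the key names mirror the example above and are illustrative secrets, not real ones):

```shell
# Single quotes keep the inner double quotes intact
API_KEYS='{"ka_secret_abc123":{"caller_id":"root-agent","role":"write"},"ka_secret_def456":{"caller_id":"external-service","role":"read"}}'

# Sanity-check that the value parses as JSON before starting the agent
echo "$API_KEYS" | python3 -m json.tool >/dev/null && echo "API_KEYS ok"
export API_KEYS
```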

JWT Bearer Authentication (API Gateway)

Requests authenticated by an upstream API Gateway can carry a JWT bearer token. The agent extracts the email and groups claims for permission checks.

curl -X POST http://localhost:8081/agent/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIs..." \
  -d '{"appName":"knowledge-agent","userId":"test","newMessage":{"role":"user","parts":[{"text":"How do we deploy?"}]}}'
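To make "email and groups are extracted" concrete, here is what the payload of such a token looks like. The token below is fabricated and unsigned purely for illustration; real tokens arrive signed from your gateway, and the exact claim names your gateway emits may differ:

```shell
# A JWT is three base64url segments: header.payload.signature
b64url() { base64 | tr -d '=\n' | tr '+/' '-_'; }
HEADER=$(printf '{"alg":"none"}' | b64url)
PAYLOAD=$(printf '{"email":"dev@example.com","groups":["platform"]}' | b64url)
JWT="${HEADER}.${PAYLOAD}."

# Decode the payload segment (re-add base64 padding first)
SEG=$(printf '%s' "$JWT" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
printf '%s' "$SEG" | base64 -d
# → {"email":"dev@example.com","groups":["platform"]}
```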

Authentication Modes:

  • Production (recommended): Set both INTERNAL_AUTH_TOKEN and API_KEYS
  • Development (open access): Leave both empty

Roles:

  • write: Full access (can use all tools including save_to_memory)
  • read: Read-only (cannot use save_to_memory)

Example A2A Usage:

# External service accessing agent (use the API key secret)
curl -X POST http://localhost:8081/agent/run \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ka_secret_abc123" \
  -d '{"appName":"knowledge-agent","userId":"external","newMessage":{"role":"user","parts":[{"text":"How do we deploy?"}]}}'

See docs/A2A_TOOLS.md for complete integration guide and docs/SECURITY_GUIDE.md for detailed security configuration.

A2A Protocol (Agent-to-Agent)

Knowledge Agent exposes the standard A2A (Agent-to-Agent) protocol on the unified server (port 8081):

# config.yaml
a2a:
  enabled: true
  agent_url: http://knowledge-agent:8081  # For agent card discovery

Endpoints (all on port 8081):

  • POST /a2a/invoke - A2A protocol invocation (authenticated)
  • GET /.well-known/agent-card.json - Agent discovery (public)

The protocol is compatible with other ADK agents (metrics-agent, logs-agent, etc.).
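The agent card is how peer agents discover this service. As a rough sketch of what a card typically contains per the A2A spec (field values here are illustrative; inspect the live endpoint for the exact document this server emits):

```json
{
  "name": "knowledge-agent",
  "description": "Knowledge management agent with semantic memory",
  "url": "http://knowledge-agent:8081",
  "capabilities": { "streaming": true },
  "skills": [
    { "id": "search_memory", "description": "Semantic search over saved team knowledge" }
  ]
}
```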

Sub-agents - Call other ADK agents as sub-agents:

a2a:
  enabled: true
  sub_agents:
    - name: metrics_agent
      description: "Query Prometheus metrics"
      endpoint: http://metrics-agent:9000

See docs/A2A_TOOLS.md for complete A2A integration guide.

MCP Integration (Model Context Protocol)

Extend the agent with MCP servers for filesystem, GitHub, databases, and more.

# config.yaml
mcp:
  enabled: true
  servers:
    - name: "filesystem"
      transport_type: "command"
      command:
        path: "npx"
        args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]

Automatic npm Package Installation:

  • Docker: Packages auto-detected from config.yaml and installed at startup
  • Local: npm install -g @modelcontextprotocol/server-filesystem
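Other servers follow the same config shape. For example, a hypothetical entry for the GitHub MCP server (the npm package reads GITHUB_PERSONAL_ACCESS_TOKEN from the agent's environment; verify the exact options in docs/MCP_INTEGRATION.md):

```yaml
# config.yaml (illustrative second server, same shape as the filesystem example)
mcp:
  enabled: true
  servers:
    - name: "github"
      transport_type: "command"
      command:
        path: "npx"
        args: ["-y", "@modelcontextprotocol/server-github"]
```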

See docs/MCP_INTEGRATION.md for complete guide and examples.

Documentation

Getting Started

  • 📖 Usage Guide - End-user guide for interacting with the agent
  • ⚙️ Configuration - Complete configuration reference
  • 🚀 Deployment - Docker, Kubernetes, and production setup

Advanced

Development

Contributing

We welcome contributions! Please see our Contributing Guide (coming soon).

Key areas for contribution:

  • 🐛 Bug fixes and improvements
  • 📚 Documentation enhancements
  • 🔌 New MCP server integrations
  • 🌍 Translations
  • ✨ Feature requests (open an issue first)

Support

License

MIT License - Feel free to use in your projects!

Acknowledgments

Made with ❤️ by the Freepik Technology Team
