Pal is a personal context-agent that learns how you work.
It navigates a set of heterogeneous sources to gather context:
- A local file system with preferences, voice guidelines, and templates.
- Tools like Gmail, Google Calendar, and Slack.
- A PostgreSQL database for structured data (notes, people, projects, decisions).
Each source keeps its native query interface. Databases get queried with SQL. Email gets queried by sender and date. Files get navigated by directory structure. A learning loop ties it together: every interaction improves the next one.
What makes Pal different as a context agent is its execution loop, designed for routing and navigation:
- Classify intent from the input message.
- Recall metadata and routing patterns from knowledge and learnings.
- Read from the right sources, in the order informed by learnings.
- Act through tool calls.
- Learn so the next request is better.
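The five steps can be sketched in Python. Everything below is an illustrative stand-in, not Pal's actual Agno implementation (which lives in pal/agent.py):

```python
# Illustrative sketch of the classify → recall → read → act → learn loop.
# All names and routing rules here are hypothetical.

def classify_intent(message: str) -> str:
    """Stand-in classifier; the real one is model-driven."""
    return "capture" if message.lower().startswith("save") else "retrieve"

def run_loop(message: str, learnings: dict) -> dict:
    intent = classify_intent(message)                # 1. classify intent
    sources = learnings.get(intent, ["sql"])         # 2. recall routing patterns
    context = [f"read:{src}" for src in sources]     # 3. read sources in learned order
    answer = {"intent": intent, "context": context}  # 4. act (tool calls, stubbed here)
    learnings[intent] = sources                      # 5. learn for next time
    return answer

learnings = {"retrieve": ["sql", "files", "knowledge"]}
run_loop("What do I know about Sarah?", learnings)
```

The point of the sketch is the ordering: routing patterns are recalled *before* any source is read, and the loop writes back to the same store it read from.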
Built with Agno.
```shell
# Clone the repo
git clone https://github.com/agno-agi/pal
cd pal

# Add OPENAI_API_KEY
cp example.env .env
# Edit .env and add your key

# Start the application
docker compose up -d --build

# Load context metadata into the knowledge base
docker compose exec pal-api python context/load_context.py

# Optional: preview what will be loaded without writing
docker compose exec pal-api python context/load_context.py --dry-run
```

Confirm Pal is running at http://localhost:8000/docs.
- Open os.agno.com and log in
- Add OS → Local → http://localhost:8000
- Click "Connect"
Pal starts with SQL + Context Files + Exa. Gmail, Google Calendar, and Slack are pre-wired and activate when you add the relevant configuration.
Gmail + Google Calendar
Google auth is generally a pain, but you only need to do these steps once. The goal is to get three values: GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, and GOOGLE_PROJECT_ID.
- Go to console.cloud.google.com
- Click the project dropdown (top-left) → New Project
- Give the project a name (e.g. `agents`) and click Create
- Copy the Project ID from the project dashboard and save it as `GOOGLE_PROJECT_ID` in your `.env`
- Go to APIs & Services → Library
- Search for and enable Gmail API
- Search for and enable Google Calendar API
- Go to APIs & Services → OAuth consent screen
- Click Get started (this opens the Google Auth Platform wizard)
- App Information: Enter an app name (e.g. `pal`) and your support email, click Next
- Audience: Select External, click Next
- Contact Information: Enter your email, click Next
- Finish: Click Create
- In the left sidebar, go to Audience and add your Google email as a test user
- Go to APIs & Services → Credentials
- Click Create Credentials → OAuth client ID
- Application type: Desktop app
- Name it (e.g. `pal-desktop`) and click Create
- Copy Client ID → `GOOGLE_CLIENT_ID`
- Copy Client secret → `GOOGLE_CLIENT_SECRET`
Add the three values to your `.env`:

```shell
GOOGLE_CLIENT_ID="your-google-client-id"
GOOGLE_CLIENT_SECRET="your-google-client-secret"
GOOGLE_PROJECT_ID="your-google-project-id"
```

Run the OAuth script on your local machine:

```shell
set -a; source .env; set +a
python scripts/google_auth.py
```

This opens a browser for Google consent and saves `token.json` to the project root. The script uses `prompt='consent'` to ensure a refresh token is always returned, even on re-authorization.
Rebuild and restart the application:

```shell
docker compose up -d --build
```

Gmail + Google Calendar are now configured. A few things to know:
- Gmail is draft-only. Send tools are disabled at the code level. Thread reading, draft lifecycle (create, list, update), and label management are all enabled.
- Calendar events with external attendees require user confirmation before creation.
Slack
Slack gives Pal two capabilities: receiving messages from users in Slack threads, and proactively posting to channels (e.g. scheduled task results to #pal-updates).
- Go to api.slack.com/apps and click Create New App → From scratch
- Name it (e.g.
Pal) and select your workspace
- Go to OAuth & Permissions in the sidebar
- Under Bot Token Scopes, add:
  - `app_mentions:read` — respond when mentioned
  - `chat:write` — post messages
  - `chat:write.public` — post to public channels
  - `im:history` — read DM history
  - `im:read` — view DMs
  - `im:write` — send DMs
  - `channels:read` — list public channels
Adding scopes for Slack bots is excruciatingly painful.
- Click Install to Workspace at the top of the OAuth & Permissions page
- Authorize the requested permissions
- Copy the Bot User OAuth Token (`xoxb-...`) → `SLACK_TOKEN` in the `.env` file
- Go to Basic Information in the sidebar
- Under App Credentials, copy Signing Secret → `SLACK_SIGNING_SECRET`
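The signing secret is what lets Pal confirm that incoming event requests really come from Slack. The v0 signature scheme below is Slack's documented standard; the helper itself is a sketch, not Pal's actual verification code:

```python
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: bytes, signature: str) -> bool:
    """Check Slack's v0 request signature: HMAC-SHA256 over 'v0:timestamp:body'."""
    if abs(time.time() - int(timestamp)) > 60 * 5:
        return False  # reject stale requests (replay protection)
    base = b"v0:" + timestamp.encode() + b":" + body
    expected = "v0=" + hmac.new(signing_secret.encode(), base,
                                hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Slack sends the timestamp in the `X-Slack-Request-Timestamp` header and the signature in `X-Slack-Signature`.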
Slack needs a public URL to send events to Pal. In production, you'll use your deployed URL, but for local development, use ngrok:
- Install and configure ngrok
- Start an endpoint at localhost:8000, where your local AgentOS is running via Docker:

```shell
ngrok http 8000
```

Copy the `https://` URL that ngrok provides (e.g. https://abc123.ngrok-free.app).
- Go to Event Subscriptions in the sidebar and toggle Enable Events
- Set the Request URL to your ngrok URL + `/slack/events`, e.g. `https://your-ngrok-url.ngrok-free.dev/slack/events`
- Wait for Slack to verify the endpoint (Pal must be running)
- Under Subscribe to bot events, add:
  - `app_mention`
  - `message.im`
  - `message.channels`
  - `message.groups`
- Click Save Changes
- Go to App Home in the sidebar
- Under Show Tabs, enable Messages Tab
- Check Allow users to send Slash commands and messages from the messages tab
```shell
SLACK_TOKEN="xoxb-your-bot-token"
SLACK_SIGNING_SECRET="your-signing-secret"
```

Rebuild and restart the application:

```shell
docker compose up -d --build
```

After changing scopes or event subscriptions, go to Install App and click Reinstall to Workspace to apply the new permissions.
Thread timestamps map to session IDs, so each Slack thread gets its own conversation context.
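One plausible way to derive a session ID from a Slack event — the exact scheme Pal uses isn't shown here, so treat the format as an assumption:

```python
def session_id_for_event(event: dict) -> str:
    """Map a Slack message event to a stable per-thread session ID.

    Replies carry 'thread_ts'; top-level messages only have 'ts', which
    becomes the thread timestamp once anyone replies to them. Using one
    or the other keeps a whole thread in a single conversation context.
    """
    thread_ts = event.get("thread_ts") or event["ts"]
    return f"slack-{event['channel']}-{thread_ts}"
```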
Exa Web Research
Available by default as it's free via their MCP server. Optionally add an API key for authenticated access:
```shell
EXA_API_KEY=your-exa-key
```

When you point Claude Code at a codebase, it navigates. It reads the directory structure, follows imports, checks dependencies, builds a map of where things live. It gets more accurate the more it explores.
Pal applies this pattern to personal and work data. Email is queried by sender and date. A database is queried with SQL. A calendar is queried with time ranges. Files are navigated by structure. Each source is queried on its own terms, and a learning loop improves retrieval with every interaction.
The industry has gone through three generations of context engineering:
Generation 1: Semantic RAG (2023). Embed your documents, store in a vector database, search at query time. RAG gave LLMs access to large knowledge bases, and the developer adoption was extraordinary. The limitation: everything gets flattened into one interface. A SQL table should be queried with SQL. A calendar should be queried with time ranges. A file system should be navigated by structure.
Generation 2: Agentic RAG (2024). Improvements in tool calling made agents reliable enough to decide when to search, run multiple retrievals, and act on results. The underlying architecture remains the bottleneck: agents still search a vector store, still flatten sources, still have no memory of what worked last time.
Generation 3: Agentic Navigation (2026). The agent navigates a context graph of heterogeneous sources, each queried on its own terms. It builds a map of where things live, learns which retrieval strategies work, and improves with every interaction. Navigation over search as the core retrieval primitive.
Pal is a Generation 3 context agent.
Every interaction follows the same execution loop:
- Classify intent from the user request.
- Recall source metadata and routing patterns from knowledge and learnings.
- Read from the right sources, in the order informed by learnings.
- Act through tool calls.
- Learn so the next request is better.
Five systems make up Pal's context graph:
- Knowledge (`pal_knowledge`): A metadata index of where things live: file manifests, table schemas, source capabilities, cross-source discoveries. This is a routing layer that tells Pal where to look. In multi-user setups, knowledge is shared across users.
- Learnings (`pal_learnings`): Operational memory of what works: which retrieval strategies succeeded, recurring user patterns, and explicit user corrections. Corrections always take priority. Learnings are namespaced per user.
- Files (`context/`): User-authored context files read on demand. Voice guidelines, preferences, templates, and references that shape Pal's behavior. Pal also writes back here: meeting notes, exports, generated documents.
- SQL (`pal_*` tables): Structured data. Notes, people, projects, and decisions. Pal owns the schema and creates tables on demand. All queries are scoped to the active user, a soft boundary managed by Pal.
- Tools (Gmail, Calendar, Slack, Exa): External systems queried through native interfaces. Email by sender and date. Calendar by time range. Slack by channel and thread. Web by search. Each source is queried on its own terms.
The context directory (PAL_CONTEXT_DIR, default ./context) is Pal's primary document store. Files are searched and read on demand, so edits are immediately reflected without reindexing.
User to Pal: Place voice guidelines, preferences, templates, and references here. Pal reads them to shape its behavior.
Pal to User: Pal writes summaries, exports, and generated documents back here.
```
context/
├── about-me.md           # User background, goals, active projects
├── preferences.md        # Working-style config, file conventions, scheduled tasks
├── voice/                # Writing tone guides per channel
│   ├── email.md
│   ├── linkedin-post.md
│   ├── x-post.md
│   ├── slack-message.md
│   └── document.md
├── templates/            # Document scaffolds Pal fills per use
│   ├── meeting-notes.md
│   ├── weekly-review.md
│   └── project-brief.md
├── meetings/             # Saved meeting notes and weekly reviews
└── projects/             # Project briefs and docs
```
File deletion is disabled at the code level.
Load file metadata to bootstrap the knowledge base:
```shell
docker compose exec pal-api python context/load_context.py
docker compose exec pal-api python context/load_context.py --recreate   # clear knowledge index and reload
docker compose exec pal-api python context/load_context.py --dry-run    # preview without writing
```

This writes compact `File:` metadata entries (intent tags, size, path) into `pal_knowledge`. File contents are still read on demand by FileTools.
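A rough sketch of what a loader like this might do. The entry shape and the intent-tag rule are guesses for illustration, not `load_context.py`'s actual output:

```python
from pathlib import Path

def build_file_entries(context_dir: str) -> list[dict]:
    """Walk the context dir and emit a compact metadata entry per markdown file."""
    entries = []
    for path in sorted(Path(context_dir).rglob("*.md")):
        rel = path.relative_to(context_dir)
        entries.append({
            "kind": "File",
            "path": str(rel),
            "size": path.stat().st_size,
            # crude intent tag derived from the top-level folder name
            "intent": rel.parts[0] if len(rel.parts) > 1 else "root",
        })
    return entries
```

Because only metadata is indexed, edits to file contents are visible immediately; re-running the loader is only needed when files are added, moved, or removed.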
Intent classification determines which sources to check and at what depth:
| Intent | Sources | Behavior |
|---|---|---|
| `capture` | SQL | Insert, confirm, done |
| `retrieve` | SQL + Files + Knowledge | Query, present results |
| `connect` | SQL + Files + Gmail + Calendar | Multi-source synthesis |
| `research` | Exa (+ SQL to save) | Search, summarize, optionally save |
| `file_read` / `file_write` | Files | Read or write context directory |
| `email_read` / `email_draft` | Gmail + Files (voice) | Search/read or draft |
| `calendar_read` / `calendar_write` | Calendar | View schedule or create events |
| `organize` | SQL | Propose restructuring, execute on confirmation |
| `meta` | Knowledge + Learnings | Questions about Pal itself |
Requests can have multiple intents. "Draft a reply to Sarah's email about Project X" = email_read + retrieve + email_draft.
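Multi-intent classification can be sketched as a multi-label tagger. The keyword rules below are toys; Pal's real classification is model-driven:

```python
def classify_intents(message: str) -> list[str]:
    """Toy multi-label intent tagger; keyword rules are illustrative only."""
    rules = [
        ("email_read", ("email", "inbox")),
        ("retrieve", ("about", "what do i know")),
        ("email_draft", ("draft", "reply")),
        ("calendar_read", ("calendar", "schedule")),
    ]
    text = message.lower()
    return [intent for intent, keywords in rules
            if any(k in text for k in keywords)]

classify_intents("Draft a reply to Sarah's email about Project X")
```

Returning a list rather than a single label is what lets one request fan out to several source-routing strategies at once.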
Save a note: Met with Sarah Chen from Acme Corp. She's interested in a partnership.
What do I know about Sarah?
Check my latest emails
What's on my calendar this week?
Draft an X post in my voice about AI productivity
Save a summary of today's meeting to meeting-notes.md
What do I know about Project X?
Research web trends on AI productivity
Pal comes with five automated tasks on a cron schedule (all times America/New_York):
| Task | Schedule | Description |
|---|---|---|
| Context Refresh | Daily 8 AM | Re-indexes context files into the knowledge map |
| Daily Briefing | Weekdays 8 AM | Morning briefing — calendar, emails, priorities |
| Inbox Digest | Weekdays 12 PM | Midday email digest (requires Gmail) |
| Learning Summary | Monday 10 AM | Weekly summary of the learning system |
| Weekly Review | Friday 5 PM | End-of-week review draft |
Each task can post its results to Slack (requires SLACK_TOKEN).
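The schedule column corresponds to standard five-field cron expressions (minute, hour, day-of-month, month, day-of-week), evaluated in America/New_York. The mapping below is an illustration, not the project's actual task definitions:

```python
# Assumed cron equivalents of the schedule table above (illustrative names).
SCHEDULES = {
    "context_refresh":  "0 8 * * *",     # daily 8 AM
    "daily_briefing":   "0 8 * * 1-5",   # weekdays 8 AM
    "inbox_digest":     "0 12 * * 1-5",  # weekdays 12 PM
    "learning_summary": "0 10 * * 1",    # Monday 10 AM
    "weekly_review":    "0 17 * * 5",    # Friday 5 PM
}
```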
```
AgentOS (app/main.py) [scheduler=True, tracing=True]
├── FastAPI / Uvicorn
├── Slack Interface (optional)
└── Pal Agent (pal/agent.py)
    ├─ Model: GPT-5.4
    ├─ SQLTools → PostgreSQL (pal_* tables)
    ├─ FileTools → context/
    ├─ MCPTools → Exa web search
    ├─ update_knowledge → custom tool (pal/tools.py)
    ├─ SlackTools → Post to Slack channels (requires SLACK_TOKEN)
    ├─ GmailTools → Gmail (requires Google credentials)
    └─ CalendarTools → Google Calendar (requires Google credentials)

Knowledge: pal_knowledge (metadata map — where things are)
Learnings: pal_learnings (retrieval patterns — how to navigate)
```
| Source | Purpose | Availability |
|---|---|---|
| SQL (`pal_*`) | Structured notes, people, projects, decisions | Always |
| Files (`context/`) | Voice guides, templates, preferences, references, exports | Always |
| Exa | Web research | Always (API key optional for auth) |
| Slack | Post messages to channels (e.g. scheduled task results to #pal-updates) | Requires `SLACK_TOKEN` |
| Gmail | Search, read, draft, label management | Requires all 3 Google credentials |
| Calendar | Event lookup, creation, updates | Requires all 3 Google credentials |
| Layer | What goes there |
|---|---|
| PostgreSQL | `pal_*` user tables, `pal_knowledge` + `pal_knowledge_contents`, `pal_learnings` + `pal_learnings_contents`, `pal_contents` |
| `context/` | Voice guides, preferences, templates, references, generated exports |
| Variable | Required | Default | Purpose |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | — | GPT-5.4 |
| `EXA_API_KEY` | No | `""` | Exa web search auth (tool loads regardless) |
| `GOOGLE_CLIENT_ID` | No | `""` | Gmail + Calendar OAuth (all 3 required) |
| `GOOGLE_CLIENT_SECRET` | No | `""` | Gmail + Calendar OAuth (all 3 required) |
| `GOOGLE_PROJECT_ID` | No | `""` | Gmail + Calendar OAuth (all 3 required) |
| `PAL_CONTEXT_DIR` | No | `./context` | Context directory path |
| `SLACK_TOKEN` | No | `""` | Slack bot token (interface + tools) |
| `SLACK_SIGNING_SECRET` | No | `""` | Slack signing secret (interface only) |
| `DB_HOST` | No | `localhost` | PostgreSQL host |
| `DB_PORT` | No | `5432` | PostgreSQL port |
| `DB_USER` | No | `ai` | PostgreSQL user |
| `DB_PASS` | No | `ai` | PostgreSQL password |
| `DB_DATABASE` | No | `ai` | PostgreSQL database |
| `PORT` | No | `8000` | API port |
| `RUNTIME_ENV` | No | `prd` | `dev` enables hot reload |
Context prompts stop making sense: Rerun python context/load_context.py to refresh the knowledge map.
Google token expired: The app defaults to Google's "Testing" mode, which expires tokens every 7 days. Re-run python scripts/google_auth.py to re-authorize. Publishing the app through Google's verification process removes this limit.
Docker config issues: Run docker compose config and verify optional vars have fallback defaults.
PAL_CONTEXT_DIR not found: Ensure the directory is mounted to ./context in your compose file.