# Docketeer

Build the AI personal assistant you need with Docket.

## What is docketeer?

Docketeer is a toolkit for building the autonomous AI agent you want without bringing in dozens or hundreds of modules you don't. Instead of a huge and sprawling monolithic system, Docketeer is small, opinionated, and designed to be extended through plugins.

The core of Docketeer is an agentic loop, a Docket for scheduling autonomous work, and a small set of tools for managing memory in its workspace. The inference backend is pluggable: bring your own LLM provider. Any other functionality can be added through simple Python plugins that register via standard Python entry points.

Docketeer is in early, active development. If you're feeling adventurous, please jump in and send PRs! Otherwise, follow along until things are a little more baked.

## The philosophy behind Docketeer's autonomy

Our frontier models don't need much help at all to behave autonomously; they just need an execution model to support it. All we're doing here is giving the agent a Docket of its own, on which it can schedule its own future work. As of today, the agent can use a tool to schedule a `nudge` Docket task to prompt itself at any future time.

The docketeer-autonomy plugin builds on this with recurring reverie and consolidation cycles that give the agent opportunities throughout the day to evaluate the world, reflect on recent events, schedule new tasks, and update its own memory and knowledge base. It also adds journaling, per-person profiles, and room context; install it for the full "inner life" experience, or leave it out for a plain chatbot.

Most importantly, the agent can direct itself by updating markdown files in its own workspace. This self-prompting and the ability to self-improve its prompts are the heart of Docketeer's autonomy.

## Standards

Yes, Docketeer is developed entirely with AI coding tools. Yes, every line of Docketeer has been reviewed by me, the author. Yes, 100% test coverage is required and enforced.

## Security

Obviously, there are inherent risks to running an autonomous agent. Docketeer does not attempt to mitigate those risks. By using only well-aligned and intelligent models, I'm hoping to avoid the most catastrophic outcomes that could come from letting an agent loose on your network. However, the largest risks are still likely to come from nefarious human actors who are eager to target these new types of autonomous AIs.

Docketeer's architecture does not require listening to the network at all. There is no web interface and no API. Docketeer starts up, connects to Redis, connects to the chat system, and only responds to prompts that come from you and the people you've allowed to interact with it via chat or from itself via future scheduled tasks.

Prompt injection will remain a risk with any agent that can reach out to the internet for information.

## Architecture

```mermaid
graph TD
    People(["👥 People"])
    People <--> ChatClient

    subgraph chat ["🔌 docketeer.chat"]
        ChatClient["Rocket.Chat, TUI, ..."]
    end

    ChatClient <--> Brain

    subgraph agent ["Docketeer Agent"]
        Brain["🧠 Brain / agentic loop"]

        subgraph inference ["🔌 docketeer.inference"]
            API["Anthropic, DeepInfra, ..."]
        end
        Brain <-- "reasoning" --> API
        Brain <-- "memory" --> Workspace["📂 Workspace"]
        Brain <-- "scheduling" --> Docket["⏰ Docket"]

        Docket -- triggers --> CoreTasks["nudge"]
        CoreTasks --> Brain

        subgraph prompt ["🔌 docketeer.prompt"]
            Prompts["agentskills, mcp, ..."]
        end
        Prompts -. system prompt .-> Brain

        Brain -- tool calls --> Registry
        subgraph tools ["🔌 docketeer.tools"]
            Registry["Tool Registry"]
            CoreTools["workspace · chat · docket"]
            PluginTools["web, monty, mcp, ..."]
        end
        Registry --> CoreTools
        Registry --> PluginTools

        Docket -- triggers --> PluginTasks
        subgraph tasks ["🔌 docketeer.tasks"]
            PluginTasks["git backup, reverie, consolidation, ..."]
        end

        subgraph bands ["🔌 docketeer.bands"]
            Bands["wicket, atproto, ..."]
        end
        Bands -- signals --> Brain

        subgraph hooks ["🔌 docketeer.hooks"]
            Hooks["tunings, tasks, mcp, ..."]
        end
        Workspace -- file ops --> Hooks

        subgraph executor ["🔌 docketeer.executor"]
            Sandbox["bubblewrap, subprocess, ..."]
        end
        PluginTools --> Sandbox

        subgraph vault ["🔌 docketeer.vault"]
            Secrets["1password, ..."]
        end
        PluginTools --> Secrets
    end

    Sandbox --> Host["🖥️ Host System"]

    classDef plugin fill:#f0f4ff,stroke:#4a6fa5
    classDef core fill:#fff4e6,stroke:#c77b2a
    class API,ChatClient,Prompts,PluginTools,Sandbox,Secrets,PluginTasks,Bands plugin
    class Brain core
```

## Lines

Everything the agent does happens on a line: a named, persistent context of reasoning with its own conversation history. Chat conversations, scheduled tasks, background research, and realtime event streams each run on their own lines. Lines are just names: a DM with chris uses the line `chris`, a channel uses `general`, reverie runs on `reverie`. A few more examples:

  • The agent schedules a task to research an API β€” it runs on the line api-research and builds up context across multiple tool-use turns without cluttering any chat.
  • A tuning watches GitHub webhooks for PRs across several repos β€” signals arrive on the line opensource, where the agent has ongoing context about each project.
  • The agent notices a thread worth following up on tomorrow β€” it schedules a nudge on the line chris so the reply lands in the same conversation.

All lines share the same workspace. Each line can have a context file at `lines/{name}.md` whose body gets injected as system context whenever that line is active, whether the message comes from a chat conversation, a scheduled task, or a realtime signal. This gives the agent standing instructions for that context ("only flag important emails", "notify Chris about external contributors") that it can update itself as it learns.
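To make this concrete, a line context file might look like the following. The file body here is purely illustrative, not a file Docketeer ships:

```markdown
<!-- lines/opensource.md -->

Standing instructions for open-source project signals:

- Summarize new PRs from external contributors and notify Chris.
- Log routine CI results silently; only flag failures on the main branch.
- When a thread needs follow-up, schedule a nudge on this line.
```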

## Brain

The Brain is the agentic loop at the center of Docketeer. It receives messages on a line, builds a system prompt, manages per-line conversation history, and runs a multi-turn tool-use loop against the configured inference backend. Each turn sends the conversation, system prompt blocks, and available tool definitions to the LLM and gets back text and/or tool calls, looping until the model responds with text or hits the tool-round limit. Everything else in the system either feeds into the Brain or is called by it.

## Workspace

The agent's persistent filesystem is its long-term memory. Plugins can populate it with whatever files they need; for example, the docketeer-autonomy plugin writes `SOUL.md`, a daily journal, and per-person profiles here. Workspace tools let the agent read and write its own files.

## Docket

A Redis-backed task scheduler that gives the agent autonomy. The built-in `nudge` task lets the agent schedule future prompts for itself; each scheduled task runs on a line with persistent conversation history. If the task specifies a `line:` and that line has a context file, the line's instructions are injected as system context. Task plugins (like docketeer-autonomy) can add their own recurring tasks.
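As a toy illustration of the scheduling idea (this is not Docket's API; Docket is Redis-backed), a nudge boils down to a prompt, a line, and a due time:

```python
# Toy in-memory illustration of "schedule a future self-prompt on a line".
# ToyDocket and Nudge are invented for this sketch.
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass(order=True)
class Nudge:
    due: datetime
    line: str = field(compare=False)
    prompt: str = field(compare=False)


class ToyDocket:
    def __init__(self):
        self._queue: list[Nudge] = []

    def schedule(self, line: str, prompt: str, delay: timedelta) -> None:
        heapq.heappush(self._queue, Nudge(datetime.now() + delay, line, prompt))

    def due_now(self) -> list[Nudge]:
        """Pop every nudge whose due time has arrived, in order."""
        ready = []
        while self._queue and self._queue[0].due <= datetime.now():
            ready.append(heapq.heappop(self._queue))
        return ready
```

Because each nudge names a line, the prompt is delivered into that line's existing conversation history rather than starting from scratch.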

## Antenna

The realtime event feed system. Bands are persistent streaming connections to external services: docketeer-wicket connects to an SSE endpoint, docketeer-atproto connects to the Bluesky Jetstream WebSocket relay. Each band produces signals, structured events with a topic, timestamp, and payload.

Tunings tell the Antenna what to listen for and where to send it. Each tuning routes signals to a line; if that line has a context file at `lines/{name}.md`, the line's instructions are injected as system context alongside any notes in the tuning file's body. This means multiple tunings can share a line and its behavioral instructions. For example, several GitHub repo tunings might all deliver to the `opensource` line, which has instructions about when to notify the user vs. log silently.

The agent can set up and tear down tunings at runtime by writing files to `tunings/`, with no restarts needed. Line context files are read fresh on every signal delivery, so the agent can refine its own instructions over time.
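The routing described above can be sketched as follows; `Signal` and `Tuning` here are illustrative shapes, not Docketeer's actual types:

```python
# Sketch of signal routing: bands emit signals, tunings map them onto lines.
# The topic-prefix matching rule is an assumption made for this example.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Signal:
    topic: str
    timestamp: datetime
    payload: dict


@dataclass
class Tuning:
    topic_prefix: str  # which signals this tuning listens for
    line: str          # the line signals are delivered to


def route(signal: Signal, tunings: list[Tuning]) -> list[str]:
    """Return the lines a signal should be delivered to."""
    return [t.line for t in tunings if signal.topic.startswith(t.topic_prefix)]
```

Since routing only resolves a line name, several tunings can converge on one line and inherit its standing instructions.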

## Vault

The agent often needs secrets (API keys, tokens, passwords) to do useful work, but those values should never appear in the conversation context, where they'd be visible in logs or could leak through tool results. The vault plugin gives the agent five tools (`list_secrets`, `store_secret`, `generate_secret`, `delete_secret`, `capture_secret`) that let it manage secrets by name without ever seeing the raw values. When the agent needs a secret inside a sandboxed command, it passes a `secret_env` mapping on `run` or `shell` and the executor resolves the names through the vault at the last moment, injecting values as environment variables that only the child process can see.
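The last-moment resolution might be sketched like this, with a hypothetical vault interface; the point is that secret values exist only in the child process's environment, never in the conversation:

```python
# Sketch of secret_env resolution: the agent supplies secret *names*;
# values are looked up just before the child process starts. The
# `vault.resolve` interface is an assumption for this example.
import subprocess


def run_with_secrets(command: list[str], secret_env: dict[str, str], vault) -> str:
    """secret_env maps env var names to secret names,
    e.g. {"GH_TOKEN": "github-pat"}."""
    # Resolve names to values at the last possible moment.
    env = {var: vault.resolve(name) for var, name in secret_env.items()}
    result = subprocess.run(command, env=env, capture_output=True, text=True)
    return result.stdout
```

Nothing in the tool call or its result contains the raw value, so transcripts and logs stay clean even when the command itself uses the secret.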

## Plugin extension points

All plugins are discovered via standard Python entry points. Single-plugin groups (`docketeer.inference`, `docketeer.chat`, `docketeer.executor`, `docketeer.vault`, `docketeer.search`) auto-select when only one is installed, or can be chosen with an environment variable when several are available. Multi-plugin groups (`docketeer.context`, `docketeer.tools`, `docketeer.prompt`, `docketeer.tasks`, `docketeer.bands`, `docketeer.hooks`) load everything they find.

| Entry point group | Cardinality | Purpose |
| --- | --- | --- |
| `docketeer.inference` | single | Inference backend: which LLM provider powers the agent |
| `docketeer.chat` | single | Chat backend: how the agent talks to people |
| `docketeer.executor` | single, optional | Command executor: sandboxed process execution on the host |
| `docketeer.vault` | single, optional | Secrets vault: store and resolve secrets without exposing values to the agent |
| `docketeer.search` | single, optional | Search index: semantic search over workspace files |
| `docketeer.context` | multiple | Context providers: inject per-user and per-line context into conversations |
| `docketeer.tools` | multiple | Tool plugins: capabilities the agent can use during its agentic loop |
| `docketeer.prompt` | multiple | Prompt providers: contribute blocks to the system prompt |
| `docketeer.tasks` | multiple | Task plugins: background work run by the Docket scheduler |
| `docketeer.bands` | multiple | Band plugins: realtime event stream sources (SSE, WebSocket, etc.) |
| `docketeer.hooks` | multiple | Workspace hooks: react to file operations in special directories (validate, commit, delete) |

## Packages

Docketeer's git repository is a uv workspace of packages endorsed by the author, but they don't represent everything your Docketeer agent can be! You can contribute new plugin implementations by PR, or build your own and install them alongside Docketeer to assemble your perfect agent. Each package below is published to PyPI.

| Package | Description |
| --- | --- |
| `docketeer` | Core agent engine: workspace, scheduling, plugin discovery |
| `docketeer-1password` | 1Password secret vault: store, generate, and resolve secrets |
| `docketeer-agentskills` | Agent Skills: install, manage, and use packaged agent expertise |
| `docketeer-anthropic` | Anthropic inference backend |
| `docketeer-atproto` | ATProto Jetstream band: realtime Bluesky events via WebSocket |
| `docketeer-autonomy` | Autonomous personality: reverie, consolidation, journaling, profiles |
| `docketeer-bubblewrap` | Sandboxed command execution via bubblewrap |
| `docketeer-deepinfra` | DeepInfra inference backend |
| `docketeer-git` | Automatic git-backed workspace backups |
| `docketeer-imap` | IMAP IDLE band: push-style email notifications from any IMAP server |
| `docketeer-mcp` | MCP server support: connect to any MCP-compatible server |
| `docketeer-monty` | Sandboxed Python execution via Monty |
| `docketeer-rocketchat` | Rocket.Chat chat backend |
| `docketeer-search` | Semantic workspace search via fastembed |
| `docketeer-slack` | Slack chat backend |
| `docketeer-subprocess` | Unsandboxed command execution for containers and non-Linux hosts |
| `docketeer-tui` | Terminal chat backend |
| `docketeer-web` | Web search, HTTP requests, file downloads |
| `docketeer-wicket` | Wicket SSE band: realtime events via Server-Sent Events |

Each package's README lists its tools and configuration variables.

## Getting started

```shell
git clone https://github.com/chrisguidry/docketeer.git
cd docketeer
uv sync
```

Start Redis (used by Docket for task scheduling):

```shell
docker compose up -d
```

Set your inference backend's API key (and any plugin-specific variables; see each package's README):

```shell
# For the Anthropic backend:
export DOCKETEER_ANTHROPIC_API_KEY="sk-ant-..."

# For the DeepInfra backend:
export DOCKETEER_DEEPINFRA_API_KEY="..."
```

Docketeer uses `DOCKETEER_CHAT_MODEL` (defaulting to `balanced`) to select a model tier for conversations. Each backend maps tier names (`smart`, `balanced`, `fast`) to its own model IDs, and you can override per backend with variables like `DOCKETEER_ANTHROPIC_MODEL_SMART` or `DOCKETEER_DEEPINFRA_MODEL_BALANCED`. Plugins may add their own model tier variables; see each package's README for details.

Run the agent:

```shell
docketeer start
```
