Hooman is a hackable AI agent toolkit for local workflows. It is built with TypeScript, Strands Agents SDK, and Ink.
It gives you a practical toolkit to build and run agent workflows:
- a one-shot `exec` command for single prompts
- a stateful `chat` interface for iterative sessions
- a `daemon` command for channel-driven MCP automation
- an Ink-powered `configure` workflow for app config, prompts, MCP servers, and installed skills
- an `acp` command for running Hooman as an Agent Client Protocol (ACP) agent over stdio
Looking for a focused web UI for chat and agent configuration, with a lighter surface on top of the same stack? See Zero — README.
- Multiple LLM providers: `anthropic`, `bedrock`, `google`, `groq`, `moonshot`, `ollama`, `openai`, `xai`
- Local configuration under `~/.hooman`
- Optional web search tool with provider selection (`brave`, `exa`, `firecrawl`, `serper`, or `tavily`)
- MCP server support via `stdio`, `streamable-http`, and `sse`
- MCP server `instructions` support: server-provided instructions are appended to the agent system prompt
- MCP channel notifications: `hooman daemon` subscribes to servers that advertise `hooman/channel`
- Skill discovery from local `~/.hooman/skills` folders
- Bundled prompt harness toggles (`behaviour`, `communication`, `execution`, `guardrails`); coding guidance ships as the built-in `hooman-coding` skill
- Built-in research sub-agent runner (`research`) with configurable concurrency
- Toolkit-oriented architecture with configurable tools, prompts, memory, and transports
- Interactive terminal UI for chat and configuration
- Node.js `>= 24`
- npm for package installs and JavaScript tooling
- Provider credentials or a local model runtime, depending on the LLM you choose
Fastest way to get started without cloning the repo:
```bash
npx hoomanjs configure
npx hoomanjs chat

# or install globally
npm i -g hoomanjs
```

Or with Bun:

```bash
bunx hoomanjs configure
bunx hoomanjs chat
```

Recommended first run:
- Run `hooman configure` to choose your LLM provider and model.
- Start chatting with `hooman chat`.
- Use `hooman exec "your prompt"` for one-off tasks.
For the best experience, set up both:
- MCP servers for on-demand tools in `chat`/`exec` (task APIs, messaging, schedulers, etc.).
- MCP channels for event-driven automation with `hooman daemon` (notifications become agent prompts).
Suggested MCP servers from this ecosystem:
- `cronmcp` - lets Hooman schedule recurring prompts and automations, so routine checks and follow-ups run on time.
- `jiraxmcp` - gives Hooman direct Jira Cloud access to search issues, update tickets, and help drive sprint workflows.
- `slackxmcp` - connects Hooman to Slack so it can read channel context, draft updates, and post actions where your team already works.
- `tgfmcp` - enables Telegram bot workflows, making it easy to route notifications and respond from agent-driven chats.
- `wappmcp` - brings WhatsApp Web messaging into Hooman for customer or team communication automations.
For production deployments, still review permissions and use least-privilege credentials/tokens for each integration.
Install dependencies:

```bash
npm install
```

Run locally:

```bash
npm run dev -- --help
```

Or build and run the compiled CLI:

```bash
npm run build
node dist/cli.js --help
```

Link the CLI locally:

```bash
npm link
hooman --help
```

Run a single prompt once.
hooman exec "Summarize the current repository"Use a specific session id:
hooman exec "What changed?" --session my-sessionSkip interactive tool approval (allows every tool call; use only when you trust the prompt and environment):
hooman exec "Summarize this repo" --yoloStart in ask mode (narrower tool surface, no plan lifecycle tools; see Session mode):
hooman exec "Map the architecture" --mode askStart an interactive stateful chat session.
```bash
hooman chat
```

Optional initial prompt:

```bash
hooman chat "Help me prioritize the next task"
```

Resume or pin a session id:

```bash
hooman chat --session my-session
```

Skip the in-chat tool approval UI (same semantics as `exec --yolo`):

```bash
hooman chat --yolo
```

Start in ask mode:

```bash
hooman chat --mode ask
```

`exec`, `chat`, and `daemon` accept `-m` / `--mode` with:
- `default` (the default): normal tool surface and approvals.
- `ask`: read-oriented, narrower surface (similar to interactive plan mode) but without `enter_plan_mode`/`exit_plan_mode`.

In chat, `/mode` can also switch to `plan` (includes plan tools and a plan document workflow). ACP sessions can set `hooman.sessionMode` to `default`, `plan`, or `ask`.
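For example, switching an active chat session into plan mode (a sketch; `/mode` is entered at the chat prompt):

```
/mode plan
```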
Run a long-lived daemon that always subscribes to MCP servers advertising the `hooman/channel` capability and feeds each received notification into the agent as a queued prompt.

```bash
hooman daemon
```

Resume or pin a session id:

```bash
hooman daemon --session my-daemon
```

Skip the remote channel permission relay and allow every tool call from daemon turns (same risk profile as `exec`/`chat` with `--yolo`):

```bash
hooman daemon --yolo
```

An optional `--mode ask` matches `exec`/`chat` (narrow surface without plan lifecycle tools).

Log raw notification payloads:

```bash
hooman daemon --debug
```

Runtime tool and prompt switches are controlled from `config.json`:
- `search.enabled`
- `search.provider` (`brave`, `exa`, `firecrawl`, `serper`, or `tavily`)
- `search.brave.apiKey`
- `search.exa.apiKey`
- `search.firecrawl.apiKey`
- `search.serper.apiKey`
- `search.tavily.apiKey`
- `prompts.behaviour`
- `prompts.communication`
- `prompts.execution`
- `prompts.guardrails`
- `tools.todo.enabled`
- `tools.fetch.enabled`
- `tools.filesystem.enabled`
- `tools.shell.enabled`
- `tools.sleep.enabled`
- `tools.memory.enabled`
- The memory and wiki embedding model URI is fixed in code as `DEFAULT_EMBED_MODEL` in `src/core/config.ts` (not configurable in `config.json`).
- Local embed GPU selection uses `HOOMAN_LLAMA_GPU`: unset defaults to `auto` (CI forces CPU off); use `false`, `off`, `none`, `disable`, `disabled`, or `0` for CPU-only, or `metal`, `vulkan`, or `cuda` for an explicit backend. Optional `HOOMAN_EMBED_CONTEXT_SIZE` caps embedding context length (tokens); see the example after this list.
- `tools.wiki.enabled`
- `tools.agents.enabled` (enables the built-in `run_agents` tool)
- `tools.agents.concurrency` (defaults to `3` when omitted on load; a freshly generated default `config.json` uses `2`)
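For example, to pin local embeddings to CPU and cap the embedding context (the context value here is illustrative):

```bash
# CPU-only embeddings with a 2048-token embedding context
HOOMAN_LLAMA_GPU=off HOOMAN_EMBED_CONTEXT_SIZE=2048 hooman chat
```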
Long-term memory uses SQLite + sqlite-vec at `$HOOMAN_HOME/memory.sqlite` and local GGUF embeddings via `node-llama-cpp` (model cache under `$HOOMAN_HOME/.models`). The same sqlite-vec native extension requirements apply (see the sqlite-vec package and your platform notes).
With `tools.wiki.enabled`, the agent gets `wiki_search` only: semantic retrieval over the indexed knowledge base. Data lives under `$HOOMAN_HOME/wiki/`, with chunks and vectors in `$HOOMAN_HOME/wiki/content.sqlite`. The first embed may download the configured GGUF model into `$HOOMAN_HOME/.models`. Documents are ingested as PDF (via OpenDataLoader PDF, which needs Java 11+ on PATH) or DOCX (via mammoth).
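Both stores ship disabled in a freshly generated config; a minimal `config.json` fragment that switches them on (only the relevant keys shown, matching the full schema below):

```json
{
  "tools": {
    "memory": { "enabled": true },
    "wiki": { "enabled": true }
  }
}
```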
Open the Ink configuration workflow.
```bash
hooman configure
```

The configure UI currently lets you:

- edit app configuration values
- choose a search provider and set its API key
- toggle bundled harness prompts (`behaviour`, `communication`, `execution`, `guardrails`)
- edit `instructions.md` in your `$VISUAL`/`$EDITOR` (cross-platform fallback included)
- add, edit, and delete MCP servers with confirmation
- search, install, refresh, and remove skills
Run Hooman as an Agent Client Protocol (ACP) agent over stdio.
```bash
hooman acp
```

ACP notes:

- ACP sessions are stored under the active Hooman data directory in `acp-sessions/`
- ACP loads MCP servers passed on `session/new` and `session/load`, in addition to Hooman's local `mcp.json`
- ACP `session/new` and `session/load` support `_meta.userId` and `_meta.systemPrompt` (see the sketch after this list)
- when `_meta.systemPrompt` is provided, it is appended to the agent system prompt with a section break
- session configuration includes `hooman.sessionMode` (`default`, `plan`, or `ask`); see Session mode
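A hedged sketch of a `session/new` request carrying those `_meta` fields (the envelope and the `cwd`/`mcpServers` params follow the ACP spec; all values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "session/new",
  "params": {
    "cwd": "/home/me/project",
    "mcpServers": [],
    "_meta": {
      "userId": "user-123",
      "systemPrompt": "Prefer concise answers."
    }
  }
}
```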
Hooman stores its data in:

```
~/.hooman/
```
Important files and folders:
- `config.json` - app name, LLM provider/model, tool flags, memory/wiki settings, compaction
- `memory.sqlite` - long-term memory vectors + rows (created when memory tools are used)
- `.models/` - downloaded GGUF embedding weights (memory + wiki embedders)
- `wiki/content.sqlite` - wiki document store and semantic chunk index (when wiki is used)
- `instructions.md` - system instructions used to build the agent prompt
- `mcp.json` - MCP server definitions
- `skills/` - installed skills
- `sessions/` - persisted session data
- `acp-sessions/` - persisted ACP session metadata and message snapshots
The on-disk shape uses a non-empty `llms` array: each item has `name`, `options` (`provider`, `model`, `params`), and `default`. The bundled `hooman-config` skill documents the full schema.
```json
{
"name": "Hooman",
"llms": [
{
"name": "Default",
"options": {
"provider": "ollama",
"model": "gemma4:e4b",
"params": {}
},
"default": true
}
],
"search": {
"enabled": false,
"provider": "brave",
"brave": {},
"exa": {},
"firecrawl": {},
"serper": {},
"tavily": {}
},
"prompts": {
"behaviour": true,
"communication": true,
"execution": true,
"guardrails": true
},
"tools": {
"todo": {
"enabled": true
},
"fetch": {
"enabled": true
},
"filesystem": {
"enabled": true
},
"shell": {
"enabled": true
},
"sleep": {
"enabled": true
},
"memory": {
"enabled": false
},
"wiki": {
"enabled": false
},
"agents": {
"enabled": true,
"concurrency": 2
}
},
"compaction": {
"ratio": 0.75,
"keep": 5
}
}
```

Tool approvals are session-scoped and are not persisted in `config.json`.
Supported `llms[].options.provider` values registered in this release (see `src/core/models/index.ts`):

- `anthropic`
- `bedrock`
- `google`
- `groq`
- `moonshot`
- `ollama`
- `openai`
- `xai`

The `LlmProvider` enum in `src/core/config.ts` may list additional strings for forwards compatibility; unknown providers are not loaded at runtime.
Supported `search.provider` values:

- `brave`
- `exa`
- `firecrawl`
- `serper`
- `tavily`
Good default for local usage. Example:
```json
{
"provider": "ollama",
"model": "gemma4:e4b",
"params": {}
}
```

Uses Strands `OpenAIModel` (Chat Completions). `apiKey` is optional if `OPENAI_API_KEY` is set. Use `clientConfig` for a custom base URL or other OpenAI client options (OpenAI-compatible proxies and gateways).
Example:
```json
{
"provider": "openai",
"model": "gpt-5",
"params": {
"apiKey": "..."
}
}
```

OpenAI-compatible gateways that put token usage on the last streamed chunk together with `choices` are handled via a small stream shim, so usage still surfaces in the UI.
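To point at such a gateway, a hedged sketch (`baseURL` is the option name used by the official OpenAI Node client; the URL is a placeholder):

```json
{
  "provider": "openai",
  "model": "gpt-5",
  "params": {
    "apiKey": "...",
    "clientConfig": {
      "baseURL": "https://gateway.example.com/v1"
    }
  }
}
```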
Uses Strands `AnthropicModel` (Anthropic Messages API). Params: `apiKey` or `authToken`, optional `baseURL` and `headers` (merged into `clientConfig`), optional `clientConfig`, and model fields such as `temperature` and `maxTokens`. A prebuilt client is not configurable from JSON.

```json
{
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"params": {
"apiKey": "...",
"temperature": 0.7
}
}
```

Uses Strands `GoogleModel` on top of `@google/genai`. Top-level options like `apiKey`, `client`, `clientConfig`, and `builtInTools` are supported; other values go into Google generation params.

```json
{
"provider": "google",
"model": "gemini-2.5-flash",
"params": {
"apiKey": "...",
"temperature": 0.7,
"maxOutputTokens": 2048,
"topP": 0.9,
"topK": 40
}
}
```

Bedrock supports `region`, `clientConfig`, and optional `apiKey`, with all other values forwarded as Bedrock model options.

```json
{
"provider": "bedrock",
"model": "anthropic.claude-sonnet-4-20250514-v1:0",
"params": {
"region": "us-east-1",
"clientConfig": {
"profile": "dev",
"maxAttempts": 3,
"credentials": {
"accessKeyId": "AKIA...",
"secretAccessKey": "...",
"sessionToken": "..."
}
},
"temperature": 0.7,
"maxTokens": 1024
}
}
```

You can also rely on the AWS default credential chain (recommended) by setting environment variables such as `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and optionally `AWS_SESSION_TOKEN`.
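For example (placeholder values):

```bash
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...   # optional, for temporary credentials
```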
Uses the Vercel AI SDK Groq provider (`@ai-sdk/groq`) on top of Strands `VercelModel`. Provider-specific settings `apiKey`, `baseURL`, and `headers` are picked up; other values are forwarded into the model config (`temperature`, `maxTokens`, etc.). Defaults to `GROQ_API_KEY` from the environment when no `apiKey` is supplied.

```json
{
"provider": "groq",
"model": "gemma2-9b-it",
"params": {
"apiKey": "...",
"temperature": 0.7
}
}
```

Uses the Vercel AI SDK Moonshot provider (`@ai-sdk/moonshotai`) on top of Strands `VercelModel`. Provider-specific settings `apiKey`, `baseURL`, `headers`, and `fetch` are picked up; other values are forwarded into the model config (`temperature`, `maxTokens`, `providerOptions`, etc.). Defaults to `MOONSHOT_API_KEY` from the environment when no `apiKey` is supplied. Moonshot reasoning models such as `kimi-k2-thinking` can be configured through `params.providerOptions.moonshotai`.

```json
{
"provider": "moonshot",
"model": "kimi-k2.5",
"params": {
"apiKey": "...",
"temperature": 0.7
}
}
```

Uses the Vercel AI SDK xAI provider (`@ai-sdk/xai`) on top of Strands `VercelModel`. Provider-specific settings `apiKey`, `baseURL`, and `headers` are picked up; other values are forwarded into the model config (`temperature`, `maxTokens`, etc.). Defaults to `XAI_API_KEY` from the environment when no `apiKey` is supplied.

```json
{
"provider": "xai",
"model": "grok-4.20-non-reasoning",
"params": {
"apiKey": "...",
"temperature": 0.7
}
}
```

`mcp.json` is stored as:

```json
{
"mcpServers": {}
}
```

A `stdio` server:

```json
{
"mcpServers": {
"filesystem": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
"env": {
"EXAMPLE": "1"
},
"cwd": "/tmp"
}
}
}
```

A `streamable-http` server:

```json
{
"mcpServers": {
"remote": {
"type": "streamable-http",
"url": "https://example.com/mcp",
"headers": {
"Authorization": "Bearer token"
}
}
}
}
```

An `sse` (legacy) server:

```json
{
"mcpServers": {
"legacy": {
"type": "sse",
"url": "https://example.com/sse",
"headers": {
"Authorization": "Bearer token"
}
}
}
}
```

- MCP server `instructions` from the protocol `initialize` response are appended to Hooman's system prompt, after local `instructions.md` and session-specific prompt overrides.
- Hooman reads these instructions automatically from connected MCP servers when building the agent.
- `hooman daemon` subscribes to MCP servers that advertise the experimental `hooman/channel` capability (always on; there is no opt-out flag).
- Hooman also reads the `hooman/user`, `hooman/session`, and `hooman/thread` capability paths so daemon turns preserve origin metadata from the source channel.
- When a matching notification is received, Hooman uses `params.content` as the prompt if it is a string; otherwise it JSON-stringifies the notification params and sends that to the agent (see the sketch after this list).
- Daemon mode processes notifications sequentially and reuses the same agent session over time.
- Tool calls from daemon turns are no longer blanket auto-approved: if the originating MCP server supports `hooman/channel/permission`, Hooman relays a remote approval request back to that source; otherwise the tool call is denied.
- `exec`, `chat`, and `daemon` accept `--yolo` to bypass those approval paths and allow all tools without prompting or relay.
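A hedged sketch of a channel notification as the daemon might receive it (the method name is purely illustrative; only the `params.content` handling is documented above):

```json
{
  "jsonrpc": "2.0",
  "method": "hooman/channel/message",
  "params": {
    "content": "New ticket ABC-123 was assigned to you."
  }
}
```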
Skills are installed under:

```
~/.hooman/skills
```

Skills are discovered by scanning direct child directories for `SKILL.md`, as in the layout sketch below.
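An illustrative layout (the skill name is hypothetical):

```
~/.hooman/skills/
  my-skill/
    SKILL.md
```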
The configure workflow can:
- search the public skills catalog
- install a skill from a source string, repo, URL, or local path
- refresh installed skills
- remove installed skills with confirmation
Install dependencies:
```bash
npm install
```

Run the CLI:

```bash
npm run dev -- --help
```

Run typecheck:

```bash
npm run typecheck
```

MIT. See LICENSE.
