Hooman

Hooman is a hackable AI agent toolkit for local workflows. It is built with TypeScript, the Strands Agents SDK, and Ink.


Hooman screenshot

It gives you a practical toolkit to build and run agent workflows:

  • a one-shot exec command for single prompts
  • a stateful chat interface for iterative sessions
  • a daemon command for channel-driven MCP automation
  • an Ink-powered configure workflow for app config, prompts, MCP servers, and installed skills
  • an acp command for running Hooman as an Agent Client Protocol (ACP) agent over stdio

Related

Looking for a focused web UI for chat and agent configuration, with a lighter surface on top of the same stack? See Zero.

Features

  • Multiple LLM providers: anthropic, bedrock, google, groq, moonshot, ollama, openai, xai
  • Local configuration under ~/.hooman
  • Optional web search tool with provider selection (brave, exa, firecrawl, serper, or tavily)
  • MCP server support via stdio, streamable-http, and sse
  • MCP server instructions support: server-provided instructions are appended to the agent system prompt
  • MCP channel notifications: hooman daemon subscribes to servers that advertise hooman/channel
  • Skill discovery from local ~/.hooman/skills folders
  • Bundled prompt harness toggles (behaviour, communication, execution, guardrails); coding guidance ships as the built-in hooman-coding skill
  • Built-in research sub-agent runner (research) with configurable concurrency
  • Toolkit-oriented architecture with configurable tools, prompts, memory, and transports
  • Interactive terminal UI for chat and configuration

Requirements

  • Node.js >= 24
  • npm for package installs and JavaScript tooling
  • Provider credentials or local model runtime depending on the LLM you choose

Usage

Fastest way to get started without cloning the repo:

npx hoomanjs configure
npx hoomanjs chat

# or install globally
npm i -g hoomanjs

Or with Bun:

bunx hoomanjs configure
bunx hoomanjs chat

Recommended first run:

  1. Run hooman configure to choose your LLM provider and model.
  2. Start chatting with hooman chat.
  3. Use hooman exec "your prompt" for one-off tasks.

Must have

For the best experience, set up both:

  1. MCP servers for on-demand tools in chat / exec (task APIs, messaging, schedulers, etc.).
  2. MCP channels for event-driven automation with hooman daemon (notifications become agent prompts).

Suggested MCP servers from this ecosystem:

  • cronmcp - lets Hooman schedule recurring prompts and automations, so routine checks and follow-ups run on time.
  • jiraxmcp - gives Hooman direct Jira Cloud access to search issues, update tickets, and help drive sprint workflows.
  • slackxmcp - connects Hooman to Slack so it can read channel context, draft updates, and post actions where your team already works.
  • tgfmcp - enables Telegram bot workflows, making it easy to route notifications and respond from agent-driven chats.
  • wappmcp - brings WhatsApp Web messaging into Hooman for customer or team communication automations.

For production deployments, still review permissions and use least-privilege credentials/tokens for each integration.

Install

npm install

Run locally:

npm run dev -- --help

Or build and run the compiled CLI:

npm run build
node dist/cli.js --help

Link the CLI locally:

npm link
hooman --help

Commands

hooman exec

Run a single prompt once.

hooman exec "Summarize the current repository"

Use a specific session id:

hooman exec "What changed?" --session my-session

Skip interactive tool approval (allows every tool call; use only when you trust the prompt and environment):

hooman exec "Summarize this repo" --yolo

Start in ask mode (narrower tool surface, no plan lifecycle tools; see Session mode):

hooman exec "Map the architecture" --mode ask

hooman chat

Start an interactive stateful chat session.

hooman chat

Optional initial prompt:

hooman chat "Help me prioritize the next task"

Resume or pin a session id:

hooman chat --session my-session

Skip the in-chat tool approval UI (same semantics as exec --yolo):

hooman chat --yolo

Start in ask mode:

hooman chat --mode ask

Session mode

exec, chat, and daemon accept -m / --mode with:

  • default (default): normal tool surface and approvals.
  • ask: read-oriented, narrower surface (similar to interactive plan mode) but without enter_plan_mode / exit_plan_mode.

In chat, /mode can also switch to plan (includes plan tools and a plan document workflow). ACP sessions can set hooman.sessionMode to default, plan, or ask.
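
For example, switching an active chat into plan mode (assuming the target mode is passed as the argument to the slash command):

/mode plan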

hooman daemon

Run a long-lived daemon that always subscribes to MCP servers advertising the hooman/channel capability and feeds each received notification into the agent as a queued prompt.

hooman daemon

Resume or pin a session id:

hooman daemon --session my-daemon

Skip remote channel permission relay and allow every tool call from daemon turns (same risk profile as exec / chat with --yolo):

hooman daemon --yolo

Optional --mode ask matches exec / chat (narrow surface without plan lifecycle tools).

Log raw notification payloads:

hooman daemon --debug

Feature Flags

Runtime tool and prompt switches are controlled from config.json:

  • search.enabled
  • search.provider (brave, exa, firecrawl, serper, or tavily)
  • search.brave.apiKey
  • search.exa.apiKey
  • search.firecrawl.apiKey
  • search.serper.apiKey
  • search.tavily.apiKey
  • prompts.behaviour
  • prompts.communication
  • prompts.execution
  • prompts.guardrails
  • tools.todo.enabled
  • tools.fetch.enabled
  • tools.filesystem.enabled
  • tools.shell.enabled
  • tools.sleep.enabled
  • tools.memory.enabled
  • Memory and wiki embedding model URI is fixed in code as DEFAULT_EMBED_MODEL in src/core/config.ts (not configurable in config.json).
  • Local embed GPU selection uses HOOMAN_LLAMA_GPU: unset defaults to auto (CI forces CPU off); use false, off, none, disable, disabled, or 0 for CPU-only; or metal, vulkan, or cuda for an explicit backend. Optional HOOMAN_EMBED_CONTEXT_SIZE caps embedding context length (tokens). See the shell example after this list.
  • tools.wiki.enabled
  • tools.agents.enabled (enables built-in run_agents tool)
  • tools.agents.concurrency (defaults to 3 when omitted on load; a freshly generated default config.json uses 2)
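
For example, forcing CPU-only embeddings with a capped context from the shell (a sketch using only the variables listed above; 2048 is an arbitrary value):

HOOMAN_LLAMA_GPU=off HOOMAN_EMBED_CONTEXT_SIZE=2048 hooman chat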

Long-term memory uses SQLite + sqlite-vec at $HOOMAN_HOME/memory.sqlite and local GGUF embeddings via node-llama-cpp (model cache under $HOOMAN_HOME/.models). The usual sqlite-vec native extension requirements apply (see the sqlite-vec package documentation and your platform notes).

With tools.wiki.enabled, the agent gets wiki_search only: semantic retrieval over the indexed knowledge base. Data lives under $HOOMAN_HOME/wiki/ with chunks and vectors in $HOOMAN_HOME/wiki/content.sqlite. The first embed may download the configured GGUF model into $HOOMAN_HOME/.models. Documents are ingested as PDF (via OpenDataLoader PDF, which needs Java 11+ on PATH) or DOCX (via mammoth).

hooman configure

Open the Ink configuration workflow.

hooman configure

The configure UI currently lets you:

  • edit app configuration values
  • choose search provider and set its API key
  • toggle bundled harness prompts (behaviour, communication, execution, guardrails)
  • edit instructions.md in your $VISUAL / $EDITOR (cross-platform fallback included)
  • add, edit, and delete MCP servers with confirmation
  • search, install, refresh, and remove skills

hooman acp

Run Hooman as an Agent Client Protocol (ACP) agent over stdio.

hooman acp

ACP notes:

  • ACP sessions are stored under the active Hooman data directory in acp-sessions/
  • ACP loads MCP servers passed on session/new and session/load, in addition to Hooman's local mcp.json
  • ACP session/new and session/load support _meta.userId and _meta.systemPrompt
  • when _meta.systemPrompt is provided, it is appended to the agent system prompt with a section break (see the sketch after this list)
  • session configuration includes hooman.sessionMode (default, plan, or ask); see Session mode
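
A hedged sketch of a session/new request carrying the _meta fields above. The JSON-RPC framing and the cwd / mcpServers fields follow the ACP specification; the _meta values are illustrative:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "session/new",
  "params": {
    "cwd": "/home/user/project",
    "mcpServers": [],
    "_meta": {
      "userId": "user-123",
      "systemPrompt": "Prefer short, actionable answers."
    }
  }
}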

Configuration Layout

Hooman stores its data in:

~/.hooman/

Important files and folders:

  • config.json - app name, LLM provider/model, tool flags, memory/wiki settings, compaction
  • memory.sqlite - long-term memory vectors + rows (created when memory tools are used)
  • .models/ - downloaded GGUF embedding weights (memory + wiki embedders)
  • wiki/content.sqlite - wiki document store and semantic chunk index (when wiki is used)
  • instructions.md - system instructions used to build the agent prompt
  • mcp.json - MCP server definitions
  • skills/ - installed skills
  • sessions/ - persisted session data
  • acp-sessions/ - persisted ACP session metadata and message snapshots

Example config.json

The on-disk shape uses a non-empty llms array: each item has name, options (provider, model, params), and default. The bundled hooman-config skill documents the full schema.

{
  "name": "Hooman",
  "llms": [
    {
      "name": "Default",
      "options": {
        "provider": "ollama",
        "model": "gemma4:e4b",
        "params": {}
      },
      "default": true
    }
  ],
  "search": {
    "enabled": false,
    "provider": "brave",
    "brave": {},
    "exa": {},
    "firecrawl": {},
    "serper": {},
    "tavily": {}
  },
  "prompts": {
    "behaviour": true,
    "communication": true,
    "execution": true,
    "guardrails": true
  },
  "tools": {
    "todo": {
      "enabled": true
    },
    "fetch": {
      "enabled": true
    },
    "filesystem": {
      "enabled": true
    },
    "shell": {
      "enabled": true
    },
    "sleep": {
      "enabled": true
    },
    "memory": {
      "enabled": false
    },
    "wiki": {
      "enabled": false
    },
    "agents": {
      "enabled": true,
      "concurrency": 2
    }
  },
  "compaction": {
    "ratio": 0.75,
    "keep": 5
  }
}

Tool approvals are session-scoped and are not persisted in config.json.

Supported llms[].options.provider values registered in this release (see src/core/models/index.ts):

  • anthropic
  • bedrock
  • google
  • groq
  • moonshot
  • ollama
  • openai
  • xai

The LlmProvider enum in src/core/config.ts may list additional strings for forwards compatibility; unknown providers are not loaded at runtime.

Supported search.provider values:

  • brave
  • exa
  • firecrawl
  • serper
  • tavily
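
For example, enabling web search with Tavily, composed from the search.* keys listed under Feature Flags (the API key is a placeholder):

{
  "search": {
    "enabled": true,
    "provider": "tavily",
    "tavily": {
      "apiKey": "..."
    }
  }
}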

Provider Notes

Ollama

Good default for local usage. Example:

{
  "provider": "ollama",
  "model": "gemma4:e4b",
  "params": {}
}

OpenAI

Uses Strands OpenAIModel (Chat Completions). apiKey is optional if OPENAI_API_KEY is set. Use clientConfig for a custom base URL or other OpenAI client options (OpenAI-compatible proxies and gateways).

Example:

{
  "provider": "openai",
  "model": "gpt-5",
  "params": {
    "apiKey": "..."
  }
}
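
For example, routing through an OpenAI-compatible gateway via clientConfig (a sketch; baseURL is the standard OpenAI client option and the URL is a placeholder):

{
  "provider": "openai",
  "model": "gpt-5",
  "params": {
    "apiKey": "...",
    "clientConfig": {
      "baseURL": "https://gateway.example.com/v1"
    }
  }
}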

OpenAI-compatible gateways that put token usage on the last streamed chunk together with choices are handled via a small stream shim so usage still surfaces in the UI.

Anthropic

Uses Strands AnthropicModel (Anthropic Messages API). Supports apiKey or authToken, optional baseURL and headers (merged into clientConfig), an optional clientConfig, and model fields such as temperature and maxTokens. A prebuilt client is not configurable from JSON.

{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}
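
For example, with a custom endpoint and extra headers, using the baseURL and headers fields described above (values are placeholders):

{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "params": {
    "apiKey": "...",
    "baseURL": "https://anthropic-proxy.example.com",
    "headers": {
      "X-Example-Header": "1"
    }
  }
}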

Google

Uses Strands GoogleModel on top of @google/genai. Top-level options like apiKey, client, clientConfig, and builtInTools are supported; other values go into Google generation params.

{
  "provider": "google",
  "model": "gemini-2.5-flash",
  "params": {
    "apiKey": "...",
    "temperature": 0.7,
    "maxOutputTokens": 2048,
    "topP": 0.9,
    "topK": 40
  }
}

Bedrock

Supports region, clientConfig, and optional apiKey, with all other values forwarded as Bedrock model options.

{
  "provider": "bedrock",
  "model": "anthropic.claude-sonnet-4-20250514-v1:0",
  "params": {
    "region": "us-east-1",
    "clientConfig": {
      "profile": "dev",
      "maxAttempts": 3,
      "credentials": {
        "accessKeyId": "AKIA...",
        "secretAccessKey": "...",
        "sessionToken": "..."
      }
    },
    "temperature": 0.7,
    "maxTokens": 1024
  }
}

You can also rely on the AWS default credential chain (recommended) by setting environment variables such as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN.
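
For example (placeholder values):

export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_SESSION_TOKEN="..." # optional
hooman chat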

Groq

Uses the Vercel AI SDK Groq provider (@ai-sdk/groq) on top of Strands VercelModel. Provider-specific settings apiKey, baseURL, and headers are picked up; other values are forwarded into the model config (temperature, maxTokens, etc.). Defaults to GROQ_API_KEY from the environment when no apiKey is supplied.

{
  "provider": "groq",
  "model": "gemma2-9b-it",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}

Moonshot

Uses the Vercel AI SDK Moonshot provider (@ai-sdk/moonshotai) on top of Strands VercelModel. Provider-specific settings apiKey, baseURL, headers, and fetch are picked up; other values are forwarded into the model config (temperature, maxTokens, providerOptions, etc.). Defaults to MOONSHOT_API_KEY from the environment when no apiKey is supplied. Moonshot reasoning models such as kimi-k2-thinking can be configured through params.providerOptions.moonshotai.

{
  "provider": "moonshot",
  "model": "kimi-k2.5",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}
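
A hedged sketch for a reasoning model. The exact keys accepted under providerOptions.moonshotai come from @ai-sdk/moonshotai and are left empty here rather than guessed:

{
  "provider": "moonshot",
  "model": "kimi-k2-thinking",
  "params": {
    "apiKey": "...",
    "providerOptions": {
      "moonshotai": {}
    }
  }
}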

xAI

Uses the Vercel AI SDK xAI provider (@ai-sdk/xai) on top of Strands VercelModel. Provider-specific settings apiKey, baseURL, and headers are picked up; other values are forwarded into the model config (temperature, maxTokens, etc.). Defaults to XAI_API_KEY from the environment when no apiKey is supplied.

{
  "provider": "xai",
  "model": "grok-4.20-non-reasoning",
  "params": {
    "apiKey": "...",
    "temperature": 0.7
  }
}

MCP Configuration

mcp.json is stored as:

{
  "mcpServers": {}
}

Example stdio server

{
  "mcpServers": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": {
        "EXAMPLE": "1"
      },
      "cwd": "/tmp"
    }
  }
}

Example streamable HTTP server

{
  "mcpServers": {
    "remote": {
      "type": "streamable-http",
      "url": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer token"
      }
    }
  }
}

Example SSE server

{
  "mcpServers": {
    "legacy": {
      "type": "sse",
      "url": "https://example.com/sse",
      "headers": {
        "Authorization": "Bearer token"
      }
    }
  }
}

MCP Notes

  • MCP server instructions from the protocol initialize response are appended to Hooman's system prompt, after local instructions.md and session-specific prompt overrides.
  • Hooman reads these instructions automatically from connected MCP servers when building the agent.
  • hooman daemon subscribes to MCP servers that advertise the experimental hooman/channel capability (always on; there is no opt-out flag).
  • Hooman also reads hooman/user, hooman/session, and hooman/thread capability paths so daemon turns preserve origin metadata from the source channel.
  • When a matching notification is received, Hooman uses params.content as the prompt if it is a string; otherwise it JSON-stringifies the notification params and sends that to the agent (see the sketch after this list).
  • Daemon mode processes notifications sequentially and reuses the same agent session over time.
  • Tool calls from daemon turns are no longer blanket auto-approved: if the originating MCP server supports hooman/channel/permission, Hooman relays a remote approval request back to that source; otherwise the tool call is denied.
  • exec, chat, and daemon accept --yolo to bypass those approval paths and allow all tools without prompting or relay.
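
A hedged sketch of channel notification params that would become a daemon prompt. The JSON-RPC method name is not documented here, so only the params shape is shown, and the content string is illustrative:

{
  "content": "New ticket PROJ-123 was assigned; please triage it."
}

If content were not a string, the entire params object would be JSON-stringified and sent as the prompt instead.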

Skills

Skills are installed under:

~/.hooman/skills

Skills are discovered by scanning direct child directories for SKILL.md. The configure workflow can:

  • search the public skills catalog
  • install a skill from a source string, repo, URL, or local path
  • refresh installed skills
  • remove installed skills with confirmation

Development

Install dependencies:

npm install

Run the CLI:

npm run dev -- --help

Run typecheck:

npm run typecheck

License

MIT. See LICENSE.