# haiku.skills

Skill-powered AI agents implementing the Agent Skills specification with pydantic-ai.
SkillToolset is a pydantic-ai FunctionToolset that you attach to your own agent. It exposes a single execute_skill tool. When the agent calls it, a focused sub-agent spins up with only that skill's instructions and tools — then returns the result. The main agent never sees the skill's internal tools, so its tool space stays clean no matter how many skills you load.
This sub-agent architecture means each skill runs in isolation with its own system prompt, tools, and token budget. Skills don't interfere with each other, tool descriptions don't compete for attention, and failures in one skill can't confuse another.
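The delegation pattern can be sketched generically. This is a toy illustration of the idea, not haiku.skills' actual implementation: the `Skill` dataclass, the dispatch logic, and the example skill are all invented for this sketch.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    """A skill bundles its own instructions with private tools (hypothetical shape)."""
    instructions: str
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

def execute_skill(skills: dict[str, Skill], name: str, task: str) -> str:
    """The single tool the main agent sees; stands in for spinning up a sub-agent."""
    skill = skills[name]
    # Only this skill's instructions and tools are in scope here;
    # the caller never sees skill.tools directly.
    tool = next(iter(skill.tools.values()))
    return tool(task)

skills = {
    "shout": Skill(
        instructions="Uppercase the input.",
        tools={"upper": lambda s: s.upper()},
    )
}
print(execute_skill(skills, "shout", "hello"))  # → HELLO
```

However many skills are registered, the main agent's tool surface stays a single `execute_skill` entry.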
## Features

- Sub-agent execution — Each skill runs in its own agent with dedicated instructions and tools
- Skill discovery — Scan filesystem paths for SKILL.md directories or load from Python entrypoints
- In-process tools — Attach pydantic-ai `Tool` functions or `AbstractToolset` instances to skills
- Per-skill state — Skills declare a Pydantic state model and namespace; state is passed to tools via `RunContext` and tracked on the toolset
- AG-UI protocol — State changes emit `StateDeltaEvent` (JSON Patch), compatible with the AG-UI protocol
- Script tools — Python, JavaScript, TypeScript, and shell scripts in `scripts/`; Python scripts with a `main()` function are AST-parsed for typed tool schemas and executed via `uv run` with PEP 723 dependency support
- MCP integration — Wrap any MCP server (stdio, SSE, streamable HTTP) as a skill
- Signing and verification — Identity-based skill signing via sigstore
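To make the script-tools feature concrete, here is a sketch of what a Python script tool might look like. The directory layout, skill name, file name, and function body are illustrative assumptions, not taken from the docs; the PEP 723 `# /// script` metadata block and a typed `main()` are what the feature list above describes.

```python
# scripts/stats.py — hypothetical script tool inside a skill directory:
#
#   skills/
#   └── word-stats/
#       ├── SKILL.md      # frontmatter + instructions for the skill
#       └── scripts/
#           └── stats.py  # this file
#
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///

def main(text: str) -> str:
    """Report line and word counts; the typed signature is AST-parsed into a tool schema."""
    lines = text.splitlines()
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

if __name__ == "__main__":
    print(main("one two\nthree"))  # → 2 lines, 3 words
```

Because the dependency metadata rides along in the PEP 723 block, `uv run` can execute the script in an isolated environment without a separate install step.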
## Installation

```shell
uv add haiku.skills
```

## Quick start

```python
import asyncio
from pathlib import Path

from pydantic_ai import Agent

from haiku.skills import SkillToolset, build_system_prompt

toolset = SkillToolset(skill_paths=[Path("./skills")])

agent = Agent(
    "anthropic:claude-sonnet-4-5-20250929",
    instructions=build_system_prompt(toolset.skill_catalog),
    toolsets=[toolset],
)

async def main() -> None:
    result = await agent.run("Analyze this dataset.")
    print(result.output)

asyncio.run(main())
```

## Documentation

Full documentation at ggozad.github.io/haiku.skills.
## License

MIT