A capability is a reusable, composable unit of agent behavior. Instead of threading multiple arguments through your Agent constructor — instructions here, model settings there, a toolset somewhere else, a history processor on yet another parameter — you can bundle related behavior into a single capability and pass it via the [capabilities][pydantic_ai.agent.Agent.init] parameter.
Capabilities can provide any combination of:
- Tools — via toolsets or builtin tools
- Lifecycle hooks — intercept and modify model requests, tool calls, and the overall run
- Instructions — static or dynamic instruction additions
- Model settings — static or per-step model settings
This makes them the primary extension point for Pydantic AI. Whether you're building a memory system, a guardrail, a cost tracker, or an approval workflow, a capability is the right abstraction.
Pydantic AI ships with several capabilities that cover common needs:
| Capability | What it provides | Spec |
|---|---|---|
| [Thinking][pydantic_ai.capabilities.Thinking] | Enables model thinking/reasoning at configurable effort | Yes |
| [Hooks][pydantic_ai.capabilities.Hooks] | Decorator-based lifecycle hook registration | — |
| [WebSearch][pydantic_ai.capabilities.WebSearch] | Web search — builtin when supported, local fallback otherwise | Yes |
| [WebFetch][pydantic_ai.capabilities.WebFetch] | URL fetching — builtin when supported, custom local fallback | Yes |
| [ImageGeneration][pydantic_ai.capabilities.ImageGeneration] | Image generation — builtin when supported, custom local fallback | Yes |
| [MCP][pydantic_ai.capabilities.MCP] | MCP server — builtin when supported, direct connection otherwise | Yes |
| [PrepareTools][pydantic_ai.capabilities.PrepareTools] | Filters or modifies tool definitions per step | — |
| [PrefixTools][pydantic_ai.capabilities.PrefixTools] | Wraps a capability and prefixes its tool names | Yes |
| [BuiltinTool][pydantic_ai.capabilities.BuiltinTool] | Registers a builtin tool with the agent | Yes |
| [Toolset][pydantic_ai.capabilities.Toolset] | Wraps an [AbstractToolset][pydantic_ai.toolsets.AbstractToolset] | — |
| [HistoryProcessor][pydantic_ai.capabilities.HistoryProcessor] | Wraps a history processor | — |
The Spec column indicates whether the capability can be used in agent specs (YAML/JSON). Capabilities marked `—` take non-serializable arguments (callables, toolset objects) and can only be used in Python code.
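As a rough illustration of what this enables, a spec file can declare spec-compatible capabilities next to the model and instructions. The exact schema is defined by `AgentSpec`; the keys below are illustrative only, not a documented format:

```yaml
# Hypothetical spec file shape — consult the AgentSpec schema for the real keys
model: anthropic:claude-sonnet-4-6
instructions: You are a research assistant.
capabilities:
  - thinking:
      effort: high
  - web_search: {}
```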
```python
from pydantic_ai import Agent
from pydantic_ai.capabilities import Thinking, WebSearch

agent = Agent(
    'anthropic:claude-opus-4-6',
    instructions='You are a research assistant. Be thorough and cite sources.',
    capabilities=[
        Thinking(effort='high'),
        WebSearch(),
    ],
)
```

Instructions and model settings are configured directly via the `instructions` and `model_settings` parameters on `Agent` (or `AgentSpec`). Capabilities are for behavior that goes beyond simple configuration — tools, lifecycle hooks, and custom extensions. They compose well, especially when you want to reuse the same configuration across multiple agents or load it from a spec file.
The [Thinking][pydantic_ai.capabilities.Thinking] capability enables model thinking/reasoning at a configurable effort level. It's the simplest way to enable thinking across providers:
```python
from pydantic_ai import Agent
from pydantic_ai.capabilities import Thinking

agent = Agent('anthropic:claude-sonnet-4-6', capabilities=[Thinking(effort='high')])
result = agent.run_sync('What is the capital of France?')
print(result.output)
#> The capital of France is Paris.
```

See Thinking for provider-specific details and the unified thinking settings.
The [Hooks][pydantic_ai.capabilities.Hooks] capability provides decorator-based lifecycle hook registration — the easiest way to intercept model requests, tool calls, and other events without subclassing [AbstractCapability][pydantic_ai.capabilities.AbstractCapability]:
```python
from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities.hooks import Hooks
from pydantic_ai.models import ModelRequestContext

hooks = Hooks()

@hooks.on.before_model_request
async def log_request(ctx: RunContext[None], request_context: ModelRequestContext) -> ModelRequestContext:
    print(f'Sending {len(request_context.messages)} messages')
    return request_context

agent = Agent('openai:gpt-5.2', capabilities=[hooks])
```

See the dedicated Hooks page for the full API: decorator and constructor registration, timeouts, tool filtering, wrap hooks, per-event hooks, and more.
[WebSearch][pydantic_ai.capabilities.WebSearch], [WebFetch][pydantic_ai.capabilities.WebFetch], [ImageGeneration][pydantic_ai.capabilities.ImageGeneration], and [MCP][pydantic_ai.capabilities.MCP] provide model-agnostic access to common tool types. When the model supports the tool natively (as a builtin tool), it's used directly. When it doesn't, a local function tool handles it instead — so your agent works across providers without code changes.
Each accepts `builtin` and `local` keyword arguments to control which side is used:
```python
from pydantic_ai import Agent
from pydantic_ai.capabilities import MCP, WebFetch, WebSearch

agent = Agent(
    'openai:gpt-5.2',
    capabilities=[
        # Auto-detects DuckDuckGo as local fallback
        WebSearch(),
        # Builtin URL fetching; provide local= for fallback
        WebFetch(),
        # Auto-detects transport from URL
        MCP(url='https://mcp.example.com/api'),
    ],
)
```

To force builtin-only (errors on unsupported models instead of falling back to local):

```python
MCP(url='https://mcp.example.com/api', local=False)
```

To force local-only (never use the builtin, even when the model supports it):

```python
MCP(url='https://mcp.example.com/api', builtin=False)
```

Constraint fields like `allowed_domains` or `blocked_domains` require the builtin — the local fallback can't enforce them. When these are set and the model doesn't support the builtin, a [UserError][pydantic_ai.exceptions.UserError] is raised:

```python
# Only search example.com — requires builtin support
WebSearch(allowed_domains=['example.com'])
```

All of these capabilities are subclasses of [BuiltinOrLocalTool][pydantic_ai.capabilities.BuiltinOrLocalTool], which you can use directly or subclass to build your own provider-adaptive tools. For example, to pair [CodeExecutionTool][pydantic_ai.builtin_tools.CodeExecutionTool] with a local fallback:
```python
from pydantic_ai.builtin_tools import CodeExecutionTool
from pydantic_ai.capabilities import BuiltinOrLocalTool

cap = BuiltinOrLocalTool(builtin=CodeExecutionTool(), local=my_local_executor)
```

[PrepareTools][pydantic_ai.capabilities.PrepareTools] wraps a [ToolsPrepareFunc][pydantic_ai.tools.ToolsPrepareFunc] as a capability, for filtering or modifying tool definitions per step:
```python
from pydantic_ai import Agent
from pydantic_ai.capabilities import PrepareTools
from pydantic_ai.tools import RunContext, ToolDefinition

async def hide_dangerous(ctx: RunContext[None], tool_defs: list[ToolDefinition]) -> list[ToolDefinition]:
    return [td for td in tool_defs if not td.name.startswith('delete_')]

agent = Agent('openai:gpt-5.2', capabilities=[PrepareTools(hide_dangerous)])

@agent.tool_plain
def delete_file(path: str) -> str:
    """Delete a file."""
    return f'deleted {path}'

@agent.tool_plain
def read_file(path: str) -> str:
    """Read a file."""
    return f'contents of {path}'

result = agent.run_sync('hello')
# The model only sees `read_file`, not `delete_file`
```

For more complex tool preparation logic, see Tool preparation under lifecycle hooks.
[PrefixTools][pydantic_ai.capabilities.PrefixTools] wraps another capability and prefixes all of its tool names, useful for namespacing when composing multiple capabilities that might have conflicting tool names:
```python
from pydantic_ai import Agent
from pydantic_ai.capabilities import MCP, PrefixTools

agent = Agent(
    'openai:gpt-5.2',
    capabilities=[
        PrefixTools(MCP(url='https://api1.example.com'), prefix='api1'),
        PrefixTools(MCP(url='https://api2.example.com'), prefix='api2'),
    ],
)
```

Every [AbstractCapability][pydantic_ai.capabilities.AbstractCapability] has a convenience method [prefix_tools][pydantic_ai.capabilities.AbstractCapability.prefix_tools] that returns a [PrefixTools][pydantic_ai.capabilities.PrefixTools] wrapper:

```python
MCP(url='https://mcp.example.com/api').prefix_tools('mcp')
```

To build your own capability, subclass [AbstractCapability][pydantic_ai.capabilities.AbstractCapability] and override the methods you need. There are two categories: configuration methods that are called at agent construction (except [get_wrapper_toolset][pydantic_ai.capabilities.AbstractCapability.get_wrapper_toolset], which is called per run), and lifecycle hooks that fire during each run.
A capability that provides tools returns a toolset from [get_toolset][pydantic_ai.capabilities.AbstractCapability.get_toolset]. This can be a pre-built [AbstractToolset][pydantic_ai.toolsets.AbstractToolset] instance, or a callable that receives [RunContext][pydantic_ai.tools.RunContext] and returns one dynamically:
```python
from dataclasses import dataclass
from typing import Any

from pydantic_ai import Agent
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.toolsets import AgentToolset, FunctionToolset

math_toolset = FunctionToolset()

@math_toolset.tool_plain
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@math_toolset.tool_plain
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

@dataclass
class MathTools(AbstractCapability[Any]):
    """Provides basic math operations."""

    def get_toolset(self) -> AgentToolset[Any] | None:
        return math_toolset

agent = Agent('openai:gpt-5.2', capabilities=[MathTools()])
result = agent.run_sync('What is 2 + 3?')
print(result.output)
#> The answer is 5.0
```

For builtin tools, override [get_builtin_tools][pydantic_ai.capabilities.AbstractCapability.get_builtin_tools] to return a sequence of [AgentBuiltinTool][pydantic_ai.tools.AgentBuiltinTool] instances (which includes both [AbstractBuiltinTool][pydantic_ai.builtin_tools.AbstractBuiltinTool] objects and callables that receive [RunContext][pydantic_ai.tools.RunContext]).
[get_wrapper_toolset][pydantic_ai.capabilities.AbstractCapability.get_wrapper_toolset] lets a capability wrap the agent's entire assembled toolset with a WrapperToolset. This is more powerful than providing tools — it can intercept tool execution, add logging, or apply cross-cutting behavior.
The wrapper receives the combined non-output toolset (after any agent-level [prepare_tools][pydantic_ai.tools.ToolsPrepareFunc] wrapping). Output tools are added separately and are not affected.
```python
from dataclasses import dataclass
from typing import Any

from pydantic_ai import Agent
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.toolsets import AbstractToolset
from pydantic_ai.toolsets.wrapper import WrapperToolset

@dataclass
class LoggingToolset(WrapperToolset[Any]):
    """Logs all tool calls."""

    async def call_tool(
        self, tool_name: str, tool_args: dict[str, Any], *args: Any, **kwargs: Any
    ) -> Any:
        print(f'  Calling tool: {tool_name}')
        return await super().call_tool(tool_name, tool_args, *args, **kwargs)

@dataclass
class LogToolCalls(AbstractCapability[Any]):
    """Wraps the agent's toolset to log all tool calls."""

    def get_wrapper_toolset(self, toolset: AbstractToolset[Any]) -> AbstractToolset[Any]:
        return LoggingToolset(wrapped=toolset)

agent = Agent('openai:gpt-5.2', capabilities=[LogToolCalls()])

@agent.tool_plain
def greet(name: str) -> str:
    """Greet someone."""
    return f'Hello, {name}!'

result = agent.run_sync('hello')
# Tool calls are logged as they happen
```

!!! note
    `get_wrapper_toolset` wraps the non-output toolset once per run (during toolset assembly), intercepting tool execution. This is different from the `prepare_tools` hook, which operates on tool definitions per step and controls visibility rather than execution.
[get_instructions][pydantic_ai.capabilities.AbstractCapability.get_instructions] adds instructions to the agent. Since it's called once at agent construction, return a callable if you need dynamic values:
```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import AbstractCapability

@dataclass
class KnowsCurrentTime(AbstractCapability[Any]):
    """Tells the agent what time it is."""

    def get_instructions(self):
        def _get_time(ctx: RunContext[Any]) -> str:
            return f'The current date and time is {datetime.now().isoformat()}.'

        return _get_time

agent = Agent('openai:gpt-5.2', capabilities=[KnowsCurrentTime()])
result = agent.run_sync('What time is it?')
print(result.output)
#> The current time is 3:45 PM.
```

Instructions can also use template strings (`TemplateStr('Hello {{name}}')`) for Handlebars-style templates rendered against the agent's dependencies. In Python code, a callable with [RunContext][pydantic_ai.tools.RunContext] is generally preferred for IDE autocomplete.
[get_model_settings][pydantic_ai.capabilities.AbstractCapability.get_model_settings] returns model settings as a dict or a callable for per-step settings.
When model settings need to vary per step — for example, enabling thinking only on retry — return a callable:
```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.settings import ModelSettings

@dataclass
class ThinkingOnRetry(AbstractCapability[None]):
    """Enables thinking mode when the agent is retrying."""

    def get_model_settings(self):
        def resolve(ctx: RunContext[None]) -> ModelSettings:
            if ctx.run_step > 1:
                return ModelSettings(thinking='high')
            return ModelSettings()

        return resolve

agent = Agent('openai:gpt-5.2', capabilities=[ThinkingOnRetry()])
result = agent.run_sync('hello')
print(result.output)
#> Hello! How can I help you today?
```

The callable receives a [RunContext][pydantic_ai.tools.RunContext] where `ctx.model_settings` contains the merged result of all layers resolved before this capability (model defaults and agent-level settings).
| Method | Return type | Purpose |
|---|---|---|
| [get_toolset()][pydantic_ai.capabilities.AbstractCapability.get_toolset] | [AgentToolset][pydantic_ai.toolsets.AgentToolset] \| None | A toolset to register (or a callable for dynamic toolsets) |
| [get_builtin_tools()][pydantic_ai.capabilities.AbstractCapability.get_builtin_tools] | Sequence[[AgentBuiltinTool][pydantic_ai.tools.AgentBuiltinTool]] | Builtin tools to register (including callables) |
| [get_wrapper_toolset()][pydantic_ai.capabilities.AbstractCapability.get_wrapper_toolset] | [AbstractToolset][pydantic_ai.toolsets.AbstractToolset] \| None | Wrap the agent's assembled toolset |
| [get_instructions()][pydantic_ai.capabilities.AbstractCapability.get_instructions] | [AgentInstructions][pydantic_ai.agent.AgentInstructions] \| None | Instructions (static strings, template strings, or callables) |
| [get_model_settings()][pydantic_ai.capabilities.AbstractCapability.get_model_settings] | [AgentModelSettings][pydantic_ai.agent.AgentModelSettings] \| None | Model settings dict, or a callable for per-step settings |
Capabilities can hook into five lifecycle points, each with up to four variants:
- `before_*` — fires before the action, can modify inputs
- `after_*` — fires after the action succeeds (in reverse capability order), can modify outputs
- `wrap_*` — full middleware control: receives a `handler` callable and decides whether/how to call it
- `on_*_error` — fires when the action fails (after `wrap_*` has had its chance to recover), can observe, transform, or recover from errors
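The four variants compose like middleware around each action. Here is a rough plain-Python sketch of the dispatch order for a single hook object — the names (`dispatch`, `CountingHooks`) are illustrative, not Pydantic AI internals:

```python
import asyncio

class CountingHooks:
    """One capability's hooks for a single lifecycle point (illustrative only)."""

    def before(self, value):
        return value + 1              # before_*: modify inputs

    async def wrap(self, value, handler):
        return await handler(value)   # wrap_*: decides whether/how to call the handler

    def after(self, result):
        return result * 2             # after_*: modify outputs

    def on_error(self, error):
        raise error                   # on_*_error: raise to propagate, return to recover

async def dispatch(action, hooks, value):
    value = hooks.before(value)
    try:
        result = await hooks.wrap(value, action)
    except Exception as error:
        result = hooks.on_error(error)    # only reached on failure
    return hooks.after(result)            # runs on success or recovery

async def tenfold(v):
    return v * 10

print(asyncio.run(dispatch(tenfold, CountingHooks(), 2)))
#> 60
```

With multiple capabilities, the same shape nests: `before_*` hooks run in order, `wrap_*` hooks wrap each other, and `after_*` hooks unwind in reverse order.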
!!! tip
    For quick, application-level hooks without subclassing, use the Hooks capability instead.
| Hook | Signature | Purpose |
|---|---|---|
| [before_run][pydantic_ai.capabilities.AbstractCapability.before_run] | (ctx: [RunContext][pydantic_ai.tools.RunContext]) -> None | Observe-only notification that a run is starting |
| [after_run][pydantic_ai.capabilities.AbstractCapability.after_run] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, result: [AgentRunResult][pydantic_ai.run.AgentRunResult]) -> [AgentRunResult][pydantic_ai.run.AgentRunResult] | Modify the final result |
| [wrap_run][pydantic_ai.capabilities.AbstractCapability.wrap_run] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, handler: [WrapRunHandler][pydantic_ai.capabilities.WrapRunHandler]) -> [AgentRunResult][pydantic_ai.run.AgentRunResult] | Wrap the entire run |
| [on_run_error][pydantic_ai.capabilities.AbstractCapability.on_run_error] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, error: BaseException) -> [AgentRunResult][pydantic_ai.run.AgentRunResult] | Handle run errors (see error hooks) |
`wrap_run` supports error recovery: if `handler()` raises and `wrap_run` catches the exception and returns a result instead, the error is suppressed and the recovery result is used. This works with both [agent.run()][pydantic_ai.agent.AbstractAgent.run] and [agent.iter()][pydantic_ai.agent.Agent.iter].
| Hook | Signature | Purpose |
|---|---|---|
| [before_node_run][pydantic_ai.capabilities.AbstractCapability.before_node_run] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, node: [AgentNode][pydantic_ai.capabilities.AgentNode]) -> [AgentNode][pydantic_ai.capabilities.AgentNode] | Observe or replace the node before execution |
| [after_node_run][pydantic_ai.capabilities.AbstractCapability.after_node_run] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, node: [AgentNode][pydantic_ai.capabilities.AgentNode], result: [NodeResult][pydantic_ai.capabilities.NodeResult]) -> [NodeResult][pydantic_ai.capabilities.NodeResult] | Modify the result (next node or [End][pydantic_graph.nodes.End]) |
| [wrap_node_run][pydantic_ai.capabilities.AbstractCapability.wrap_node_run] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, node: [AgentNode][pydantic_ai.capabilities.AgentNode], handler: [WrapNodeRunHandler][pydantic_ai.capabilities.WrapNodeRunHandler]) -> [NodeResult][pydantic_ai.capabilities.NodeResult] | Wrap each graph node execution |
| [on_node_run_error][pydantic_ai.capabilities.AbstractCapability.on_node_run_error] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, node: [AgentNode][pydantic_ai.capabilities.AgentNode], error: BaseException) -> [NodeResult][pydantic_ai.capabilities.NodeResult] | Handle node errors (see error hooks) |
[wrap_node_run][pydantic_ai.capabilities.AbstractCapability.wrap_node_run] fires for every node in the agent graph ([UserPromptNode][pydantic_ai.agent.UserPromptNode], [ModelRequestNode][pydantic_ai.agent.ModelRequestNode], [CallToolsNode][pydantic_ai.agent.CallToolsNode]). Override this to observe node transitions, add per-step logging, or modify graph progression:
!!! note
    `wrap_node_run` hooks are called automatically by [agent.run()][pydantic_ai.agent.AbstractAgent.run], [agent.run_stream()][pydantic_ai.agent.AbstractAgent.run_stream], and [agent_run.next()][pydantic_ai.run.AgentRun.next]. However, they are not called when iterating with bare `async for node in agent_run:` over [agent.iter()][pydantic_ai.agent.Agent.iter], since that uses the graph run's internal iteration. Always use `agent_run.next(node)` to advance the run if you need `wrap_node_run` hooks to fire.
```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import (
    AbstractCapability,
    AgentNode,
    NodeResult,
    WrapNodeRunHandler,
)

@dataclass
class NodeLogger(AbstractCapability[Any]):
    """Logs each node that executes during a run."""

    nodes: list[str] = field(default_factory=list)

    async def wrap_node_run(
        self, ctx: RunContext[Any], *, node: AgentNode[Any], handler: WrapNodeRunHandler[Any]
    ) -> NodeResult[Any]:
        self.nodes.append(type(node).__name__)
        return await handler(node)

logger = NodeLogger()
agent = Agent('openai:gpt-5.2', capabilities=[logger])
agent.run_sync('hello')
print(logger.nodes)
#> ['UserPromptNode', 'ModelRequestNode', 'CallToolsNode']
```

You can also use `wrap_node_run` to modify graph progression — for example, limiting the number of model requests per run:
```python
from dataclasses import dataclass
from typing import Any

from pydantic_graph import End

from pydantic_ai import ModelRequestNode, RunContext
from pydantic_ai.capabilities import AbstractCapability, AgentNode, NodeResult, WrapNodeRunHandler
from pydantic_ai.result import FinalResult

@dataclass
class MaxModelRequests(AbstractCapability[Any]):
    """Limits the number of model requests per run by ending early."""

    max_requests: int = 5
    count: int = 0

    async def for_run(self, ctx: RunContext[Any]) -> 'MaxModelRequests':
        return MaxModelRequests(max_requests=self.max_requests)  # fresh per run

    async def wrap_node_run(
        self, ctx: RunContext[Any], *, node: AgentNode[Any], handler: WrapNodeRunHandler[Any]
    ) -> NodeResult[Any]:
        if isinstance(node, ModelRequestNode):
            self.count += 1
            if self.count > self.max_requests:
                return End(FinalResult(output='Max model requests reached'))
        return await handler(node)
```

See Iterating Over an Agent's Graph for more about the agent graph and its node types.
| Hook | Signature | Purpose |
|---|---|---|
| [before_model_request][pydantic_ai.capabilities.AbstractCapability.before_model_request] | (ctx: [RunContext][pydantic_ai.tools.RunContext], request_context: [ModelRequestContext][pydantic_ai.models.ModelRequestContext]) -> [ModelRequestContext][pydantic_ai.models.ModelRequestContext] | Modify messages, settings, parameters, or model before the model call |
| [after_model_request][pydantic_ai.capabilities.AbstractCapability.after_model_request] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, request_context: [ModelRequestContext][pydantic_ai.models.ModelRequestContext], response: [ModelResponse][pydantic_ai.messages.ModelResponse]) -> [ModelResponse][pydantic_ai.messages.ModelResponse] | Modify the model's response |
| [wrap_model_request][pydantic_ai.capabilities.AbstractCapability.wrap_model_request] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, request_context: [ModelRequestContext][pydantic_ai.models.ModelRequestContext], handler: [WrapModelRequestHandler][pydantic_ai.capabilities.WrapModelRequestHandler]) -> [ModelResponse][pydantic_ai.messages.ModelResponse] | Wrap the model call |
| [on_model_request_error][pydantic_ai.capabilities.AbstractCapability.on_model_request_error] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, request_context: [ModelRequestContext][pydantic_ai.models.ModelRequestContext], error: Exception) -> [ModelResponse][pydantic_ai.messages.ModelResponse] | Handle model request errors (see error hooks) |
[ModelRequestContext][pydantic_ai.models.ModelRequestContext] bundles `model`, `messages`, `model_settings`, and `model_request_parameters` into a single object, making the signature future-proof. To swap the model for a given request, set `request_context.model` to a different [Model][pydantic_ai.models.Model] instance.
To skip the model call entirely and provide a replacement response, raise [SkipModelRequest(response)][pydantic_ai.exceptions.SkipModelRequest] from `before_model_request` or `wrap_model_request`.
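The skip mechanism is a control-flow exception that carries the replacement response. A minimal plain-Python sketch of the pattern (the `SkipRequest` class and caching hook here are hypothetical stand-ins, not the Pydantic AI types):

```python
class SkipRequest(Exception):
    """Illustrative stand-in for SkipModelRequest: carries the replacement response."""

    def __init__(self, response):
        self.response = response

def caching_before_request(messages, cache):
    """Hypothetical caching hook: short-circuits when the answer is already known."""
    key = tuple(messages)
    if key in cache:
        raise SkipRequest(cache[key])  # the model is never called
    return messages

def make_request(messages, cache):
    try:
        messages = caching_before_request(messages, cache)
    except SkipRequest as skip:
        return skip.response           # replacement used as the model response
    return f'model answered: {messages}'

cache = {('hi',): 'cached answer'}
print(make_request(['hi'], cache))
#> cached answer
```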
Tool processing has two phases: validation (parsing and validating the model's JSON arguments against the tool's schema) and execution (running the tool function). Each phase has its own hooks.
All tool hooks receive a `tool_def` parameter with the [ToolDefinition][pydantic_ai.tools.ToolDefinition].
Validation hooks — `args` is the raw `str | dict[str, Any]` from the model before validation, or the validated `dict[str, Any]` after:
| Hook | Signature | Purpose |
|---|---|---|
| [before_tool_validate][pydantic_ai.capabilities.AbstractCapability.before_tool_validate] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [RawToolArgs][pydantic_ai.capabilities.RawToolArgs]) -> [RawToolArgs][pydantic_ai.capabilities.RawToolArgs] | Modify raw args before validation (e.g. JSON repair) |
| [after_tool_validate][pydantic_ai.capabilities.AbstractCapability.after_tool_validate] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs]) -> [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs] | Modify validated args |
| [wrap_tool_validate][pydantic_ai.capabilities.AbstractCapability.wrap_tool_validate] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [RawToolArgs][pydantic_ai.capabilities.RawToolArgs], handler: [WrapToolValidateHandler][pydantic_ai.capabilities.WrapToolValidateHandler]) -> [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs] | Wrap the validation step |
| [on_tool_validate_error][pydantic_ai.capabilities.AbstractCapability.on_tool_validate_error] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [RawToolArgs][pydantic_ai.capabilities.RawToolArgs], error: Exception) -> [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs] | Handle validation errors (see error hooks) |
To skip validation and provide pre-validated args, raise [SkipToolValidation(args)][pydantic_ai.exceptions.SkipToolValidation] from `before_tool_validate` or `wrap_tool_validate`.
Execution hooks — `args` is always the validated `dict[str, Any]`:
| Hook | Signature | Purpose |
|---|---|---|
| [before_tool_execute][pydantic_ai.capabilities.AbstractCapability.before_tool_execute] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs]) -> [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs] | Modify args before execution |
| [after_tool_execute][pydantic_ai.capabilities.AbstractCapability.after_tool_execute] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs], result: Any) -> Any | Modify execution result |
| [wrap_tool_execute][pydantic_ai.capabilities.AbstractCapability.wrap_tool_execute] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs], handler: [WrapToolExecuteHandler][pydantic_ai.capabilities.WrapToolExecuteHandler]) -> Any | Wrap execution |
| [on_tool_execute_error][pydantic_ai.capabilities.AbstractCapability.on_tool_execute_error] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, call: [ToolCallPart][pydantic_ai.messages.ToolCallPart], tool_def: [ToolDefinition][pydantic_ai.tools.ToolDefinition], args: [ValidatedToolArgs][pydantic_ai.capabilities.ValidatedToolArgs], error: Exception) -> Any | Handle execution errors (see error hooks) |
To skip execution and provide a replacement result, raise [SkipToolExecution(result)][pydantic_ai.exceptions.SkipToolExecution] from `before_tool_execute` or `wrap_tool_execute`.
Capabilities can filter or modify which tool definitions the model sees on each step via [prepare_tools][pydantic_ai.capabilities.AbstractCapability.prepare_tools]. This controls tool visibility, not execution — use execution hooks for that.
```python
from dataclasses import dataclass
from typing import Any

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.tools import ToolDefinition

@dataclass
class HideDangerousTools(AbstractCapability[Any]):
    """Hides tools matching certain name prefixes from the model."""

    hidden_prefixes: tuple[str, ...] = ('delete_', 'drop_')

    async def prepare_tools(
        self, ctx: RunContext[Any], tool_defs: list[ToolDefinition]
    ) -> list[ToolDefinition]:
        return [
            td for td in tool_defs if not any(td.name.startswith(p) for p in self.hidden_prefixes)
        ]

agent = Agent('openai:gpt-5.2', capabilities=[HideDangerousTools()])

@agent.tool_plain
def delete_file(path: str) -> str:
    """Delete a file."""
    return f'deleted {path}'

@agent.tool_plain
def read_file(path: str) -> str:
    """Read a file."""
    return f'contents of {path}'

result = agent.run_sync('hello')
# The model only sees `read_file`, not `delete_file`
```

The list includes all tool kinds (function, output, unapproved) — use `tool_def.kind` to distinguish. This hook runs after the agent-level [prepare_tools][pydantic_ai.tools.ToolsPrepareFunc]. For simple cases, the built-in [PrepareTools][pydantic_ai.capabilities.PrepareTools] capability wraps a callable without needing a custom subclass.
For runs with event streaming ([run_stream_events][pydantic_ai.agent.AbstractAgent.run_stream_events], [event_stream_handler][pydantic_ai.agent.Agent.init], UI event streams), capabilities can observe or transform the event stream:
| Hook | Signature | Purpose |
|---|---|---|
| [wrap_run_event_stream][pydantic_ai.capabilities.AbstractCapability.wrap_run_event_stream] | (ctx: [RunContext][pydantic_ai.tools.RunContext], *, stream: AsyncIterable[[AgentStreamEvent][pydantic_ai.messages.AgentStreamEvent]]) -> AsyncIterable[[AgentStreamEvent][pydantic_ai.messages.AgentStreamEvent]] | Observe, filter, or transform streamed events |
```python
from collections.abc import AsyncIterable
from dataclasses import dataclass
from typing import Any

from pydantic_ai import RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.messages import (
    AgentStreamEvent,
    FunctionToolCallEvent,
    FunctionToolResultEvent,
    PartStartEvent,
    TextPart,
)

@dataclass
class StreamAuditor(AbstractCapability[Any]):
    """Logs tool calls and text output during streamed runs."""

    async def wrap_run_event_stream(
        self,
        ctx: RunContext[Any],
        *,
        stream: AsyncIterable[AgentStreamEvent],
    ) -> AsyncIterable[AgentStreamEvent]:
        async for event in stream:
            if isinstance(event, FunctionToolCallEvent):
                print(f'Tool called: {event.part.tool_name}')
            elif isinstance(event, FunctionToolResultEvent):
                print(f'Tool result: {event.tool_return.content!r}')
            elif isinstance(event, PartStartEvent) and isinstance(event.part, TextPart):
                print(f'Text: {event.part.content!r}')
            yield event
```

For building web UIs that transform streamed events into protocol-specific formats (like SSE), see the UI event streams documentation and the [UIEventStream][pydantic_ai.ui.UIEventStream] base class.
Each lifecycle point has an on_*_error hook — the error counterpart to after_*. While after_* hooks fire on success, on_*_error hooks fire on failure (after wrap_* has had its chance to recover):
```
before_X → wrap_X(handler)
    ├─ success ─────────→ after_X (modify result)
    └─ failure → on_X_error
                    ├─ re-raise ──→ (error propagates, after_X not called)
                    └─ recover ───→ after_X (modify recovered result)
```
Error hooks use raise-to-propagate, return-to-recover semantics:
- Raise the original error — propagates the error unchanged (default)
- Raise a different exception — transforms the error
- Return a result — suppresses the error and uses the returned value
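These three outcomes can be sketched in plain Python. The class and function names below are illustrative, not the real hook signatures — the point is the raise-to-propagate, return-to-recover dispatch:

```python
class FallbackOnTimeout:
    """Illustrative on_*_error behavior (plain Python, not the real API)."""

    def on_error(self, error):
        if isinstance(error, TimeoutError):
            return 'fallback result'                 # return: suppress and recover
        if isinstance(error, KeyError):
            raise ValueError('bad key') from error   # raise different: transform
        raise error                                  # re-raise: propagate unchanged

def run_with_error_hook(action, hook):
    try:
        return action()
    except Exception as error:
        return hook.on_error(error)

def times_out():
    raise TimeoutError

print(run_with_error_hook(times_out, FallbackOnTimeout()))
#> fallback result
```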
| Hook | Fires when | Recovery type |
|---|---|---|
| [on_run_error][pydantic_ai.capabilities.AbstractCapability.on_run_error] | Agent run fails | Return [AgentRunResult][pydantic_ai.run.AgentRunResult] |
| [on_node_run_error][pydantic_ai.capabilities.AbstractCapability.on_node_run_error] | Graph node fails | Return next node or [End][pydantic_graph.nodes.End] |
| [on_model_request_error][pydantic_ai.capabilities.AbstractCapability.on_model_request_error] | Model request fails | Return [ModelResponse][pydantic_ai.messages.ModelResponse] |
| [on_tool_validate_error][pydantic_ai.capabilities.AbstractCapability.on_tool_validate_error] | Tool validation fails | Return validated args dict |
| [on_tool_execute_error][pydantic_ai.capabilities.AbstractCapability.on_tool_execute_error] | Tool execution fails | Return any tool result |
from dataclasses import dataclass, field
from typing import Any

from pydantic_ai import RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.messages import ModelResponse, TextPart
from pydantic_ai.models import ModelRequestContext


@dataclass
class ErrorLogger(AbstractCapability[Any]):
    """Logs all errors that occur during agent runs."""

    errors: list[str] = field(default_factory=list)

    async def on_model_request_error(
        self, ctx: RunContext[Any], *, request_context: ModelRequestContext, error: Exception
    ) -> ModelResponse:
        self.errors.append(f'Model error: {error}')
        # Return a fallback response to recover
        return ModelResponse(parts=[TextPart(content='Service temporarily unavailable.')])

    async def on_tool_execute_error(
        self, ctx: RunContext[Any], *, call: Any, tool_def: Any, args: dict[str, Any], error: Exception
    ) -> Any:
        self.errors.append(f'Tool {call.tool_name} failed: {error}')
        raise error  # Re-raise to let the normal retry flow handle it

[WrapperCapability][pydantic_ai.capabilities.WrapperCapability] wraps another capability and delegates all methods to it — similar to [WrapperToolset][pydantic_ai.toolsets.WrapperToolset] for toolsets. Subclass it to override specific methods while delegating the rest:
from dataclasses import dataclass
from typing import Any

from pydantic_ai import RunContext
from pydantic_ai.capabilities import WrapperCapability
from pydantic_ai.models import ModelRequestContext


@dataclass
class AuditedCapability(WrapperCapability[Any]):
    """Wraps any capability and logs its model requests."""

    async def before_model_request(
        self, ctx: RunContext[Any], request_context: ModelRequestContext
    ) -> ModelRequestContext:
        print(f'Request from {type(self.wrapped).__name__}')
        return await super().before_model_request(ctx, request_context)

The built-in [PrefixTools][pydantic_ai.capabilities.PrefixTools] is an example of a WrapperCapability — it wraps another capability and prefixes its tool names.
By default, a capability instance is shared across all runs of an agent. If your capability accumulates mutable state that should not leak between runs, override [for_run][pydantic_ai.capabilities.AbstractCapability.for_run] to return a fresh instance:
from dataclasses import dataclass
from typing import Any

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.models import ModelRequestContext


@dataclass
class RequestCounter(AbstractCapability[Any]):
    """Counts model requests per run."""

    count: int = 0

    async def for_run(self, ctx: RunContext[Any]) -> 'RequestCounter':
        return RequestCounter()  # fresh instance for each run

    async def before_model_request(
        self, ctx: RunContext[Any], request_context: ModelRequestContext
    ) -> ModelRequestContext:
        self.count += 1
        return request_context


counter = RequestCounter()
agent = Agent('openai:gpt-5.2', capabilities=[counter])

# The shared counter stays at 0 because for_run returns a fresh instance
agent.run_sync('first run')
agent.run_sync('second run')
print(counter.count)
#> 0

When multiple capabilities are passed to an agent, they are composed into a single [CombinedCapability][pydantic_ai.capabilities.CombinedCapability]:
- Configuration is merged: instructions concatenate, model settings merge additively (later capabilities override earlier ones), toolsets combine, builtin tools collect.
- Hooks are ordered: `before_*` hooks fire in capability order (`cap1 → cap2 → cap3`); `after_*` hooks fire in reverse order (`cap3 → cap2 → cap1`); `wrap_*` hooks nest as middleware, so `cap1` wraps `cap2` wraps `cap3` wraps the actual operation. The first capability is the outermost layer.
This means the first capability in the list has the first and last say on the operation — it sees the original input in its `wrap_*` hook before calling `handler`, and the final output after `handler` returns.
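The middleware nesting can be sketched in plain Python. This is a simplified model of how `wrap_*` hooks compose, not the actual implementation; all names here are illustrative:

```python
def compose_middleware(wrappers, operation):
    """Nest wrap-style hooks so the first wrapper is the outermost layer."""
    handler = operation
    # Wrap from the inside out: the last capability wraps the bare operation.
    for wrap in reversed(wrappers):
        handler = (lambda w, inner: (lambda x: w(x, inner)))(wrap, handler)
    return handler


order: list[str] = []


def make_wrapper(name):
    def wrap(x, handler):
        order.append(f'{name} before')
        result = handler(x)
        order.append(f'{name} after')
        return result

    return wrap


handler = compose_middleware([make_wrapper('cap1'), make_wrapper('cap2')], lambda x: x * 2)
print(handler(21), order)
#> 42 ['cap1 before', 'cap2 before', 'cap2 after', 'cap1 after']
```

Note that `cap1` both starts first and finishes last — the onion-layer ordering described above.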
A guardrail is a capability that intercepts model requests or responses to enforce safety rules. Here's one that scans model responses for potential PII and redacts it:
import re
from dataclasses import dataclass
from typing import Any

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.messages import ModelResponse, TextPart
from pydantic_ai.models import ModelRequestContext


@dataclass
class PIIRedactionGuardrail(AbstractCapability[Any]):
    """Redacts email addresses and phone numbers from model responses."""

    async def after_model_request(
        self,
        ctx: RunContext[Any],
        *,
        request_context: ModelRequestContext,
        response: ModelResponse,
    ) -> ModelResponse:
        for part in response.parts:
            if isinstance(part, TextPart):
                # Redact email addresses
                part.content = re.sub(
                    r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
                    '[EMAIL REDACTED]',
                    part.content,
                )
                # Redact phone numbers (simple US pattern)
                part.content = re.sub(
                    r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b',
                    '[PHONE REDACTED]',
                    part.content,
                )
        return response


agent = Agent('openai:gpt-5.2', capabilities=[PIIRedactionGuardrail()])
result = agent.run_sync("What's Jane's contact info?")
print(result.output)
#> You can reach Jane at [EMAIL REDACTED] or [PHONE REDACTED].

The `wrap_*` pattern is useful when you need to observe or time both the input and output of an operation. Here's a capability that logs every model request and tool call:
from dataclasses import dataclass
from typing import Any

from pydantic_ai import Agent, RunContext
from pydantic_ai.capabilities import (
    AbstractCapability,
    WrapModelRequestHandler,
    WrapToolExecuteHandler,
)
from pydantic_ai.messages import ModelResponse, ToolCallPart
from pydantic_ai.models import ModelRequestContext
from pydantic_ai.tools import ToolDefinition


@dataclass
class VerboseLogging(AbstractCapability[Any]):
    """Logs model requests and tool executions."""

    async def wrap_model_request(
        self,
        ctx: RunContext[Any],
        *,
        request_context: ModelRequestContext,
        handler: WrapModelRequestHandler,
    ) -> ModelResponse:
        print(f'Model request (step {ctx.run_step}, {len(request_context.messages)} messages)')
        #> Model request (step 1, 1 messages)
        response = await handler(request_context)
        print(f'Model response: {len(response.parts)} parts')
        #> Model response: 1 parts
        return response

    async def wrap_tool_execute(
        self,
        ctx: RunContext[Any],
        *,
        call: ToolCallPart,
        tool_def: ToolDefinition,
        args: dict[str, Any],
        handler: WrapToolExecuteHandler,
    ) -> Any:
        print(f'Tool call: {call.tool_name}({args})')
        result = await handler(args)
        print(f'Tool result: {result!r}')
        return result


agent = Agent('openai:gpt-5.2', capabilities=[VerboseLogging()])
result = agent.run_sync('hello')
print(f'Output: {result.output}')
#> Output: Hello! How can I help you today?

Capabilities are the recommended way for third-party packages to extend Pydantic AI, since they can bundle tools with hooks, instructions, and model settings. See Extensibility for the full ecosystem, including third-party toolsets that can also be wrapped as capabilities.
To add your package to this page, open a pull request.
To make a custom capability usable in agent specs, it needs a [get_serialization_name][pydantic_ai.capabilities.AbstractCapability.get_serialization_name] (defaults to the class name) and a constructor that accepts serializable arguments. The default [from_spec][pydantic_ai.capabilities.AbstractCapability.from_spec] implementation calls `cls(*args, **kwargs)`, so for simple dataclasses no override is needed:
from dataclasses import dataclass
from typing import Any

from pydantic_ai import Agent
from pydantic_ai.agent.spec import AgentSpec
from pydantic_ai.capabilities import AbstractCapability


@dataclass
class RateLimit(AbstractCapability[Any]):
    """Limits requests per minute."""

    rpm: int = 60


# In YAML: `- RateLimit: {rpm: 30}`
# In Python:
agent = Agent.from_spec(
    AgentSpec(model='test', capabilities=[{'RateLimit': {'rpm': 30}}]),
    custom_capability_types=[RateLimit],
)

Users register custom capability types via the `custom_capability_types` parameter on [Agent.from_spec][pydantic_ai.agent.Agent.from_spec] or [Agent.from_file][pydantic_ai.agent.Agent.from_file].
Override [from_spec][pydantic_ai.capabilities.AbstractCapability.from_spec] when the constructor takes types that can't be represented in YAML/JSON. The spec fields should mirror the dataclass fields, but with serializable types:
from collections.abc import Callable
from dataclasses import dataclass, field
from typing import Any

from pydantic_ai import RunContext
from pydantic_ai.capabilities import AbstractCapability
from pydantic_ai.tools import ToolDefinition


@dataclass
class ConditionalTools(AbstractCapability[Any]):
    """Hides the listed tools when a condition is met."""

    condition: Callable[[RunContext[Any]], bool]  # not serializable
    hidden_tools: list[str] = field(default_factory=list)

    @classmethod
    def from_spec(cls, hidden_tools: list[str]) -> 'ConditionalTools':
        # In the spec, there's no condition callable — always hide
        return cls(condition=lambda ctx: True, hidden_tools=hidden_tools)

    async def prepare_tools(
        self, ctx: RunContext[Any], tool_defs: list[ToolDefinition]
    ) -> list[ToolDefinition]:
        if self.condition(ctx):
            return [td for td in tool_defs if td.name not in self.hidden_tools]
        return tool_defs

In YAML this would be `- ConditionalTools: {hidden_tools: [dangerous_tool]}`. In Python code, the full constructor is available: `ConditionalTools(condition=my_check, hidden_tools=['dangerous_tool'])`.
See Extensibility for packaging conventions and the broader extension ecosystem.