Add tool_choice setting
#3611
Changes from 120 commits
@@ -5,4 +5,4 @@

```
inherited_members: true
members:
- ModelSettings
- UsageLimits
- ToolOrOutput
```

> **devin-ai-integration[bot]** (Contributor, marked as resolved): This replaces `members:`
> - ModelSettings
> - ToolOrOutput
> - UsageLimits
>
> (An earlier auto-review already flagged this — just confirming it's still unfixed.)

> **Collaborator:** Note to self: thoroughly review docs after code and tests are OK
@@ -351,6 +351,72 @@ You can use `prepare_tools` to:

If both per-tool `prepare` and agent-wide `prepare_tools` are used, the per-tool `prepare` is applied first to each tool, and then `prepare_tools` is called with the resulting list of tool definitions.

## Tool Choice {#tool-choice}

The `tool_choice` setting in [`ModelSettings`][pydantic_ai.settings.ModelSettings] controls which tools the model can use during a request. This is useful for disabling tools, forcing tool use, or restricting which tools are available.

!!! warning "Per-Run Setting"

    This feature is a stepping-stone to expanded functionality, like forcing an agent to call tools at specific steps.
    Used directly, the `tool_choice` setting applies to **every model request** in an agent run:

    - `'required'` forces a tool call at every step, not just the first
    - `['tool_a']` restricts the run to that tool for its entire duration, preventing completion if the agent needs output tools

    If you want per-request control, use [direct model requests](direct.md) or the `prepare_tools` function above.

> **Contributor:** @DouweM Per your unresolved comment — this docs section needs a thorough review before merge. A few things I noticed: …
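The per-request alternative can be sketched as a `prepare_tools`-style step filter. Everything below is a self-contained illustration: `StubToolDef`, `StubRunContext`, and the `run_step` field are stand-ins for the library's actual types, not its real API.

```python
from dataclasses import dataclass


@dataclass
class StubToolDef:
    """Stand-in for a tool definition; the real one carries more fields."""

    name: str


@dataclass
class StubRunContext:
    """Stand-in for the run context; `run_step` mirrors knowing which request this is."""

    run_step: int


def prepare_tools(ctx: StubRunContext, tool_defs: list[StubToolDef]) -> list[StubToolDef]:
    # Narrow the tool set on the first step only; later steps see every
    # tool, so the run can still finish normally.
    if ctx.run_step == 1:
        return [t for t in tool_defs if t.name == 'get_weather']
    return tool_defs


tools = [StubToolDef('get_weather'), StubToolDef('get_time')]
first_step = prepare_tools(StubRunContext(run_step=1), tools)
later_step = prepare_tools(StubRunContext(run_step=2), tools)
```

This gives per-request control without touching `tool_choice`, at the cost of writing the step logic yourself.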
Pydantic AI distinguishes between **[function tools](tools.md)** (tools you register via `@agent.tool`, [toolsets](toolsets.md), or [MCP](mcp/client.md)) and **output tools** (internal tools used for [structured output](output.md#tool-output)).
### Options

| Value | Description |
|-------|-------------|
| `'auto'` (default) | Model decides whether to use tools. All tools available. |
| `'none'` | Disable function tools. Model can respond with text or use output tools. |
| `'required'` | Force the model to use a tool. All tools remain available. |
| `['tool_a', ...]` | Restrict to specific tools by name (can include output tool names). |
| [`ToolOrOutput`][pydantic_ai.settings.ToolOrOutput]`(function_tools=['...'])` | Restrict function tools while auto-including all output tools. |
### Example

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel
from pydantic_ai.settings import ToolOrOutput

agent = Agent(TestModel())


@agent.tool_plain
def get_weather(city: str) -> str:
    return f'Sunny in {city}'


@agent.tool_plain
def get_time(city: str) -> str:
    return f'12:00 in {city}'


# Pass tool_choice via model_settings
result = agent.run_sync('Hello', model_settings={'tool_choice': 'none'})

# Use ToolOrOutput to restrict to specific function tools while allowing output
result = agent.run_sync(
    'Hello', model_settings={'tool_choice': ToolOrOutput(function_tools=['get_weather'])}
)
```

> **Collaborator (author), on lines +343 to +366:** these docs will change as seen here https://github.com/dsfaccini/pydantic-ai/pull/2/files once …
### Provider Support

All providers support `'auto'` and `'none'`. Key differences for the other options:

| Provider | `'required'` | Specific tools | Notes |
|----------|:------------:|:--------------:|-------|
| OpenAI | ✓ | ✓ | Full support |
| Anthropic | ⚠️ | ⚠️ | Not supported with thinking enabled |
| Google | ✓ | ✓ | |
| Bedrock | ✓ | Single only | Multiple tools fall back to `'any'` mode |
| Groq/HuggingFace | ✓ | Single only | Multiple tools fall back to `'required'` mode |
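The single-tool restriction on Bedrock can be sketched as a mapping from the resolved choice to the Converse-style `toolChoice` shape. This is an illustrative stand-in, not the library's actual adapter code; Bedrock's `toolChoice` accepts `auto`, `any` (a tool is required), or exactly one named `tool`, which is why multiple specific tools fall back.

```python
def bedrock_tool_choice(resolved):
    # Map a canonical tool_choice to Bedrock's Converse toolChoice shape.
    # Bedrock cannot express "one of these N tools", so a multi-tool
    # restriction falls back to 'any' (required) or 'auto'.
    if resolved == 'auto':
        return {'auto': {}}
    if resolved == 'required':
        return {'any': {}}
    mode, names = resolved  # ('auto' | 'required', [tool names])
    if len(names) == 1:
        return {'tool': {'name': names[0]}}
    return {'any': {}} if mode == 'required' else {'auto': {}}
```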
## Tool Execution and Retries {#tool-retries}

When a tool is executed, its arguments (provided by the LLM) are first validated against the function's signature using Pydantic (with optional [validation context](output.md#validation-context)). If validation fails (e.g., due to incorrect types or missing required arguments), a `ValidationError` is raised, and the framework automatically generates a [`RetryPromptPart`][pydantic_ai.messages.RetryPromptPart] containing the validation details. This prompt is sent back to the LLM, informing it of the error and allowing it to correct the parameters and retry the tool call.
@@ -596,6 +596,21 @@ async def main():

```python
if infer_name and self.name is None:
    self._infer_name(inspect.currentframe())

# Validate tool_choice - 'required' and list[str] would prevent the agent from ever completing
# because they exclude output tools. These settings are only valid for direct model requests.
# TODO(prepare_model_settings): This validation remains correct for static model_settings.
# The hook CAN return 'required' or list[str] for per-step control (e.g., force tool on
# step 1, then allow completion on step 2+). The hook bypasses this validation because
# it applies dynamically per-request, not statically for the entire run.
```

> **Collaborator:** I think the comment is unnecessary because the error already explains why we need this check :)

```python
if model_settings:
```

> **Collaborator:** we need this not just for model settings passed into this method, but also those set on the agent directly. so I think this should be in the agent graph where we combine the model settings before passing them into the model.

```python
    tool_choice = model_settings.get('tool_choice')
    if tool_choice == 'required' or isinstance(tool_choice, list):
        raise exceptions.UserError(
            f'tool_choice={tool_choice!r} is not supported in agent.run() because it prevents '
            'the agent from producing a final response. Use ToolOrOutput to combine specific '
            'tools with output capability, or use model.request() for direct model calls.'
        )

model_used = self._get_model(model)
del model
```
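The guard can be exercised in isolation. This sketch mirrors the check with a plain `ValueError` standing in for `exceptions.UserError`:

```python
def check_tool_choice(model_settings):
    # Mirrors the run-level guard: 'required' and list[str] exclude output
    # tools, so the agent could never produce a final response.
    tool_choice = (model_settings or {}).get('tool_choice')
    if tool_choice == 'required' or isinstance(tool_choice, list):
        raise ValueError(f'tool_choice={tool_choice!r} is not supported in agent.run()')


check_tool_choice({'tool_choice': 'none'})  # accepted
check_tool_choice(None)  # accepted
try:
    check_tool_choice({'tool_choice': ['get_weather']})
    rejected = False
except ValueError:
    rejected = True
```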
@@ -0,0 +1,139 @@ (new file)

```python
from typing import Literal

from typing_extensions import assert_never

from pydantic_ai.exceptions import UserError
from pydantic_ai.models import ModelRequestParameters
from pydantic_ai.settings import ModelSettings, ToolOrOutput

_AutoOrRequired = Literal['auto', 'required']
```

> **Collaborator:** Why a separate type var here? Should we use it instead of …
>
> **Collaborator (author):** I thought you asked for this #3611 (comment)
>
> **Collaborator:** I meant the entire …

```python
ResolvedToolChoice = Literal['none', 'auto', 'required'] | tuple[_AutoOrRequired, list[str]]


def resolve_tool_choice(  # noqa: C901
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
) -> ResolvedToolChoice:
    """Resolve user-facing tool_choice into a canonical form for providers.

    Pydantic AI distinguishes between function tools (e.g. user-registered via @agent.tool)
    and output tools (framework-internal for structured output). The user-facing
    `tool_choice` setting controls function tools only - this function resolves that
    into a canonical form that providers can use, incorporating output tools as needed.

    Args:
        model_settings: Optional settings containing the tool_choice value.
        model_request_parameters: Parameters describing available tools and output configuration.

    Returns:
        A canonical tool_choice value for providers:

        - `'none'`: No tools should be called. Only valid when direct output (text/image) is allowed.
        - `'auto'`: Model chooses whether to use tools. Direct output is allowed.
        - `'required'`: Model must use a tool. Direct output is not allowed.
        - `('auto', tool_names)`: Only these tools are available, direct output is allowed.
        - `('required', tool_names)`: Only these tools are available, must use one.

    Raises:
        UserError: If tool_choice is incompatible with the available tools or output configuration.

    Input behavior:

    - `None` / `'auto'`: Returns `'auto'` if direct output allowed, else `'required'`.
    - `'none'` / `[]`: Disables function tools. If output tools exist, returns them with
      appropriate mode. Otherwise returns `'none'`.
    - `'required'`: Requires function tool use. Raises if no function tools are defined.
    - `list[str]`: Restricts to specified tools with `'required'` mode. Validates tool names.
    - `ToolOrOutput`: Combines specified function tools with all output tools.
      Returns `'auto'` mode if direct output is allowed, otherwise `'required'`.
    """
    function_tool_choice = (model_settings or {}).get('tool_choice')

    allow_direct_output = model_request_parameters.allow_text_output or model_request_parameters.allow_image_output

    available_tools = set(model_request_parameters.tool_defs.keys())

    def _invalid_tools(chosen_tool_names: set[str], available_tools: set[str], *, available_label: str) -> None:
        invalid = chosen_tool_names - available_tools
        if invalid:
            raise UserError(
                f'Invalid tool names in `tool_choice`: {invalid}. {available_label}: {available_tools or "none"}'
            )

    # Default / auto
    if function_tool_choice in (None, 'auto'):
        return 'auto' if allow_direct_output else 'required'

    # none / []: disable function tools, but output tools may still exist
    elif function_tool_choice in ('none', []):
        output_tool_names = [t.name for t in model_request_parameters.output_tools]

        if output_tool_names:
            if allow_direct_output:
                mode: _AutoOrRequired = 'auto'
            elif model_request_parameters.function_tools:
                mode = 'required'
            else:
                return 'required'  # only output tools exist and direct output isn't allowed

            return (mode, output_tool_names)

        if allow_direct_output:
            return 'none'

        # pragma: no cover
        assert False, 'Either output_tools or allow_text_output/allow_image_output must be set'
```

> **Contributor:** The `# pragma: no cover` needs to go on the assert line itself: `assert False, 'Either output_tools or allow_text_output/allow_image_output must be set'  # pragma: no cover`

```python
    # required (only function tools allowed)
    elif function_tool_choice == 'required':
        if not model_request_parameters.function_tools:
            raise UserError(
                '`tool_choice` was set to "required", but no function tools are defined. '
                'Please define function tools or change `tool_choice` to "auto" or "none".'
            )
        return 'required'

    # list[str]: required, restricted to these tools
    elif isinstance(function_tool_choice, list):
        # unique names; doesn't retain order, but that's ok https://github.com/pydantic/pydantic-ai/pull/3611#discussion_r2677595474
        chosen_set = set(function_tool_choice)
        # we'll only raise here if none of the chosen tools are valid https://github.com/pydantic/pydantic-ai/pull/3611#discussion_r2677602549
```

> **Contributor:** The comment on this line links to a GitHub discussion thread, which is fine for internal context during review but shouldn't be in the shipped code. Please remove the URL comment — the behavior is clear enough from the code and can be documented in the docstring if needed.

```python
        if chosen_set - available_tools == chosen_set:
```

> **Collaborator (author):** I very much prefer mine lol
>
> **Contributor:** Per @DouweM's unresolved comment: …

```python
            _invalid_tools(chosen_set, available_tools, available_label='Available tools')
```

> **Collaborator:** I'd prefer to inline the error and remove the helper, as we now have 2 levels of …
>
> **Contributor:** Same comment as the one above — …

```python
        if chosen_set == available_tools:
            return 'required'

        # tests require a deterministic order
        return ('required', sorted(chosen_set))
```

> **Collaborator:** I think it makes sense to let the user specify a …

```python
    # ToolOrOutput: specific function tools + all output tools or direct text/image output
    elif isinstance(function_tool_choice, ToolOrOutput):
        output_tool_names = [t.name for t in model_request_parameters.output_tools]

        # stable order, unique
        if not function_tool_choice.function_tools:
```

> **Collaborator:** Would it make sense to move the "ToolOrOutput with 0 tools" scenario to the … As it stands, the body of this branch is not consistent with that one, which may indicate an issue? (Unsure, haven't verified)
>
> **Contributor:** Per @DouweM's unresolved comment: the …

```python
            if output_tool_names:
                return 'auto' if allow_direct_output else 'required'
            return 'none'

        chosen_function_set = set(function_tool_choice.function_tools)
        all_function_tool_names = {t.name for t in model_request_parameters.function_tools}
```

> **Collaborator:** it's confusing / inconsistent that …
>
> **Contributor:** Per @DouweM's unresolved comment: …

```python
        # same as above - only raise if none are valid
        if not chosen_function_set - all_function_tool_names == chosen_function_set:
            _invalid_tools(
                chosen_function_set,
                all_function_tool_names,
                available_label='Available function tools',
            )

        # tests require a deterministic order
        allowed_tools = sorted([*chosen_function_set, *output_tool_names])
```

> **Collaborator:** juggling sets and lists and needing this to be sorted makes this all harder to follow than it needs to be... if the tests require a specific order, wouldn't that be the most appropriate place to turn the set into a …

```python
        if set(allowed_tools) == available_tools:
            return 'required'
```

> **Collaborator:** This is incorrect if …

```python
        # If direct output is allowed, use 'auto' mode to permit text/image responses
        mode: _AutoOrRequired = 'auto' if allow_direct_output else 'required'
        return (mode, allowed_tools)
    else:
        assert_never(function_tool_choice)
```
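A few of the resolution outcomes above can be checked against a simplified stand-in. This re-implements only the default/`'auto'` and `'none'` branches, assuming at least one function tool is registered; the other branches and all validation are omitted, so it is a sketch of the contract, not the real resolver:

```python
def resolve_simplified(tool_choice, *, allow_direct_output, output_tool_names):
    # Simplified stand-in for resolve_tool_choice covering only the
    # default/'auto' and 'none' branches (assumes function tools exist).
    if tool_choice in (None, 'auto'):
        return 'auto' if allow_direct_output else 'required'
    if tool_choice == 'none':
        if output_tool_names:
            mode = 'auto' if allow_direct_output else 'required'
            return (mode, output_tool_names)
        return 'none'  # direct text/image output is the only way to finish
    raise NotImplementedError(tool_choice)
```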