Changes from 120 commits
163 commits
96bb73d
rebase?
dsfaccini Dec 16, 2025
dcf932b
coverage
dsfaccini Dec 16, 2025
a93f8ba
Merge branch 'main' into tool-choice
dsfaccini Dec 16, 2025
c554d19
linting
dsfaccini Dec 16, 2025
b02e760
dedup bedrock
dsfaccini Dec 17, 2025
1539e67
make google readable
dsfaccini Dec 17, 2025
c844e85
dont set allowed_function_names if not present
dsfaccini Dec 17, 2025
e9b44fb
more readable mistral
dsfaccini Dec 17, 2025
6395d82
remove warning expectation
dsfaccini Dec 17, 2025
e990265
make hf readable
dsfaccini Dec 17, 2025
01bac47
rename method and readable anthropic
dsfaccini Dec 17, 2025
6b4eb77
make openai readable
dsfaccini Dec 17, 2025
747143b
Merge branch 'main' into tool-choice
dsfaccini Dec 17, 2025
190c6be
fix tests
dsfaccini Dec 17, 2025
666a8e9
Merge branch 'main' into tool-choice
dsfaccini Dec 17, 2025
e534d28
Merge branch 'main' into tool-choice
dsfaccini Dec 17, 2025
57cbb68
add error and comment
dsfaccini Dec 17, 2025
412d613
coverage
dsfaccini Dec 17, 2025
54e8114
refactor
DouweM Dec 17, 2025
357071c
ToolsPlusOutput logic
dsfaccini Dec 19, 2025
0f31a4b
undo all tests
dsfaccini Dec 19, 2025
051e8a7
tool_choice unit and mock tests
dsfaccini Dec 19, 2025
c9f90a2
test openai tool choice
dsfaccini Dec 19, 2025
9352163
test responses tool choice
dsfaccini Dec 19, 2025
c858d38
test anthropic tool choice
dsfaccini Dec 19, 2025
374717a
test google tool choice
dsfaccini Dec 19, 2025
d233784
format
dsfaccini Dec 19, 2025
d711ea5
test groq tool choice
dsfaccini Dec 19, 2025
9b2e666
test bedrock tool choice
dsfaccini Dec 21, 2025
3f4ee75
test huggingface tool choice
dsfaccini Dec 21, 2025
e53cc05
test mistral tool choice
dsfaccini Dec 21, 2025
0c24a8c
dirty equals signatures
dsfaccini Dec 21, 2025
41073a8
Merge branch 'main' into tool-choice
dsfaccini Dec 21, 2025
f7f45ce
fix function name
dsfaccini Dec 21, 2025
480f642
update docs
dsfaccini Dec 21, 2025
a041a3c
fix and deduplicate tests
dsfaccini Dec 21, 2025
6b5bd17
fix tests
dsfaccini Dec 22, 2025
0b770a5
irregular chars
dsfaccini Dec 22, 2025
bf9c07d
rerun mistral
dsfaccini Dec 22, 2025
352d577
rerun oair
dsfaccini Dec 22, 2025
0336a0a
rerun hf
dsfaccini Dec 22, 2025
916b008
fix smart quotes problem
dsfaccini Dec 22, 2025
44e0140
patch cassettes
dsfaccini Dec 22, 2025
9d18baf
fix claudemd
dsfaccini Dec 22, 2025
68a9abd
unit tests
dsfaccini Dec 22, 2025
ccbef9d
fix anthropic filtering behavior
dsfaccini Dec 22, 2025
578149f
rename tests
dsfaccini Dec 22, 2025
7e04a89
fix corrupt mistral cassettes
dsfaccini Dec 22, 2025
12e287d
don't send 'none' for mistral
dsfaccini Dec 22, 2025
6341b8c
update docstrings
dsfaccini Dec 22, 2025
70619df
fix tests
dsfaccini Dec 22, 2025
17a81fc
coverage
dsfaccini Dec 22, 2025
d890676
Merge branch 'main' into tool-choice
dsfaccini Dec 23, 2025
64bfd5f
type errors
dsfaccini Dec 23, 2025
a4e87bc
Merge branch 'main' into tool-choice
dsfaccini Dec 23, 2025
7628221
fix guards
dsfaccini Dec 23, 2025
56cb4db
Merge branch 'main' into tool-choice
dsfaccini Dec 23, 2025
6a8dbce
fix snapshots
dsfaccini Dec 23, 2025
b9d3480
merge upstream main
dsfaccini Dec 23, 2025
4276a2a
redo snapshot
dsfaccini Dec 23, 2025
3ab631d
model changes
dsfaccini Dec 23, 2025
74aedad
smallet tool choice docs - move to tools-advanced
dsfaccini Dec 26, 2025
65c388e
more links
dsfaccini Dec 26, 2025
212a684
Merge branch 'main' into tool-choice
dsfaccini Dec 27, 2025
2f18c91
changes requested
dsfaccini Jan 2, 2026
a2abc53
fix tests
dsfaccini Jan 2, 2026
385c2c2
Merge branch 'main' into tool-choice
dsfaccini Jan 2, 2026
b6ee404
change to parametrized
dsfaccini Jan 2, 2026
9ddc7ac
skip if flaky
dsfaccini Jan 5, 2026
876f273
refactor tests
dsfaccini Jan 6, 2026
6c71919
fix ci
dsfaccini Jan 6, 2026
131bba1
Merge branch 'main' into tool-choice
dsfaccini Jan 6, 2026
75a48ce
change region for cassettes
dsfaccini Jan 6, 2026
5bcdcd6
fix tests
dsfaccini Jan 6, 2026
fda6d47
restore model tests for easier review
dsfaccini Jan 6, 2026
0e9b0e4
remove unit tests
dsfaccini Jan 6, 2026
42dcb4d
warn only on user explicit choice
dsfaccini Jan 6, 2026
85333b0
coverage
dsfaccini Jan 6, 2026
174918d
coverage
dsfaccini Jan 6, 2026
4978acd
coverage
dsfaccini Jan 6, 2026
5df25b8
Merge branch 'main' into tool-choice
dsfaccini Jan 8, 2026
fa44b25
rename
dsfaccini Jan 8, 2026
56c79ab
adjust wording in docs
dsfaccini Jan 8, 2026
c518cd9
adjust wording in docs
dsfaccini Jan 8, 2026
b005035
Merge branch 'main' into tool-choice
dsfaccini Jan 9, 2026
b76bfc1
Fix indentation for MistralReferenceChunk check
dsfaccini Jan 9, 2026
a1372e4
Update pydantic_ai_slim/pydantic_ai/settings.py
dsfaccini Jan 10, 2026
8ba2555
rename validate/Validated to resolve/Resolved
dsfaccini Jan 10, 2026
ad09221
stacklevel and _invalid_tool_names
dsfaccini Jan 10, 2026
a78e041
URLs -> mkdocs cross-ref
dsfaccini Jan 10, 2026
598e95d
rename ToolOrOutput
dsfaccini Jan 10, 2026
010f1b8
rename validated_tool_choice -> resolved_... and centralize _AutoOrRe…
dsfaccini Jan 10, 2026
0e22b9b
reverse mode,tuple order
dsfaccini Jan 10, 2026
30c8163
change ifs to elifs
dsfaccini Jan 10, 2026
b6af413
rename to function_tool_choice
dsfaccini Jan 10, 2026
abb388b
sync docs wordings with logic
dsfaccini Jan 10, 2026
dac929e
remove comment
dsfaccini Jan 10, 2026
298b117
comment breaks caching
dsfaccini Jan 10, 2026
7a5dad1
remove order enforcement and change error criteria - update test expe…
dsfaccini Jan 10, 2026
8f4d9fc
adjust resolve_tool_choice docstring
dsfaccini Jan 10, 2026
a5fbe1a
apply updates to `chosen_function_set` and add sortings to fix tests
dsfaccini Jan 10, 2026
9f182d7
apply optimization https://github.com/pydantic/pydantic-ai/pull/3611/…
dsfaccini Jan 10, 2026
a6caac9
swap warning for error
dsfaccini Jan 10, 2026
67b3135
remove warning expectation
dsfaccini Jan 10, 2026
88484b5
google: remove text modality if image requested
dsfaccini Jan 12, 2026
3806a7b
update cases
dsfaccini Jan 12, 2026
666a310
rewrite type defs
dsfaccini Jan 12, 2026
a4dd3de
add docstring
dsfaccini Jan 12, 2026
c892ddb
add agent check + TODOs for the prepare hook to handle
dsfaccini Jan 12, 2026
42a93de
add explicit choice check and soft fallback
dsfaccini Jan 13, 2026
34aacb7
raise on incompatible explicit choice
dsfaccini Jan 13, 2026
3ee953a
add `is_compatible`
dsfaccini Jan 13, 2026
00e8d25
Merge branch 'main' into tool-choice
dsfaccini Jan 13, 2026
e87b446
linting
dsfaccini Jan 13, 2026
0eaa50d
fix tests
dsfaccini Jan 13, 2026
cd5f4ac
fix tests
dsfaccini Jan 13, 2026
a9ad30e
Merge branch 'main' into tool-choice
dsfaccini Jan 13, 2026
1289fb3
Merge branch 'main' into tool-choice
dsfaccini Jan 14, 2026
659e0d7
small refac
dsfaccini Jan 14, 2026
7004f7d
Merge branch 'main' into tool-choice
dsfaccini Jan 14, 2026
929866e
rename is_compatible to supports
dsfaccini Jan 14, 2026
cd16768
groq code reduction
dsfaccini Jan 14, 2026
d9d74f0
unstuck 3.10
dsfaccini Jan 15, 2026
169929d
set
dsfaccini Jan 15, 2026
6866f84
merge main
dsfaccini Jan 29, 2026
422fa15
Merge branch 'main' into tool-choice
dsfaccini Jan 29, 2026
84b10a2
fix type
dsfaccini Jan 29, 2026
23e8bff
replace unit+vcr with matrix test and update implementations
dsfaccini Jan 29, 2026
f2ed614
coverage
dsfaccini Jan 30, 2026
3e61632
Merge branch 'main' into tool-choice
dsfaccini Feb 2, 2026
fa43c89
fix bedrock validation exception when bboth tool choice and thinking
dsfaccini Feb 2, 2026
bc93967
Merge branch 'main' into tool-choice
dsfaccini Feb 3, 2026
29ae029
order doesnt matter
dsfaccini Feb 2, 2026
4aac19d
move check into agent graph
dsfaccini Feb 3, 2026
3890e28
coverage
dsfaccini Feb 3, 2026
7e0824a
coverage/parametrized
dsfaccini Feb 3, 2026
0aa6dff
too many pragmas
dsfaccini Feb 3, 2026
72d3bb5
address devin comments
dsfaccini Feb 4, 2026
87c4ae9
Merge branch 'main' into tool-choice
dsfaccini Feb 9, 2026
d6f7c87
address review notes
dsfaccini Feb 16, 2026
e277650
Merge branch 'main' into tool-choice
dsfaccini Feb 16, 2026
680fab8
Merge branch 'main' into tool-choice
dsfaccini Mar 3, 2026
173aa32
Merge branch 'main' into tool-choice
dsfaccini Mar 3, 2026
1fd0bcb
fix: resolve CI failures in HuggingFace tool_choice tests
dsfaccini Mar 4, 2026
a8b6636
Merge branch 'main' into tool-choice
dsfaccini Mar 5, 2026
3adb3ab
docs: clarify that _validate_tool_choice receives merged model_settings
dsfaccini Mar 5, 2026
215ef75
Merge branch 'main' into tool-choice
dsfaccini Apr 8, 2026
7d3d3e8
fix: update test_capabilities snapshot for tool_choice schema
dsfaccini Apr 8, 2026
22a30ad
fix: remove unused _get_tools method superseded by _prepare_tools_and…
dsfaccini Apr 8, 2026
ff48ef6
fix: _support_tool_forcing checks both anthropic_thinking and unified…
dsfaccini Apr 8, 2026
85261fe
fix: clean up docstring TODOs and broken links in tool_choice docs
dsfaccini Apr 8, 2026
457ca54
Merge branch 'main' into tool-choice
dsfaccini Apr 9, 2026
bba4d65
fix: remove pyright ignore, redundant guard, and minor cleanups
dsfaccini Apr 9, 2026
0ae5b8e
Merge remote-tracking branch 'upstream/main' into tool-choice
dsfaccini Apr 21, 2026
32d1a59
Address PR review: relax tool_choice validator, fix Bedrock/Anthropic…
dsfaccini Apr 22, 2026
b1ffee3
Fix OpenAI Responses dropping tool_choice with builtin-only tools
dsfaccini Apr 22, 2026
181fa8d
Cover unified `thinking` branch in Bedrock `_is_thinking_enabled`
dsfaccini Apr 22, 2026
2fa237f
Check `_support_tool_forcing` in OpenAI tuple branches
dsfaccini Apr 22, 2026
e1add20
Hoist `direct_model_request` and `OpenAIModelProfile` imports to top
dsfaccini Apr 22, 2026
78ebfff
Split Bedrock thinking+output-tools test instead of parametrizing
dsfaccini Apr 22, 2026
4898065
Warn on partial-invalid tool names in `tool_choice` list
dsfaccini Apr 23, 2026
d235a2f
Filter `tool_defs` in single-tool fallback when forcing isn't supported
dsfaccini May 1, 2026
f47ded9
Merge remote-tracking branch 'upstream/main' into tool-choice
dsfaccini May 1, 2026
2 changes: 1 addition & 1 deletion docs/api/settings.md
@@ -5,4 +5,4 @@
inherited_members: true
members:
- ModelSettings
- UsageLimits
- ToolOrOutput
Comment thread: dsfaccini marked this conversation as resolved.
Comment thread: devin-ai-integration[bot] marked this conversation as resolved.
Contributor comment:

This replaces UsageLimits with ToolOrOutput instead of keeping both. UsageLimits should remain in this API docs page — it was there for a reason. Please add ToolOrOutput as a new member rather than replacing the existing one:

members:
  - ModelSettings
  - ToolOrOutput
  - UsageLimits

(An earlier auto-review already flagged this — just confirming it's still unfixed.)

66 changes: 66 additions & 0 deletions docs/tools-advanced.md
Collaborator comment:

Note to self: thoroughly review docs after code and tests are OK

@@ -351,6 +351,72 @@

If both per-tool `prepare` and agent-wide `prepare_tools` are used, the per-tool `prepare` is applied first to each tool, and then `prepare_tools` is called with the resulting list of tool definitions.
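The ordering described above can be sketched with plain Python. This is a minimal stand-in, not the pydantic_ai API: `ToolDef`, `per_tool_prepare`, and `prepare_tools` here are illustrative names, and the filtering rule is invented for the example.

```python
from __future__ import annotations

from dataclasses import dataclass, replace


@dataclass
class ToolDef:
    name: str
    description: str = ''


def per_tool_prepare(tool: ToolDef) -> ToolDef | None:
    # Per-tool `prepare`: runs first, on each tool individually; may modify or drop it.
    if tool.name == 'debug_dump':
        return None  # hide this tool
    return replace(tool, description=tool.description or f'Calls {tool.name}')


def prepare_tools(tools: list[ToolDef]) -> list[ToolDef]:
    # Agent-wide `prepare_tools`: runs second, on the already-prepared list.
    return sorted(tools, key=lambda t: t.name)


raw = [ToolDef('get_time'), ToolDef('debug_dump'), ToolDef('get_weather')]
prepared = [t for t in (per_tool_prepare(t) for t in raw) if t is not None]
final = prepare_tools(prepared)
print([t.name for t in final])  # ['get_time', 'get_weather']
```

The key point is the sequencing: the per-tool hook never sees the full list, and the agent-wide hook never sees tools the per-tool hook already dropped.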

## Tool Choice {#tool-choice}

The `tool_choice` setting in [`ModelSettings`][pydantic_ai.settings.ModelSettings] controls which tools the model can use during a request. This is useful for disabling tools, forcing tool use, or restricting which tools are available.

!!! warning "Per-Run Setting"
Contributor comment:
@DouweM Per your unresolved comment — this docs section needs a thorough review before merge. A few things I noticed:

  1. The warning admonition mentions "prepare_tools function above" as a workaround for per-request control, but prepare_tools operates on tool definitions (filtering/modifying them), not on tool_choice. This is misleading — prepare_tools won't help users set tool_choice per-step.

  2. The test="skip" attribute is missing from the code example (line 381). Per project guidelines, doc examples should be tested unless there's a good reason not to. This example uses TestModel() so it should be testable.

  3. The provider support table (lines 412–420) is missing Mistral and xAI entries — both are implemented in this PR. (This was flagged in an earlier auto-review and appears still unfixed.)

    This feature is a stepping-stone to expanded functionality, like forcing an agent to call tools at specific steps.
    Used directly, the `tool_choice` setting applies to **every model request** in an agent run:

    - `'required'` forces a tool call at every step, not just the first
    - `['tool_a']` restricts to that tool for the entire run, preventing completion if the agent needs output tools

    If you want to use it for per-request control, use [direct model requests](direct.md) or the `prepare_tools` function above.

Pydantic AI distinguishes between **[function tools](tools.md)** (tools you register via `@agent.tool`, [toolsets](toolsets.md), or [MCP](mcp/client.md)), and **output tools** (internal tools used for [structured output](output.md#tool-output)).

### Options

| Value | Description |
|-------|-------------|
| `'auto'` (default) | Model decides whether to use tools. All tools available. |
| `'none'` | Disable function tools. Model can respond with text or use output tools. |
| `'required'` | Force the model to use a tool. All tools remain available. |
| `['tool_a', ...]` | Restrict to specific tools by name (can include output tool names). |
| [`ToolOrOutput`][pydantic_ai.settings.ToolOrOutput]`(function_tools=['...'])` | Restrict function tools while auto-including all output tools. |

Comment threads on the two rows above were marked as resolved by dsfaccini (outdated).

### Example

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel
from pydantic_ai.settings import ToolOrOutput

agent = Agent(TestModel())

@agent.tool_plain
def get_weather(city: str) -> str:
    return f'Sunny in {city}'


@agent.tool_plain
def get_time(city: str) -> str:
    return f'12:00 in {city}'


# Pass tool_choice via model_settings
result = agent.run_sync('Hello', model_settings={'tool_choice': 'none'})

# Use ToolOrOutput to restrict to specific function tools while allowing output
result = agent.run_sync(
    'Hello', model_settings={'tool_choice': ToolOrOutput(function_tools=['get_weather'])}
)
```

Collaborator Author comment (on lines +343 to +366): these docs will change as seen here https://github.com/dsfaccini/pydantic-ai/pull/2/files once prepare_model_settings drops.

### Provider Support

All providers support `'auto'` and `'none'`. Key differences for other options:

| Provider | `'required'` | Specific tools | Notes |
|----------|:------------:|:--------------:|-------|
| OpenAI | ✓ | ✓ | Full support |
| Anthropic | ⚠️ | ⚠️ | Not supported with thinking enabled |
| Google | ✓ | ✓ | |
| Bedrock | ✓ | Single only | Multiple tools fall back to 'any' mode |
| Groq/HuggingFace | ✓ | Single only | Multiple tools fall back to 'required' mode |
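The fallbacks in the table can be sketched provider-agnostically. The payload shapes below are loose illustrations of the behavior the table describes, not the actual provider client code in this PR; `map_tool_choice` and the dict shapes are assumptions for the example.

```python
def map_tool_choice(provider: str, resolved):
    """Illustrative mapping from a resolved tool_choice to a provider payload.

    `resolved` is 'none' | 'auto' | 'required' | (mode, [tool_names]),
    mirroring the canonical form this PR resolves to.
    """
    if not isinstance(resolved, tuple):
        return resolved  # 'none' / 'auto' / 'required' pass through
    mode, names = resolved
    if provider == 'openai':
        # OpenAI can force one named function directly.
        if len(names) == 1:
            return {'type': 'function', 'function': {'name': names[0]}}
        return {'mode': mode, 'tools': names}
    if provider == 'bedrock':
        # Bedrock can only force a single tool; multiple fall back to 'any'.
        return {'tool': {'name': names[0]}} if len(names) == 1 else {'any': {}}
    if provider in ('groq', 'huggingface'):
        # Groq/HuggingFace: multiple specific tools fall back to 'required'.
        return names[0] if len(names) == 1 else 'required'
    return {'mode': mode, 'allowed': names}


print(map_tool_choice('bedrock', ('required', ['a', 'b'])))  # {'any': {}}
print(map_tool_choice('groq', ('required', ['a', 'b'])))     # required
```

The design consequence is that on single-tool-only providers, restricting to multiple tools degrades to "any tool", so callers should not rely on the restriction being enforced there.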

Comment thread: dsfaccini marked this conversation as resolved.
## Tool Execution and Retries {#tool-retries}

When a tool is executed, its arguments (provided by the LLM) are first validated against the function's signature using Pydantic (with optional [validation context](output.md#validation-context)). If validation fails (e.g., due to incorrect types or missing required arguments), a `ValidationError` is raised, and the framework automatically generates a [`RetryPromptPart`][pydantic_ai.messages.RetryPromptPart] containing the validation details. This prompt is sent back to the LLM, informing it of the error and allowing it to correct the parameters and retry the tool call.
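The retry flow above can be sketched without pydantic_ai. Everything here is a hypothetical stand-in: `validate_args` plays the role of Pydantic validation, `fake_model` plays the model, and the error string fed back stands in for `RetryPromptPart`.

```python
def validate_args(args: dict) -> dict:
    # Stand-in for Pydantic validation against the tool's signature.
    if not isinstance(args.get('city'), str):
        raise ValueError('city: expected a string')
    return args


def fake_model(prompt: str, attempt: int) -> dict:
    # First attempt returns badly typed args; the retry prompt "fixes" them.
    return {'city': 123} if attempt == 0 else {'city': 'Paris'}


def run_tool_with_retries(max_retries: int = 1) -> str:
    prompt = 'call get_weather'
    for attempt in range(max_retries + 1):
        args = fake_model(prompt, attempt)
        try:
            validated = validate_args(args)
        except ValueError as e:
            # Equivalent of RetryPromptPart: feed the validation error back to the model.
            prompt = f'call get_weather; previous call failed validation: {e}'
            continue
        return f"Sunny in {validated['city']}"
    raise RuntimeError('tool failed after retries')


print(run_tool_with_retries())  # Sunny in Paris
```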
15 changes: 15 additions & 0 deletions pydantic_ai_slim/pydantic_ai/agent/__init__.py
@@ -596,6 +596,21 @@
if infer_name and self.name is None:
    self._infer_name(inspect.currentframe())

# Validate tool_choice - 'required' and list[str] would prevent the agent from ever completing
# because they exclude output tools. These settings are only valid for direct model requests.
# TODO(prepare_model_settings): This validation remains correct for static model_settings.
# The hook CAN return 'required' or list[str] for per-step control (e.g., force tool on
# step 1, then allow completion on step 2+). The hook bypasses this validation because
# it applies dynamically per-request, not statically for the entire run.
Collaborator comment:
I think the comment is unnecessary because the error already explains why we need this check :)

if model_settings:
Collaborator comment:
we need this not just for model settings passed into this method, but also those set on the agent directly. so I think this should be in the agent graph where we combine the model settings before passing them into the model.

    tool_choice = model_settings.get('tool_choice')
    if tool_choice == 'required' or isinstance(tool_choice, list):
        raise exceptions.UserError(
            f'tool_choice={tool_choice!r} is not supported in agent.run() because it prevents '
            f'the agent from producing a final response. Use ToolOrOutput to combine specific '
            f'tools with output capability, or use model.request() for direct model calls.'
Collaborator comment:
  • backticks for code snippets please, also in error messages
  • not model.request, but pydantic_ai.direct.model_request
  • as this will also cover tool_choice set on Agent(model_settings), I'd not mention agent.run (also because there are other methods that go through here!) but instead say smth like Function tool use cannot be forced for an entire agent run as it would prevent the agent from ever completing. Use ...

        )

model_used = self._get_model(model)
del model

139 changes: 139 additions & 0 deletions pydantic_ai_slim/pydantic_ai/models/_tool_choice.py
@@ -0,0 +1,139 @@
from typing import Literal

from typing_extensions import assert_never

from pydantic_ai.exceptions import UserError
from pydantic_ai.models import ModelRequestParameters
from pydantic_ai.settings import ModelSettings, ToolOrOutput

_AutoOrRequired = Literal['auto', 'required']
Collaborator comment:
Why a separate type var here? Should we use it instead of 'auto', 'required' below? Note that I'd be OK dropping it entirely

Collaborator Author comment:
I thought you asked for this #3611 (comment)

Collaborator comment:
I meant the entire auto if direct_output else required thing, not the type 😄

ResolvedToolChoice = Literal['none', 'auto', 'required'] | tuple[_AutoOrRequired, list[str]]


def resolve_tool_choice( # noqa: C901
    model_settings: ModelSettings | None,
    model_request_parameters: ModelRequestParameters,
) -> ResolvedToolChoice:
    """Resolve user-facing tool_choice into a canonical form for providers.

    Pydantic AI distinguishes between function tools (e.g. user-registered via @agent.tool)
Comment thread: dsfaccini marked this conversation as resolved.
    and output tools (framework-internal for structured output). The user-facing
    `tool_choice` setting controls function tools only; this function resolves that
    into a canonical form that providers can use, incorporating output tools as needed.

    Args:
        model_settings: Optional settings containing the tool_choice value.
        model_request_parameters: Parameters describing available tools and output configuration.

    Returns:
        A canonical tool_choice value for providers:

        - `'none'`: No tools should be called. Only valid when direct output (text/image) is allowed.
        - `'auto'`: Model chooses whether to use tools. Direct output is allowed.
        - `'required'`: Model must use a tool. Direct output is not allowed.
        - `('auto', tool_names)`: Only these tools are available; direct output is allowed.
        - `('required', tool_names)`: Only these tools are available; the model must use one.
Comment thread: dsfaccini marked this conversation as resolved. (Outdated)

    Raises:
        UserError: If tool_choice is incompatible with the available tools or output configuration.

    Input behavior:
Comment thread: dsfaccini marked this conversation as resolved.

    - `None` / `'auto'`: Returns `'auto'` if direct output allowed, else `'required'`.
    - `'none'` / `[]`: Disables function tools. If output tools exist, returns them with
      the appropriate mode. Otherwise returns `'none'`.
    - `'required'`: Requires function tool use. Raises if no function tools are defined.
    - `list[str]`: Restricts to the specified tools with `'required'` mode. Validates tool names.
    - `ToolOrOutput`: Combines the specified function tools with all output tools.
      Returns `'auto'` mode if direct output is allowed, otherwise `'required'`.
    """
    function_tool_choice = (model_settings or {}).get('tool_choice')

    allow_direct_output = model_request_parameters.allow_text_output or model_request_parameters.allow_image_output

    available_tools = set(model_request_parameters.tool_defs.keys())

    def _invalid_tools(chosen_tool_names: set[str], available_tools: set[str], *, available_label: str) -> None:
        invalid = chosen_tool_names - available_tools
        if invalid:
            raise UserError(
                f'Invalid tool names in `tool_choice`: {invalid}. {available_label}: {available_tools or "none"}'
            )

    # Default / auto
    if function_tool_choice in (None, 'auto'):
        return 'auto' if allow_direct_output else 'required'

    # none / []: disable function tools, but output tools may still exist
    elif function_tool_choice in ('none', []):
        output_tool_names = [t.name for t in model_request_parameters.output_tools]

        if output_tool_names:
            if allow_direct_output:
                mode: _AutoOrRequired = 'auto'
            elif model_request_parameters.function_tools:
                mode = 'required'
            else:
                return 'required'  # only output tools exist and direct output isn't allowed

            return (mode, output_tool_names)

        if allow_direct_output:
            return 'none'

Contributor comment:
The # pragma: no cover comment is on the wrong line — it's on line 83 as a standalone comment, but pragma directives need to be on the same line as the code they apply to (line 84). Also, the assert False here is the right approach for an invariant that should never be reached, but the pragma should be # pragma: no cover on the assert line itself:

assert False, 'Either output_tools or allow_text_output/allow_image_output must be set'  # pragma: no cover

        assert False, 'Either output_tools or allow_text_output/allow_image_output must be set'  # pragma: no cover

    # required (only function tools allowed)
    elif function_tool_choice == 'required':
        if not model_request_parameters.function_tools:
Comment thread: DouweM marked this conversation as resolved.
            raise UserError(
                '`tool_choice` was set to "required", but no function tools are defined. '
                'Please define function tools or change `tool_choice` to "auto" or "none".'
            )
        return 'required'

    # list[str]: required, restricted to these tools
    elif isinstance(function_tool_choice, list):
        # unique names; doesn't retain order, but that's ok https://github.com/pydantic/pydantic-ai/pull/3611#discussion_r2677595474
Comment thread: dsfaccini marked this conversation as resolved. (Outdated)
        chosen_set = set(function_tool_choice)
        # we'll only raise here if none of the chosen tools are valid https://github.com/pydantic/pydantic-ai/pull/3611#discussion_r2677602549
Contributor comment:
The comment on this line links to a GitHub discussion thread, which is fine for internal context during review but shouldn't be in the shipped code. Please remove the URL comment — the behavior is clear enough from the code and can be documented in the docstring if needed.

        if chosen_set - available_tools == chosen_set:
Collaborator comment:
if not (chosen_set & available_tools) is more clear isn't it?

Collaborator Author comment:
I very much prefer mine lol

Contributor comment:
Per @DouweM's unresolved comment: if not (chosen_set & available_tools) reads more naturally as "if there's no overlap". The chosen_set - available_tools == chosen_set pattern requires a moment to parse.

            _invalid_tools(chosen_set, available_tools, available_label='Available tools')
Collaborator comment:
I'd prefer to inline the error and remove the helper, as we now have 2 levels of ifs and I'd prefer to have just one so it's more clear in what scenario we actually raise.

Contributor comment:
Same comment as the one above: `if not (chosen_function_set & all_function_tool_names)` would be clearer here too.


        if chosen_set == available_tools:
            return 'required'

        # tests require a deterministic order
        return ('required', sorted(chosen_set))
Collaborator comment:
I think it make sense to let the user specific a list[str], but it makes for cleaner data if this method returns a set[str] by itself or in the tuple. That way we don't have to convert from a set back to a list, and the consumer knows that this is already unique + order doesn't matter.


    # ToolOrOutput: specific function tools + all output tools or direct text/image output
    elif isinstance(function_tool_choice, ToolOrOutput):
        output_tool_names = [t.name for t in model_request_parameters.output_tools]

        # stable order, unique
Comment thread: dsfaccini marked this conversation as resolved. (Outdated)
        if not function_tool_choice.function_tools:
Collaborator comment:
Would it make sense to move the "ToolOrOutput with 0 tools" scenario to the if function_tool_choice in ('none', []): branch since I think we'd want to handle it the same?

As it stands, the body of this branch is not consistent with that one, which may indicate an issue? (Unsure, haven't verified)

Contributor comment:
Per @DouweM's unresolved comment: the ToolOrOutput with 0 function tools case (lines 111–115) does essentially the same thing as the 'none' / [] branch above it. Consider merging them, or at least noting why they're separate, to reduce duplication and make the behavior clearer.

            if output_tool_names:
                return 'auto' if allow_direct_output else 'required'
            return 'none'
Comment threads: devin-ai-integration[bot] and dsfaccini marked these conversations as resolved.

        chosen_function_set = set(function_tool_choice.function_tools)
        all_function_tool_names = {t.name for t in model_request_parameters.function_tools}
Collaborator comment:
it's confusing / inconsistent that output_tool_names is a list, while all_function_tool_names is a set. I'd rather use sets consistently

Contributor comment:
Per @DouweM's unresolved comment: output_tool_names is built as a set (line 109) but all_function_tool_names is also a set (line 118) — good — but earlier in the 'none' branch (line 68), output_tool_names is also a set. The inconsistency is mainly that chosen_function_set is a set, function_tool_choice.function_tools is a list, and the tuple values are set[str]. Please use sets consistently throughout to avoid unnecessary conversions.

        # same as above - only raise if none are valid
        if not chosen_function_set - all_function_tool_names == chosen_function_set:
Comment thread: dsfaccini marked this conversation as resolved. (Outdated)
            _invalid_tools(
                chosen_function_set,
                all_function_tool_names,
                available_label='Available function tools',
            )
Comment threads: dsfaccini marked two conversations as resolved. (Outdated)

        # tests require a deterministic order
        allowed_tools = sorted([*chosen_function_set, *output_tool_names])
Collaborator comment:
juggling sets and lists and needing this to be sorted makes this all harder to follow than it needs to be... if the tests require a specific order, wouldn't that be the most appropriate place to turn the set into a sorted list?

        if set(allowed_tools) == available_tools:
            return 'required'
Collaborator comment:
This is incorrect if allow_direct_output is enabled, right? Because this would end up accidentally disallowing direct output, while we should just return auto in this case.


        # If direct output is allowed, use 'auto' mode to permit text/image responses
        mode: _AutoOrRequired = 'auto' if allow_direct_output else 'required'
        return (mode, allowed_tools)
    else:
        assert_never(function_tool_choice)