docs: Explain tool execution, validation, and retries #1578
This PR adds a new section to the `docs/tools.md` documentation explaining how tool execution, validation, and retries work in PydanticAI.

Justification: information about how tool execution errors (such as a `ValidationError`) and explicit retries (`ModelRetry`) are handled is currently not easily discoverable via documentation search. I was looking for this in the tools section of the documentation and couldn't find it even after a few tries. Adding a dedicated section aims to improve findability and give users a clearer understanding of how tools handle errors and retries during execution.

Key points covered:

- The agent sends the model a `RetryPromptPart` when a `ValidationError` occurs, allowing the LLM to correct the tool-call parameters.
- Tools can raise the `ModelRetry` exception to explicitly request a retry from their own logic (e.g., for transient errors or invalid inputs not caught by type validation).
- Both `ValidationError` and `ModelRetry` respect the configured `retries` setting.
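To make the retry flow concrete, here is a minimal plain-Python sketch of the mechanics the new docs section describes. This is *not* PydanticAI's actual implementation: `run_tool`, `validate`, and the local `ModelRetry` class are simplified stand-ins, and the "model correcting its arguments" step is simulated inline. The point is only to show how both validation failures and explicit `ModelRetry` requests feed error text back toward the model and count against a shared `retries` budget.

```python
class ModelRetry(Exception):
    """Raised by a tool to ask the model to try the call again (stand-in)."""


def run_tool(tool, args, validate, retries=1):
    """Call `tool` with `args`; on failure, collect retry feedback up to `retries` times."""
    feedback = []  # error messages that would be sent back to the LLM as retry prompts
    for attempt in range(retries + 1):
        try:
            validated = validate(args)  # parameter validation (ValidationError analogue)
            return tool(validated)      # the tool body itself may raise ModelRetry
        except (ValueError, ModelRetry) as exc:
            feedback.append(f"retry {attempt + 1}: {exc}")
            # In a real agent loop the model would now emit corrected arguments;
            # here we simulate that correction directly.
            args = {"x": abs(args.get("x", 0)) or 1}
    raise RuntimeError(f"tool failed after {retries} retries: {feedback}")


# Usage: a tool that rejects non-positive inputs via an explicit retry request.
def double_positive(args):
    if args["x"] <= 0:
        raise ModelRetry("x must be positive")
    return args["x"] * 2


def validate(args):
    if not isinstance(args.get("x"), int):
        raise ValueError("x must be an int")
    return args


print(run_tool(double_positive, {"x": -3}, validate, retries=2))  # → 6
```

With `retries=2`, the first attempt fails with `ModelRetry`, the simulated correction flips `x` to `3`, and the second attempt succeeds; with `retries=0` the same call would exhaust its budget and raise.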