Initial Checks
- I'm using the latest version of Pydantic AI
- I've searched for my issue in the issue tracker before opening this issue
Description
After upgrading pydantic-ai-slim from 1.63.0 to 1.64.0+, the behavior of result.new_messages() appears to have changed.
When calling agent.run() with message_history (and without user_prompt), the messages supplied in message_history used to be treated purely as prior context, so result.new_messages() only returned messages generated during the run.
Starting in 1.64.0, new_messages() also returns the user prompt that was provided inside message_history.
For example, the code in the minimal example below now returns the original ModelRequest as part of new_messages(), even though it was part of the input history.
This seems related to changes introduced in this PR:
https://github.com/pydantic/pydantic-ai/pull/4419/changes#diff-71730070455237accebd709ba7fe41ce60edbd238af02e7d86a208cf8f10ab12R1543-R1545
The last version that behaves as expected for me is 1.63.0.
Is this an intended change, or a regression? If intentional, is there a recommended way to get only the messages generated during the run?
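If the new behavior is intentional, one workaround I can imagine (a sketch only, not an official API; the plain strings below stand in for ModelRequest/ModelResponse objects) is to slice result.all_messages() past the length of the history that was passed in:

```python
# Hedged workaround sketch: recover only run-generated messages by slicing
# all_messages() past the supplied history. The string values are
# placeholders for real ModelRequest/ModelResponse objects.

message_history = ["ModelRequest(hi)"]           # the history supplied to agent.run()
all_messages = message_history + [               # what result.all_messages() would return
    "ModelResponse(hello)",
]

# Everything after the supplied history is what the run itself produced.
new_only = all_messages[len(message_history):]
print(new_only)  # ['ModelResponse(hello)']
```

This only works if all_messages() is guaranteed to begin with the supplied history in order, which is an assumption on my part.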
Old behavior (≤1.63.0)
new_messages() returns only messages generated during the run.
New behavior (≥1.64.0)
new_messages() includes the ModelRequest from message_history, meaning the user prompt "hi" is returned even though it was already part of the provided history.
Minimal, Reproducible Example
```python
from pydantic_ai.messages import ModelRequest, UserPromptPart

# `agent` and `deps` are defined elsewhere; only the message history matters here.
result = await agent.run(
    message_history=[
        ModelRequest(
            parts=[
                UserPromptPart(content="hi"),
            ]
        ),
    ],
    deps=deps,
)
print(result.new_messages())
```
Logfire Trace
No response
Python, Pydantic AI & LLM client version
- Python: 3.12
- Pydantic AI: 1.68.0
- LLM provider SDK: doesn't matter