Initial Checks
- I'm using the latest version of Pydantic AI
- I've searched for my issue in the issue tracker before opening this issue
Description
When a tool returns a `ToolReturn` with `metadata`, the metadata is
correctly propagated to `ToolReturnPart.metadata` in non-Temporal
agent runs. However, when the same tool runs inside a `TemporalAgent`,
the `metadata` field is silently dropped — `ToolReturnPart.metadata`
is always `None`.
Per the `ToolReturn` documentation:

> metadata: Optional metadata that your application can access but
> is not sent to the LLM. Useful for logging, debugging, or additional
> processing.
Root Cause
The issue is in `pydantic_ai/durable_exec/temporal/_toolset.py`.
The Temporal wrapper uses an intermediate `_ToolReturn` dataclass to
serialize tool results across the activity boundary:
```python
# _toolset.py lines 55-58
@dataclass
class _ToolReturn:
    result: ToolReturnContent  # <-- only captures the raw return value
    kind: Literal['tool_return'] = 'tool_return'
```

When a tool returns a `ToolReturn` object:
- `_wrap_call_tool_result` (lines 95-98) wraps the entire
  `ToolReturn` instance as `_ToolReturn(result=<ToolReturn object>)`.
- Temporal serializes `_ToolReturn` to JSON for the activity result.
  The `ToolReturn` dataclass becomes a plain dict:
  `{"return_value": "...", "metadata": {...}, "content": null, "kind": "tool-return"}`.
- On deserialization back in the workflow, `_ToolReturn.result` is
  typed as `ToolReturnContent`, so Pydantic/Temporal reconstructs it as
  a plain `dict`, not a `ToolReturn` instance.
- `_unwrap_call_tool_result` (lines 106-108) returns
  `result.result` — which is now a plain dict.
- Back in `_agent_graph.py` line 1360, the check
  `isinstance(tool_result, ToolReturn)` returns `False` (it's a dict).
- The else branch (lines 1362-1385) wraps it as a generic return value
  and constructs a `ToolReturnPart` without metadata.
The `content` field of `ToolReturn` is also lost by the same mechanism.
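The type-erasing round-trip described above can be reproduced without Temporal at all. Below is a minimal sketch using plain JSON serialization in place of Temporal's data converter, with simplified stand-in dataclasses rather than the real Pydantic AI types:

```python
import json
from dataclasses import asdict, dataclass
from typing import Any

@dataclass
class ToolReturn:  # simplified stand-in for pydantic_ai.ToolReturn
    return_value: Any
    content: Any = None
    metadata: Any = None

@dataclass
class _ToolReturn:  # loosely typed wrapper, like the real result field
    result: Any

# Workflow side: the whole ToolReturn is stuffed into the wrapper's result field
wrapped = _ToolReturn(result=ToolReturn("Processed: test", metadata={"request_id": "abc-123"}))

# Activity boundary: serialize to JSON and back, as a data converter would
wire = json.dumps(asdict(wrapped))
restored = _ToolReturn(**json.loads(wire))

# The nested dataclass comes back as a plain dict, so a downstream
# isinstance(tool_result, ToolReturn) check fails
print(type(restored.result))  # <class 'dict'>
```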
Suggested Fix
The `_ToolReturn` dataclass should preserve all three fields of
`ToolReturn` so they survive the serialization round-trip:
```python
@dataclass
class _ToolReturn:
    result: ToolReturnContent
    content: Sequence[UserContent] | None = None
    metadata: Any = None
    kind: Literal['tool_return'] = 'tool_return'
```

And the wrap/unwrap methods should decompose and reconstruct:
```python
async def _wrap_call_tool_result(self, coro: Awaitable[Any]) -> CallToolResult:
    try:
        result = await coro
        if isinstance(result, ToolReturn):
            return _ToolReturn(
                result=result.return_value,
                content=result.content,
                metadata=result.metadata,
            )
        return _ToolReturn(result=result)
    except ApprovalRequired as e:
        return _ApprovalRequired(metadata=e.metadata)
    except CallDeferred as e:
        return _CallDeferred(metadata=e.metadata)
    except ModelRetry as e:
        return _ModelRetry(message=e.message)

def _unwrap_call_tool_result(self, result: CallToolResult) -> Any:
    if isinstance(result, _ToolReturn):
        if result.content is not None or result.metadata is not None:
            return ToolReturn(
                return_value=result.result,
                content=result.content,
                metadata=result.metadata,
            )
        return result.result
    elif isinstance(result, _ApprovalRequired):
        raise ApprovalRequired(metadata=result.metadata)
    elif isinstance(result, _CallDeferred):
        raise CallDeferred(metadata=result.metadata)
    elif isinstance(result, _ModelRetry):
        raise ModelRetry(result.message)
    else:
        assert_never(result)
```

Impact
Any user of `TemporalAgent` who relies on `ToolReturn.metadata` or
`ToolReturn.content` will find these fields silently set to `None` in
the resulting message history. There is no error or warning — the data
is simply lost.
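As a sanity check that the suggested three-field shape actually survives the boundary, here is a JSON round-trip sketch (again a simplified stand-in, with `kind` as a plain string rather than a `Literal`):

```python
import json
from dataclasses import asdict, dataclass
from typing import Any

@dataclass
class _ToolReturn:  # simplified stand-in for the proposed dataclass
    result: Any
    content: Any = None
    metadata: Any = None
    kind: str = 'tool_return'

original = _ToolReturn(result="Processed: test", metadata={"request_id": "abc-123"})

# Round-trip through JSON, standing in for Temporal's payload converter
restored = _ToolReturn(**json.loads(json.dumps(asdict(original))))

# All three user-facing fields survive, so _unwrap_call_tool_result can
# rebuild a real ToolReturn on the workflow side
print(restored.metadata)  # {'request_id': 'abc-123'}
```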
Workaround
Encode the metadata into the return_value string itself (e.g. as a
JSON-encoded prefix or structured string) and parse it back out in the
workflow. Alternatively, scan ToolCallPart.args from the
ModelResponse messages instead of relying on ToolReturnPart.metadata.
Minimal, Reproducible Example
```python
from pydantic_ai import Agent, ToolReturn

agent = Agent("openai:gpt-4o-mini")

@agent.tool_plain
def my_tool(query: str) -> ToolReturn:
    return ToolReturn(
        return_value=f"Processed: {query}",
        metadata={"request_id": "abc-123", "source": "my_tool"},
    )

# --- Local run: metadata is preserved ---
result = agent.run_sync("test")
for msg in result.all_messages():
    for part in msg.parts:
        if hasattr(part, "metadata"):
            print(f"Local — metadata: {part.metadata}")
# Output: Local — metadata: {'request_id': 'abc-123', 'source': 'my_tool'}

# --- TemporalAgent run: metadata is None ---
# (same agent wrapped with TemporalAgent, run inside a Temporal workflow)
# Output: Temporal — metadata: None
```

Logfire Trace
No response
Python, Pydantic AI & LLM client version
- Python: 3.13
- Pydantic AI: 1.68
- LLM provider SDK: `AsyncOpenAI` (openai)