
AGUIEventStream emits TOOL_CALL_ARGS after TOOL_CALL_END for same tool_call_id (stray delta), violating AG-UI protocol #4733

@Ali-Aleph-Alpha

Initial Checks

Description

Bug Description

AGUIEventStream.handle_tool_call_delta receives a ToolCallPartDelta after handle_builtin_tool_call_end has already been called for the same tool_call_id. This causes a TOOL_CALL_ARGS SSE event to be emitted after TOOL_CALL_END for the same tool call, which violates the AG-UI protocol and crashes the @ag-ui/client verifier.

Observed SSE Output (from curling the backend)

TOOL_CALL_START  (id: mcp_22b...)
TOOL_CALL_ARGS   (id: mcp_22b..., delta: '{"action":"call_tool",...,"tool_args":')
TOOL_CALL_ARGS   (id: mcp_22b..., delta: '}')
TOOL_CALL_END    (id: mcp_22b...)
TOOL_CALL_ARGS   (id: mcp_22b..., delta: '}')   <- stray delta AFTER end

The final TOOL_CALL_ARGS event arrives after TOOL_CALL_END, which is invalid per the AG-UI protocol spec.

Steps to Reproduce

  1. Use MCPServerTool with the OpenAI Responses API model in streaming mode via the AG-UI endpoint.
  2. Trigger a tool call that emits multiple argument deltas.
  3. Inspect the raw SSE stream (e.g. with curl).
  4. Observe a TOOL_CALL_ARGS event emitted after TOOL_CALL_END for the same tool_call_id.
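Step 4's inspection can be automated with a small parser instead of eyeballing the curl output. A minimal sketch, assuming the AG-UI SSE framing of `data: {...}` lines whose JSON payload carries `type` and `toolCallId` fields (adjust the field names if your transport differs):

```python
import json

def extract_tool_call_events(sse_text):
    """Pull (event_type, tool_call_id) pairs out of a raw SSE stream.

    Only TOOL_CALL_* events are kept, in arrival order, so a stray
    post-END delta shows up as a trailing TOOL_CALL_ARGS entry.
    """
    events = []
    for line in sse_text.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and SSE comments
        payload = json.loads(line[len("data:"):].strip())
        if payload.get("type", "").startswith("TOOL_CALL_"):
            events.append((payload["type"], payload.get("toolCallId")))
    return events
```

Feeding it the stream captured above yields `TOOL_CALL_ARGS` as the final entry for `mcp_22b...`, after its `TOOL_CALL_END`.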

Expected Behavior

No TOOL_CALL_ARGS events should be emitted for a tool_call_id after TOOL_CALL_END has been sent for that ID. The stream should be ordered: TOOL_CALL_START → one or more TOOL_CALL_ARGS → TOOL_CALL_END.
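The invariant can be checked mechanically, similar to what the @ag-ui/client verifier does. A minimal sketch (not the verifier's actual implementation) that walks an event list and reports ordering violations:

```python
def check_tool_call_ordering(events):
    """Validate the START -> ARGS* -> END ordering per tool_call_id.

    `events` is a list of (event_type, tool_call_id) tuples.
    Returns a list of violation messages; empty means the stream is valid.
    """
    violations = []
    state = {}  # tool_call_id -> "open" or "ended"
    for i, (event_type, tool_call_id) in enumerate(events):
        if event_type == "TOOL_CALL_START":
            if tool_call_id in state:
                violations.append(f"event {i}: duplicate START for {tool_call_id}")
            state[tool_call_id] = "open"
        elif event_type == "TOOL_CALL_ARGS":
            if state.get(tool_call_id) != "open":
                violations.append(f"event {i}: ARGS for {tool_call_id} outside START..END")
        elif event_type == "TOOL_CALL_END":
            if state.get(tool_call_id) != "open":
                violations.append(f"event {i}: END for {tool_call_id} without open START")
            else:
                state[tool_call_id] = "ended"
    return violations
```

Running this over the observed stream flags exactly one violation: the final ARGS delta after END.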

Actual Behavior

A stray TOOL_CALL_ARGS delta is emitted after TOOL_CALL_END, violating the AG-UI protocol and causing the @ag-ui/client verifier to crash/error.
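One possible fix direction is for the event stream to remember which tool_call_ids have already ended and silently drop (or log) any late delta. A hypothetical sketch of such a guard, not the actual AGUIEventStream code:

```python
class ToolCallDeltaGuard:
    """Track ended tool calls so late argument deltas can be suppressed.

    Illustrative only: AGUIEventStream would consult should_emit_args()
    in handle_tool_call_delta and call on_end() from
    handle_builtin_tool_call_end (hypothetical integration points).
    """

    def __init__(self):
        self._ended = set()

    def on_end(self, tool_call_id):
        # Mark this tool call as finished; no further ARGS may follow.
        self._ended.add(tool_call_id)

    def should_emit_args(self, tool_call_id):
        # Deltas for an already-ended tool call must be dropped.
        return tool_call_id not in self._ended
```

With such a guard in place, the stray post-END delta from the repro above would be filtered out before it reaches the SSE stream.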

Related Issues

Neither of the above covers this exact scenario (stray post-end delta).

Environment

  • Model: Internal OpenAI Responses API-compatible endpoint routing to custom inference (streaming mode); not the official OpenAI API
  • Tool type: MCPServerTool
  • Component: AGUIEventStream (AG-UI integration)

Minimal, Reproducible Example

Logfire Trace

No response

Python, Pydantic AI & LLM client version

  • Python: latest
  • Pydantic AI: latest
  • LLM provider SDK: openai (Responses API, streaming mode)

Labels

    bug