
Conversation

@bendrucker (Contributor) commented Dec 19, 2025

Adds tool approval integration for the Vercel AI adapter, enabling human-in-the-loop workflows with AI SDK UI.

Summary

  • Add opt-in enable_tool_approval flag for simplified tool approval workflows
  • Emit tool-approval-request chunks for deferred tool approvals
  • Emit tool-output-denied chunks when user denies tool execution
  • Auto-extract approval responses from follow-up requests
  • Add approval field to tool parts with ToolApprovalRequested/ToolApprovalResponded states

Usage

import os

from fastapi import FastAPI, Request, Response  # FastAPI assumed for the web framework

from pydantic_ai import Agent
from pydantic_ai.tools import DeferredToolRequests
from pydantic_ai.ui.vercel_ai import VercelAIAdapter

app = FastAPI()

agent: Agent[None, str | DeferredToolRequests] = Agent(
    'openai:gpt-5',
    output_type=[str, DeferredToolRequests],
)

@agent.tool_plain(requires_approval=True)
def delete_file(path: str) -> str:
    """Delete a file from the filesystem."""
    os.remove(path)
    return f'Deleted {path}'

@app.post('/chat')
async def chat(request: Request) -> Response:
    adapter = await VercelAIAdapter.from_request(request, agent=agent, enable_tool_approval=True)
    return adapter.streaming_response(adapter.run_stream())

When enable_tool_approval=True, the adapter will:

  1. Emit tool-approval-request chunks when tools with requires_approval=True are called
  2. Automatically extract approval responses from follow-up requests
  3. Emit tool-output-denied chunks for rejected tools, passing the denial reason through as ToolDenied

AI SDK Tool Approval Protocol

Tool approval is an AI SDK v6 feature. Here's how the protocol works:

Protocol Flow

sequenceDiagram
    participant Model
    participant Server
    participant Client

    Model->>Server: tool call
    Server->>Client: tool-input-start
    Server->>Client: tool-input-available
    Server->>Client: tool-approval-request

    Note over Client: User approves/denies

    Client->>Server: approval response (next request)

    alt Approved
        Server->>Server: Execute tool
        Server->>Client: tool-output-available
    else Denied
        Server->>Client: tool-output-denied
        Server->>Model: denial info
    end

Chunk Types

tool-approval-request

Server → client:

{
  "type": "tool-approval-request",
  "approvalId": "<uuid>",
  "toolCallId": "<tool-call-id>"
}

tool-output-denied

Server → client, when denied:

{
  "type": "tool-output-denied",
  "toolCallId": "<tool-call-id>"
}
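As Python type sketches, the two payloads look roughly like this (illustrative only; the real chunk models are defined in Pydantic AI's Vercel AI event stream code):

```python
from typing import Literal, TypedDict

# Sketch of the wire shapes shown above; field names follow the protocol,
# but these TypedDicts are not the actual Pydantic AI chunk classes.

class ToolApprovalRequestChunk(TypedDict):
    type: Literal['tool-approval-request']
    approvalId: str    # UUID generated server-side
    toolCallId: str    # ties the approval to a specific tool call

class ToolOutputDeniedChunk(TypedDict):
    type: Literal['tool-output-denied']
    toolCallId: str

request_chunk: ToolApprovalRequestChunk = {
    'type': 'tool-approval-request',
    'approvalId': 'approval-1',  # placeholder value
    'toolCallId': 'call-1',
}
denied_chunk: ToolOutputDeniedChunk = {
    'type': 'tool-output-denied',
    'toolCallId': 'call-1',
}
```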

Why the Server Emits This Chunk

"Pydantic AI doesn't have an event for denials because denials never originate in Pydantic AI itself, they come from the user in the form of DeferredToolResults passed into agent.run" — #3760 comment

This is correct—the denial decision originates from the user via the ToolApprovalResponded field in tool parts. However, the AI SDK protocol expects the server to emit tool-output-denied for two reasons:

  1. The client needs confirmation that the tool lifecycle is complete. When the server receives a denial and emits tool-output-denied, the client can transition its UI from "awaiting result" to "denied".

  2. Just as the server emits tool-output-available when a tool executes successfully, it emits tool-output-denied when execution is skipped due to denial. This gives the client a consistent signal for each tool call's final state.

The flow is:

  1. Client sends denial via approval: { id, approved: false, reason } in the tool part
  2. Server extracts this from deferred_tool_results and passes it to the agent
  3. When processing the denied tool result, server emits tool-output-denied to the client
  4. Client updates UI to show the tool was denied

Tool Part Approval States

Tool parts include an approval field tracking the approval lifecycle:

Awaiting Response

{ "id": "<approval-id>" }

User Responded

{ "id": "<approval-id>", "approved": true/false, "reason": "optional" }

Two-Step Flow

Unlike regular tool calls, approved tools require two model interactions:

  1. Model requests tool → server returns with tool-approval-request → awaits user
  2. User decision sent → server executes (if approved) or informs model (if denied)

Changes

  • Add ToolApprovalRequested/ToolApprovalResponded types and approval field to tool UI parts
  • Add ToolApprovalRequestChunk and ToolOutputDeniedChunk response chunks to event stream
  • Add enable_tool_approval parameter to VercelAIAdapter.from_request() with auto-extraction of approval responses via deferred_tool_results cached property
  • Pass denial reason through as ToolDenied when provided
  • Add tool approval section to Vercel AI docs

Testing

  • Approval request/denied chunk emission, opt-out behavior, and approval extraction (approved, denied with reason, no approval)
  • Snapshot updates for approval field on tool parts

References

@bendrucker bendrucker force-pushed the vercel-ai-tool-approval branch 2 times, most recently from 099f07a to 1160591 on December 19, 2025 05:03
@bendrucker bendrucker marked this pull request as ready for review December 19, 2025 05:27
@bendrucker (Contributor Author)

Thanks for the quick reviews! Will get the test issue fixed and reply to your comments shortly.

@bendrucker bendrucker force-pushed the vercel-ai-tool-approval branch 2 times, most recently from 2deeb25 to 9eebd80 on December 20, 2025 11:03
Collaborator

I'll take a closer look at docs later, will focus primarily on the code for now

@github-actions (bot)

This PR is stale, and will be closed in 3 days if no reply is received.

@github-actions github-actions bot added the Stale label Dec 30, 2025
github-actions bot commented Jan 3, 2026

Closing this PR as it has been inactive for 10 days.

@github-actions github-actions bot closed this Jan 3, 2026
@bendrucker (Contributor Author)

Have been on a break for a bit, will get the comments addressed early this week if not today.

@DouweM DouweM removed the Stale label Jan 5, 2026
@DouweM (Collaborator) commented Jan 5, 2026

@bendrucker Thanks Ben, I should probably have disabled the stale bot before the holidays :)

@DouweM DouweM reopened this Jan 5, 2026
@DouweM DouweM added feature New feature request, or PR implementing a feature (enhancement) size: M Medium PR (101-500 weighted lines) labels Jan 6, 2026
@bendrucker (Contributor Author)

Thanks for your patience! Made the requested changes and provided a reference to relevant AI SDK test snapshots for the behavioral question.

@bendrucker (Contributor Author)

Squashing another upstream cause of a test flake: pytest-dev/pytest-xdist#1299

@DouweM (Collaborator) commented Jan 23, 2026

@bendrucker I merged some other changes for the Vercel AI event stream that had been in the works for a few weeks -- can you resolve the conflicts please?

@bendrucker (Contributor Author)

Resolved!

return cls(
agent=agent,
run_input=cls.build_run_input(await request.body()),
accept=request.headers.get('accept'),
Collaborator

We shouldn't be repeating super class implementation here; can we call the super method and then set enable_tool_approval directly on the returned instance?

We could also choose not to have this method override at all, and just tell the user to set the flag like that directly.

Collaborator

Note that if we override methods, we should also support this flag on dispatch_request, as that's the main way people use these UI adapters. If that's hard to do, perhaps it'd be better to always have the user build the adapter, set the flag, and then call streaming_response as in your example.

I also wonder if there's a way we can detect from the incoming API request whether the frontend is using AI SDK UI v6 or later? In that case we may not need the flag at all.

Contributor Author

I also wonder if there's a way we can detect from the incoming API request whether the frontend is using AI SDK UI v6 or later? In that case we may not need the flag at all.

Good call, I'll look into whether they have a protocol version or whether they do feature detection based on the requests.

Contributor Author

The extent of explicit versioning is the x-vercel-ai-ui-message-stream: v1 header. It comes from the server. There's no Accept header or similar negotiation from client to server.

The client is strict about what the server sends, only known chunk types, only known keys:

https://github.com/vercel/ai/blob/7839e79ddab8e095691a395853d97cbc6df85073/packages/ai/src/ui-message-stream/ui-message-chunks.ts#L16-L180

Any validation error against this schema interrupts streaming.

There's nothing we can detect in the client requests on the initial interaction. We'd send a tool approval request to the client and if it's v5 it just would never be able to provide an approval-responded part.
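That strictness can be mimicked with a minimal validator (a behavioral sketch, not the actual zod schema; the key sets below cover only the two approval chunks for illustration):

```python
# Minimal sketch of the AI SDK client's strict chunk validation: unknown
# chunk types or unknown keys are errors that would interrupt streaming.

KNOWN_CHUNKS: dict[str, set[str]] = {
    # Tiny illustrative subset of chunk types and their allowed keys.
    'tool-approval-request': {'type', 'approvalId', 'toolCallId'},
    'tool-output-denied': {'type', 'toolCallId'},
}

def validate_chunk(chunk: dict) -> None:
    allowed = KNOWN_CHUNKS.get(chunk.get('type'))
    if allowed is None:
        # A v5 client would fail here on a v6-only chunk type.
        raise ValueError(f'unknown chunk type: {chunk.get("type")!r}')
    extra = set(chunk) - allowed
    if extra:
        raise ValueError(f'unknown keys: {sorted(extra)}')
```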

@DouweM (Collaborator) commented Jan 30, 2026

@bendrucker Please have a look at the conflicts

- Inline extract_deferred_tool_results logic into from_request()
- Remove unit tests for private _extract_deferred_tool_results method
- Remove unit tests for private _denied_tool_ids property
- Update test_tool_output_denied_chunk_emission to use public interface
- Update test_tool_output_denied_chunk_emission to use from_request()
  with explicit type binding to test the full public interface
- Remove test_from_request_with_tool_approval_enabled (now redundant)
- Remove test_deferred_tool_results_fallback_from_instance (tested
  internal plumbing rather than observable behavior)
- Rename tool_approval to enable_tool_approval (add enable_ prefix)
- Make deferred_tool_results a cached_property instead of instance field
- Use explicit loops for type narrowing in approval extraction
- Simplify tests: remove mocking, use snapshots, move imports to top
- Add run_id=IsStr() to message snapshot (messages do have run_id)
- Apply ruff import sorting fixes
@bendrucker bendrucker force-pushed the vercel-ai-tool-approval branch from e8e8ccc to ac27619 on January 30, 2026 19:18
@devin-ai-integration (bot) left a comment

Devin Review found 1 potential issue.

View issue and 4 additional flags in Devin Review.


@DouweM (Collaborator) commented Jan 31, 2026

@bendrucker (Contributor Author)

Hmm I thought I remember addressing those but I will take a look later and see what happened to that work.

@bendrucker (Contributor Author)

Resolved:

  1. #3772 (comment) — We pass ToolDenied(message=...) when there's a denial reason, bool otherwise. DeferredToolApprovalResult was unused before because denial reasons weren't propagated.
  2. #3772 (comment) — deferred_tool_results and iter_tool_approvals use comprehensions.
  3. #3772 (comment) — Extracted iter_tool_approvals into request_types.py, shared by both adapter and event stream.
  4. #3772 (comment) — Removed the from_request override entirely. Users set enable_tool_approval on the instance directly.

@DouweM (Collaborator) commented Feb 2, 2026

@bendrucker Once #4166 is merged (should be shortly), please update to use the sdk version flag to auto-enable this instead of the new boolean. You can mention in the docstring for the field that it enables approval handling.

Replace enable_tool_approval boolean with sdk_version parameter:
- sdk_version=6 enables tool approval streaming (human-in-the-loop)
- sdk_version=5 (default) disables tool approval for backward compat

Updated adapter, event stream, tests, and docs accordingly.
@bendrucker bendrucker force-pushed the vercel-ai-tool-approval branch from 8637777 to c19330a on February 3, 2026 04:21
@bendrucker (Contributor Author)

Merged! Totally forgot that somewhere in this PR I investigated the frontend side and found strict schema handling via zod, with errors on unknown keys. Not sure that's sensible/necessary behavior on AI SDK's part, but given that reality, a version attribute is definitely the right move for Pydantic AI. Certainly simplifies things here.


Labels

awaiting author revision feature New feature request, or PR implementing a feature (enhancement) size: M Medium PR (101-500 weighted lines)
