
ChatAnthropic unable to determine when server_tool_calls are final while streaming #9980

@aaronjwhiteside-appdirect

Checked other resources

  • This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { ChatAnthropic } from "@langchain/anthropic";
import { AIMessage, AIMessageChunk, ContentBlock, HumanMessage, SystemMessage } from "@langchain/core/messages";
import dotenv from "dotenv";

dotenv.config({ path: "xxxxxxxx/.env" });

const main = async () => {
  const model = new ChatAnthropic({
    model: "claude-sonnet-4-5",
    apiKey: process.env.ANTHROPIC_API_KEY!,
    clientOptions: {
      fetch: async (url: string | RequestInfo | URL, init?: RequestInit) => {
        console.log("request", init?.body);
        return fetch(url, init);
      },
    },
    thinking: {
      type: "enabled",
      budget_tokens: 4096,
    },
  });
  const modelWithTools = model.bindTools([
    {
      type: "web_search_20250305",
      name: "web_search",
      max_uses: 5,
    },
  ]);

  const messages = [
    new SystemMessage({
      content: "You are a helpful assistant that answers questions about the world.",
    }),
    new HumanMessage({
      contentBlocks: [{ type: "text", text: "latest news in Melbourne Australia" } satisfies ContentBlock.Text],
    }),
  ];

  const stream = await modelWithTools.stream(messages);

  let chunks: AIMessageChunk | null = null;
  for await (const chunk of stream) {
    console.log("chunk", chunk);
    chunks = chunks ? chunks.concat(chunk) : chunk;

    const serverToolCalls = chunks.contentBlocks.filter(
      (block): block is ContentBlock.Tools.ServerToolCall => block.type === "server_tool_call",
    );
    const serverToolResults = chunks.contentBlocks.filter(
      (block): block is ContentBlock.Tools.ServerToolCallResult => block.type === "server_tool_call_result",
    );
    if (serverToolCalls.length > 0) {
      console.log("serverToolCalls", serverToolCalls);
    }
    if (serverToolResults.length > 0) {
      console.log("serverToolResults", serverToolResults);
    }
  }
  const response = new AIMessage(chunks!);

  console.log(response);
};

void main();

Error Message and Stack Trace (if applicable)

chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "type": "server_tool_use",
      "id": "srvtoolu_01JrTTjVAqGPocbhUibQjXAK",
      "name": "web_search",
      "input": ""
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [],
  "invalid_tool_calls": []
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": ""
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "{}",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "{\"q",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": "{\"q"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "{\"q",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"q' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "ue",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": "ue"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "ue",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"que' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "ry\": \"Melb",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": "ry\": \"Melb"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "ry\": \"Melb",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"query": "Melb' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "ourne",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": "ourne"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "ourne",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"query": "Melbourne' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": " Austr",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": " Austr"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "Austr",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"query": "Melbourne Austr' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "al",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": "al"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "al",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"query": "Melbourne Austral' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "ia news to",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": "ia news to"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "ia news to",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"query": "Melbourne Australia news to' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 1,
      "input": "day\"}",
      "type": "input_json_delta"
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [
    {
      "index": 1,
      "args": "day\"}"
    }
  ],
  "invalid_tool_calls": [
    {
      "name": "",
      "args": "day\"}",
      "error": "Malformed args.",
      "type": "invalid_tool_call"
    }
  ]
}
serverToolCalls [
  {
    id: 'srvtoolu_01JrTTjVAqGPocbhUibQjXAK',
    type: 'server_tool_call',
    name: 'web_search',
    args: { query: '{"query": "Melbourne Australia news today"}' }
  }
]
chunk AIMessageChunk {
  "content": [
    {
      "index": 2,
      "type": "web_search_tool_result",
      "tool_use_id": "srvtoolu_01JrTTjVAqGPocbhUibQjXAK",
      "content": [
        "[Object]",
        "[Object]",
        "[Object]",
        "[Object]",
        "[Object]",
        "[Object]",
        "[Object]",
        "[Object]",
        "[Object]",
        "[Object]"
      ]
    }
  ],
  "additional_kwargs": {},
  "response_metadata": {
    "model_provider": "anthropic"
  },
  "tool_calls": [],
  "tool_call_chunks": [],
  "invalid_tool_calls": []
}

Description

Continuation of the Slack conversation: https://langchaincommunity.slack.com/archives/C07EHF3HC87/p1770360994478949

There are several issues here:

  1. There appears to be some normalization of web_search server-side tool calls: the raw tool input string is placed under args: { query: ... }.

I recommend that any web_search normalization happen via a dedicated ContentBlock rather than by silently modifying existing ServerToolCall blocks.

  2. Anthropic streams tool call arguments incrementally, whereas for OpenAI they appear in the content block fully formed.

I need a way to know when an Anthropic server-side tool call is complete and final, so I can emit it further upstream.

Waiting until the corresponding web_search_tool_result arrives means delaying the display of the tool call until the response is available, which isn't ideal UX-wise: users see nothing while waiting, and then suddenly see the call and the response at the same time.

My current workaround is to try to parse args.query and, if it is valid JSON, emit the server-side tool call.
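
For reference, a minimal sketch of that workaround (not my production code). It assumes the accumulated server_tool_call block exposes the raw, possibly partial JSON input string under args.query, as seen in the streamed output above; maybeEmitServerToolCall and emitUpstream are hypothetical names standing in for my own plumbing.

const emittedToolCallIds = new Set<string>();

const maybeEmitServerToolCall = (
  block: { id?: string; name: string; args?: Record<string, unknown> },
  emitUpstream: (call: { id: string; name: string; args: unknown }) => void,
): void => {
  // The accumulated (possibly partial) JSON input string ends up under
  // args.query, as shown in the streamed chunks above.
  const raw = block.args?.query;
  if (typeof raw !== "string" || block.id === undefined || emittedToolCallIds.has(block.id)) {
    return;
  }
  try {
    // If the accumulated input parses as JSON, streaming for this tool call
    // is almost certainly finished, so it is safe to surface it to the UI.
    const parsedArgs: unknown = JSON.parse(raw);
    emittedToolCallIds.add(block.id);
    emitUpstream({ id: block.id, name: block.name, args: parsedArgs });
  } catch {
    // Still a partial fragment; wait for more chunks.
  }
};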

Ideally, I would like LangChain to handle this by populating args with the raw server-side tool input string while streaming, and then, once that string is valid JSON (i.e. streaming for that tool call is complete), replacing args with the parsed JSON object of the tool call's input.

Streaming:

    args: '{"query": "Melbourne Austr" }'
    args: '{"query": "Melbourne Australia" }'
    args: '{"query": "Melbourne Australia current" }'
    args: '{"query": "Melbourne Australia current news" }'

Streaming finished:

    args: { query: "Melbourne Australia current news today" }
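
Under that proposal, a consumer could detect finality with a simple type check. This is a hypothetical sketch assuming args stays a raw string while streaming and becomes the parsed object only once complete:

// Hypothetical check under the proposed behaviour: args is the accumulating
// raw string while the tool call is still streaming, and becomes the parsed
// object only once the input JSON is complete and final.
const isServerToolCallFinal = (block: { args: unknown }): boolean =>
  typeof block.args === "object" && block.args !== null;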

System Info

langchain@1.2.18
@langchain/openai@1.2.5
@langchain/anthropic@1.3.16
