
fix: propagate thoughtSignature during streaming#268

Merged
ScottMansfield merged 2 commits into google:main from
RobinClowers:fix/propagate-thought-signature-to-concurrent-function-calls
Apr 16, 2026

Conversation

Contributor

@RobinClowers RobinClowers commented Apr 14, 2026

Link to Issue or Description of Change

Problem:
Gemini thinking models only provide thoughtSignature on the first functionCall part in a streaming turn. When the model issues multiple concurrent function calls, subsequent parts lack the signature. The API requires matching thoughtSignature values on all function response parts, causing 400 errors when the ADK sends responses back.

Solution:
Save the first thoughtSignature seen on a function call part and inject it into any subsequent function call parts that are missing it.
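The propagation logic can be sketched roughly like this (the `Part` interface and function name here are hypothetical illustrations, not the actual ADK source):

```typescript
// Minimal sketch of the fix, assuming a simplified Part shape.
// Streamed responses from a thinking model carry thoughtSignature only on
// the first functionCall part; later concurrent calls arrive without it.
interface Part {
  functionCall?: { name: string; args?: Record<string, unknown> };
  text?: string;
  thoughtSignature?: string;
}

function propagateThoughtSignature(parts: Part[]): Part[] {
  let firstSignature: string | undefined;
  return parts.map((part) => {
    // Only functionCall parts participate; text parts pass through untouched.
    if (!part.functionCall) return part;
    if (part.thoughtSignature) {
      // Remember the first signature the model emitted.
      firstSignature ??= part.thoughtSignature;
      return part;
    }
    // Inject the saved signature into later calls that are missing it.
    return firstSignature
      ? { ...part, thoughtSignature: firstSignature }
      : part;
  });
}
```

With this in place, every function call in a multi-call turn carries the same signature, so the matching function response parts satisfy the API's requirement.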

Testing Plan

We are using this patch (applied at runtime) in ADK in production today.

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.
 Test Files  104 passed | 11 skipped (115)
      Tests  1093 passed | 21 skipped (1114)

Manual End-to-End (E2E) Tests:

Setup: Create an agent with 2+ simple tools and a prompt that encourages parallel calls:

import { LlmAgent, FunctionTool, InMemoryRunner } from '@google/adk';
import { createUserContent } from '@google/genai';
import { z } from 'zod';

const getWeather = new FunctionTool({
  name: 'get_weather',
  description: 'Get current weather for a city',
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ city }) => ({ city, temp: '72F', condition: 'sunny' }),
});

const getTime = new FunctionTool({
  name: 'get_time',
  description: 'Get current time in a city',
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ city }) => ({ city, time: '2:30 PM' }),
});

const agent = new LlmAgent({
  name: 'test_agent',
  model: 'gemini-3-flash-preview', // must be a thinking model
  instruction: 'You help with travel planning. Use tools in parallel when possible.',
  tools: [getWeather, getTime],
});

const runner = new InMemoryRunner({ agent, appName: 'test' });
const session = await runner.sessionService.createSession({
  appName: 'test',
  userId: 'test',
});

for await (const event of runner.runAsync({
  userId: 'test',
  sessionId: session.id,
  newMessage: createUserContent(
    'What is the weather and time in Seattle, Tokyo, and London?'
  ),
})) {
  console.log(event.author, JSON.stringify(event.content?.parts?.map(p =>
    p.functionCall ? `call:${p.functionCall.name}(sig:${!!p.thoughtSignature})` :
    p.functionResponse ? `resp:${p.functionResponse.name}` :
    p.text ? `text:${p.text.slice(0, 60)}` : '?'
  )));
}

What to look for:

  • Without the fix: The run fails with a 400 error on the second model call (after function responses are sent back). You'll see the function calls go out, the responses come back, then a crash.
  • With the fix: The run completes. In the logged output, all function call parts should show sig:true. The model produces a final text response summarizing weather and time.

Key conditions to reproduce:

  • Must use a thinking model (gemini-3-flash-preview, gemini-3-pro-preview)
  • Must use streaming (which is the default for Runner)
  • The prompt must trigger 2+ concurrent function calls in a single turn — asking about multiple cities with multiple tools is a reliable way to get this
  • It's somewhat nondeterministic — the model might choose sequential calls instead. The multi-city prompt usually works but you may need to retry

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

@google-cla

google-cla Bot commented Apr 14, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@ScottMansfield
Member

Thank you for the patch! I will take a look. Meanwhile, the CLA is needed for any contributions, so please look into filling it out. It might be a company one for you if this is on behalf of your company.

https://cla.developers.google.com/

@ScottMansfield ScottMansfield self-requested a review April 14, 2026 23:00
@RobinClowers RobinClowers force-pushed the fix/propagate-thought-signature-to-concurrent-function-calls branch from ccb621b to 5eb0128 Compare April 16, 2026 16:19
@RobinClowers
Contributor Author

@ScottMansfield my org already had one, I amended my commit to use my work email.

Review comment thread on core/src/models/google_llm.ts

fix: propagate thoughtSignature during streaming

Gemini thinking models only provide `thoughtSignature` on the first
`functionCall` part in a streaming turn. When the model issues multiple
concurrent function calls, subsequent parts lack the signature. The API
requires matching `thoughtSignature` values on all function response
parts, causing 400 errors when the ADK sends responses back.

Save the first `thoughtSignature` seen on a function call part and
inject it into any subsequent function call parts that are missing it.
@RobinClowers RobinClowers force-pushed the fix/propagate-thought-signature-to-concurrent-function-calls branch from 5eb0128 to 4d359f2 Compare April 16, 2026 17:46
@ScottMansfield ScottMansfield merged commit 8cd6360 into google:main Apr 16, 2026
7 checks passed
@ScottMansfield
Member

all done, thank you!

@kalenkevich kalenkevich mentioned this pull request Apr 16, 2026