Description
When using a "SequentialAgent" to orchestrate multiple LlmAgents, the sub-agents completely ignore their local instruction configurations (both static strings and dynamic functions).
This happens because there is an overly strict validation check inside the framework that aborts the instruction-building process if the rootAgent is not an LlmAgent.
Because SequentialAgent extends BaseAgent and not LlmAgent, all sub-agents running inside it fail this check and their instructions (like XML prompts, schema rules, or session state injection) are never appended to the LLM request. The models end up receiving empty system instructions.
Steps to Reproduce:
- Create dynamic instructions for an LlmAgent where state is needed:
import { ReadonlyContext } from '@google/adk';
export function getDynamicInstruction(context: ReadonlyContext): string {
const dataSets = context.state?.get('dataSets') || [];
const prompt = `
<agent_config>
<role>Testing.</role>
<context>List of DataSets: ${dataSets}</context>
<output_format>
Answer ONLY with this JSON:
{
"code": string, "name": string, "description": string
}
</output_format>
</agent_config>
`;
return prompt;
}
- Create an LlmAgent linking the instruction function:
export const datasetAgent = new LlmAgent({
model: config.model,
name: 'datasetAgent',
description: agentDescription,
instruction: getDynamicInstruction, // <-- This is completely ignored
// ... callbacks and schemas
});
- Wrap the LlmAgent inside a SequentialAgent:
export const analysisPipelineAgent = new SequentialAgent({
name: 'analysisPipelineAgent',
subAgents: [datasetAgent, analysisAgent]
});
- Run the session using analysisPipelineAgent as the root agent. Notice that getDynamicInstruction is never executed, the dataSets state is never resolved, and the model outputs arbitrary text that ignores the JSON rules.
Root Cause Analysis: The bug is explicitly located at core/src/agents/llm_agent.ts inside InstructionsLlmRequestProcessor.runAsync():
const agent = invocationContext.agent;
if (!isLlmAgent(agent) || !isLlmAgent(agent.rootAgent)) {
return; // <-- Immediate return. Aborts before evaluating local `agent.instruction`
}
Since agent is datasetAgent (isLlmAgent = true) but agent.rootAgent is analysisPipelineAgent (a SequentialAgent), the second check fails.
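The failing guard can be reproduced in isolation. The sketch below is a minimal standalone model of the check; the interfaces and the isLlmAgent predicate are simplified stand-ins for illustration, not the framework's actual classes:

```typescript
// Stand-in types (not the real @google/adk classes), just enough to model the guard.
interface BaseAgent { name: string; rootAgent?: BaseAgent; }
interface LlmAgentLike extends BaseAgent { kind: 'llm'; instruction?: string; }
interface SequentialAgentLike extends BaseAgent { kind: 'sequential'; subAgents: BaseAgent[]; }

function isLlmAgent(a: BaseAgent | undefined): a is LlmAgentLike {
  return !!a && (a as LlmAgentLike).kind === 'llm';
}

// The agent tree from the reproduction steps: an LlmAgent nested in a SequentialAgent.
const pipeline: SequentialAgentLike = { kind: 'sequential', name: 'analysisPipelineAgent', subAgents: [] };
const datasetAgent: LlmAgentLike = {
  kind: 'llm',
  name: 'datasetAgent',
  instruction: '<agent_config>…</agent_config>',
  rootAgent: pipeline,
};
pipeline.subAgents.push(datasetAgent);

// Mirrors the buggy guard in InstructionsLlmRequestProcessor.runAsync():
function instructionsAppended(agent: BaseAgent): boolean {
  if (!isLlmAgent(agent) || !isLlmAgent(agent.rootAgent)) {
    return false; // aborts before ever reading agent.instruction
  }
  return !!agent.instruction;
}

console.log(instructionsAppended(datasetAgent)); // false — the local instruction is silently dropped
```

Even though datasetAgent passes the first check, its rootAgent fails the second, so the local instruction is never reached.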
There is a TODO comment nearby (TODO - b/425992518: unexpected and buggy for performance. Global instruction should be explicitly scoped.), which suggests the team is aware of rootAgent scoping issues, but this strict check has a severe collateral effect: it completely disables valid local instructions for nested LLM agents.
Expected Behavior: The agent's local instructions (Step 2 in the code) should be evaluated and appended normally, even if the rootAgent (such as a session router or SequentialAgent) is not an LlmAgent.
Proposed Fix: Remove the strict rootAgent conditional at the top, and only check isLlmAgent(rootAgent) when actually handling globalInstruction:
const agent = invocationContext.agent;
if (!isLlmAgent(agent)) {
return;
}
const rootAgent = agent.rootAgent;
// Step 1: Appends global instructions if set by RootAgent.
if (isLlmAgent(rootAgent) && rootAgent.globalInstruction) {
...
}
// Step 2: Appends agent local instructions if set.
if (agent.instruction) {
// The dynamic canonicalInstruction configuration will now be evaluated normally
...
}