| title | Subagents |
|---|
In the subagents architecture, a central main agent (often referred to as a supervisor) coordinates subagents by calling them as tools. The main agent decides which subagent to invoke, what input to provide, and how to combine results. Subagents are stateless—they don't remember past interactions; all conversation memory is maintained by the main agent. This provides context isolation: each subagent invocation works in a clean context window, preventing context bloat in the main conversation.
graph LR
A[User] --> B[Main Agent]
B --> C[Subagent A]
B --> D[Subagent B]
B --> E[Subagent C]
C --> B
D --> B
E --> B
B --> F[User response]
classDef trigger fill:#DCFCE7,stroke:#16A34A,stroke-width:2px,color:#14532D
classDef process fill:#DBEAFE,stroke:#2563EB,stroke-width:2px,color:#1E3A8A
class A,F trigger
class B,C,D,E process
- Centralized control: All routing passes through the main agent
- No direct user interaction: Subagents return results to the main agent, not the user (though you can use interrupts within a subagent to allow user interaction)
- Subagents via tools: Subagents are invoked via tools
- Parallel execution: The main agent can invoke multiple subagents in a single turn
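The parallel-execution point above doesn't depend on any framework machinery: when the model emits several subagent tool calls in one turn, the runtime can await them concurrently instead of serially. A minimal sketch, where `calendar_subagent` and `email_subagent` are hypothetical stand-ins for real subagent invocations:

```python
import asyncio

# Hypothetical async stand-ins for subagents; a real implementation would
# call subagent.ainvoke(...) on agents built with create_agent(...).
async def calendar_subagent(query: str) -> str:
    await asyncio.sleep(0.01)  # simulate I/O-bound agent work
    return f"calendar: {query}"

async def email_subagent(query: str) -> str:
    await asyncio.sleep(0.01)
    return f"email: {query}"

async def main_agent_turn() -> list[str]:
    # Two independent subagent calls from one turn, awaited concurrently.
    return await asyncio.gather(
        calendar_subagent("free slots tomorrow"),
        email_subagent("unread from Alice"),
    )

results = asyncio.run(main_agent_turn())
```

Concurrent awaiting only helps when the subagent tasks are independent; ordered dependencies still require sequential calls.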
Use the subagents pattern when you have multiple distinct domains (e.g., calendar, email, CRM, database), subagents don't need to converse directly with users, or you want centralized workflow control. For simpler cases with just a few tools, use a single agent.
**Need user interaction within a subagent?** While subagents typically return results to the main agent rather than conversing directly with users, you can use [interrupts](/oss/langgraph/interrupts#pause-using-interrupt) within a subagent to pause execution and gather user input. This is useful when a subagent needs clarification or approval before proceeding. The main agent remains the orchestrator, but the subagent can collect information from the user mid-task.

The core mechanism wraps a subagent as a tool that the main agent can call:
:::python
from langchain.tools import tool
from langchain.agents import create_agent
# Create a subagent
subagent = create_agent(model="google_genai:gemini-3.1-pro-preview", tools=[...])
# Wrap it as a tool
@tool("research", description="Research a topic and return findings")
def call_research_agent(query: str):
    result = subagent.invoke({"messages": [{"role": "user", "content": query}]})
    return result["messages"][-1].content
# Main agent with subagent as a tool
main_agent = create_agent(model="google_genai:gemini-3.1-pro-preview", tools=[call_research_agent])
:::

:::js
import { createAgent, tool } from "langchain";
import { z } from "zod";
// Create a subagent
const subagent = createAgent({ model: "google_genai:gemini-3.1-pro-preview", tools: [...] });
// Wrap it as a tool
const callResearchAgent = tool(
  async ({ query }) => {
    const result = await subagent.invoke({
      messages: [{ role: "user", content: query }]
    });
    return result.messages.at(-1)?.content;
  },
  {
    name: "research",
    description: "Research a topic and return findings",
    schema: z.object({ query: z.string() })
  }
);
// Main agent with subagent as a tool
const mainAgent = createAgent({ model: "google_genai:gemini-3.1-pro-preview", tools: [callResearchAgent] });
:::
<Card title="Tutorial: Build a personal assistant with subagents" icon="sitemap" href="/oss/langchain/multi-agent/subagents-personal-assistant" arrow cta="Learn more">
Learn how to build a personal assistant using the subagents pattern, where a central main agent (supervisor) coordinates specialized worker agents.
</Card>
When implementing the subagents pattern, you'll make several key design choices. This table summarizes the options—each is covered in detail in the sections below.
| Decision | Options |
|---|---|
| Sync vs. async | Sync (blocking) vs. async (background) |
| Tool patterns | Tool per agent vs. single dispatch tool |
| Subagent specs | System prompt vs. enum constraint vs. tool-based discovery (single dispatch tool only) |
| Subagent inputs | Query only vs. full context |
| Subagent outputs | Subagent result vs. full conversation history |
Subagent execution can be synchronous (blocking) or asynchronous (background). Your choice depends on whether the main agent needs the result to continue.
| Mode | Main agent behavior | Best for | Tradeoff |
|---|---|---|---|
| Sync | Waits for subagent to complete | Main agent needs result to continue | Simple, but blocks the conversation |
| Async | Continues while subagent runs in background | Independent tasks, user shouldn't wait | Responsive, but more complex |
By default, subagent calls are synchronous: the main agent waits for each subagent to complete before continuing. Use sync when the main agent's next action depends on the subagent's result.
sequenceDiagram
participant User
participant Main Agent
participant Research Subagent
User->>Main Agent: "What's the weather in Tokyo?"
Main Agent->>Research Subagent: research("Tokyo weather")
Note over Main Agent: Waiting for result...
Research Subagent-->>Main Agent: "Currently 72°F, sunny"
Main Agent-->>User: "It's 72°F and sunny in Tokyo"
When to use sync:
- Main agent needs the subagent's result to formulate its response
- Tasks have order dependencies (e.g., fetch data → analyze → respond)
- Subagent failures should block the main agent's response
Tradeoffs:
- Simple implementation—just call and wait
- User sees no response until all subagents complete
- Long-running tasks freeze the conversation
Use asynchronous execution when the subagent's work is independent—the main agent doesn't need the result to continue conversing with the user. The main agent kicks off a background job and remains responsive.
sequenceDiagram
participant User
participant Main Agent
participant Job System
participant Contract Reviewer
User->>Main Agent: "Review this M&A contract"
Main Agent->>Job System: run_agent("legal_reviewer", task)
Job System->>Contract Reviewer: Start agent
Job System-->>Main Agent: job_id: "job_123"
Main Agent-->>User: "Started review (job_123)"
Note over Contract Reviewer: Reviewing 150+ pages...
User->>Main Agent: "What's the status?"
Main Agent->>Job System: check_status(job_id)
Job System-->>Main Agent: "running"
Main Agent-->>User: "Still reviewing contract..."
Note over Contract Reviewer: Review completes
User->>Main Agent: "Is it done yet?"
Main Agent->>Job System: check_status(job_id)
Job System-->>Main Agent: "completed"
Main Agent->>Job System: get_result(job_id)
Job System-->>Main Agent: Contract analysis
Main Agent-->>User: "Review complete: [findings]"
When to use async:
- Subagent work is independent of the main conversation flow
- Users should be able to continue chatting while work happens
- You want to run multiple independent tasks in parallel
Three-tool pattern:
- Start job: Kicks off the background task, returns a job ID
- Check status: Returns current state (pending, running, completed, failed)
- Get result: Retrieves the completed result
Handling job completion: When a job finishes, your application needs to notify the user. One approach: surface a notification that, when clicked, sends a HumanMessage like "Check job_123 and summarize the results."
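The three-tool pattern can be sketched in-process with a thread pool standing in for a real job system. This is a minimal illustration under stated assumptions: `start_job`, `check_status`, `get_result`, and the `run_review` placeholder are hypothetical names, and a production setup would use a durable task queue and wrap each function as a tool the main agent can call:

```python
import uuid
from concurrent.futures import Future, ThreadPoolExecutor

# In-memory job store; a real system would persist jobs durably.
_executor = ThreadPoolExecutor(max_workers=4)
_jobs: dict[str, Future] = {}

def run_review(task: str) -> str:
    # Placeholder for a long-running subagent invocation,
    # e.g. subagent.invoke({"messages": [...]}).
    return f"Analysis of: {task}"

def start_job(task: str) -> str:
    """Kick off the background task and return a job ID."""
    job_id = f"job_{uuid.uuid4().hex[:8]}"
    _jobs[job_id] = _executor.submit(run_review, task)
    return job_id

def check_status(job_id: str) -> str:
    """Return the job's current state."""
    return "completed" if _jobs[job_id].done() else "running"

def get_result(job_id: str) -> str:
    """Retrieve the completed result (blocks if still running)."""
    return _jobs[job_id].result()
```

Because each function returns a short string, the main agent can relay job IDs and statuses to the user verbatim while the heavy work runs off the conversation path.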
There are two main ways to expose subagents as tools:
| Pattern | Best for | Trade-off |
|---|---|---|
| Tool per agent | Fine-grained control over each subagent's input/output | More setup, but more customization |
| Single dispatch tool | Many agents, distributed teams, convention over configuration | Simpler composition, less per-agent customization |
graph LR
A[User] --> B[Main Agent]
B --> C[Subagent A]
B --> D[Subagent B]
B --> E[Subagent C]
C --> B
D --> B
E --> B
B --> F[User response]
classDef trigger fill:#DCFCE7,stroke:#16A34A,stroke-width:2px,color:#14532D
classDef process fill:#DBEAFE,stroke:#2563EB,stroke-width:2px,color:#1E3A8A
class A,F trigger
class B,C,D,E process
The key idea is wrapping subagents as tools that the main agent can call:
:::python
from langchain.tools import tool
from langchain.agents import create_agent
# Create a sub-agent
subagent = create_agent(model="...", tools=[...]) # [!code highlight]
# Wrap it as a tool # [!code highlight]
@tool("subagent_name", description="subagent_description") # [!code highlight]
def call_subagent(query: str): # [!code highlight]
    result = subagent.invoke({"messages": [{"role": "user", "content": query}]})
    return result["messages"][-1].content
# Main agent with subagent as a tool # [!code highlight]
main_agent = create_agent(model="...", tools=[call_subagent]) # [!code highlight]
:::

:::js
import { createAgent, tool } from "langchain";
import * as z from "zod";
// Create a sub-agent
const subagent = createAgent({...}); // [!code highlight]
// Wrap it as a tool // [!code highlight]
const callSubagent = tool( // [!code highlight]
  async ({ query }) => { // [!code highlight]
    const result = await subagent.invoke({
      messages: [{ role: "user", content: query }]
    });
    return result.messages.at(-1)?.text;
  },
  {
    name: "subagent_name",
    description: "subagent_description",
    schema: z.object({
      query: z.string().describe("The query to send to subagent")
    })
  }
);
// Main agent with subagent as a tool // [!code highlight]
const mainAgent = createAgent({ model, tools: [callSubagent] }); // [!code highlight]
:::
The main agent invokes the subagent tool when it decides the task matches the subagent's description, receives the result, and continues orchestration. See Context engineering for fine-grained control.
An alternative approach uses a single parameterized tool to invoke ephemeral sub-agents for independent tasks. Unlike the tool per agent approach where each sub-agent is wrapped as a separate tool, this uses a convention-based approach with a single task tool: the task description is passed as a human message to the sub-agent, and the sub-agent's final message is returned as the tool result.
Use this approach when you want to distribute agent development across multiple teams, need to isolate complex tasks into separate context windows, need a scalable way to add new agents without modifying the coordinator, or prefer convention over customization. This approach trades flexibility in context engineering for simplicity in agent composition and strong context isolation.
graph LR
A[User] --> B[Main Agent]
B --> C{task<br/>agent_name, description}
C -->|research| D[Research Agent]
C -->|writer| E[Writer Agent]
C -->|reviewer| F[Reviewer Agent]
D --> C
E --> C
F --> C
C --> B
B --> G[User response]
classDef trigger fill:#DCFCE7,stroke:#16A34A,stroke-width:2px,color:#14532D
classDef process fill:#DBEAFE,stroke:#2563EB,stroke-width:2px,color:#1E3A8A
classDef decision fill:#FEF3C7,stroke:#F59E0B,stroke-width:2px,color:#78350F
class A,G trigger
class B,D,E,F process
class C decision
Key characteristics:
- Single task tool: One parameterized tool that can invoke any registered sub-agent by name
- Convention-based invocation: Agent selected by name, task passed as human message, final message returned as tool result
- Team distribution: Different teams can develop and deploy agents independently
- Agent discovery: Sub-agents can be discovered via system prompt (listing available agents) or through progressive disclosure (loading agent information on-demand via tools)
:::python
from langchain.tools import tool
from langchain.agents import create_agent
# Sub-agents developed by different teams
research_agent = create_agent(
    model="gpt-5.4",
    system_prompt="You are a research specialist..."
)
writer_agent = create_agent(
    model="gpt-5.4",
    system_prompt="You are a writing specialist..."
)

# Registry of available sub-agents
SUBAGENTS = {
    "research": research_agent,
    "writer": writer_agent,
}

@tool
def task(
    agent_name: str,
    description: str
) -> str:
    """Launch an ephemeral subagent for a task.

    Available agents:
    - research: Research and fact-finding
    - writer: Content creation and editing
    """
    agent = SUBAGENTS[agent_name]
    result = agent.invoke({
        "messages": [
            {"role": "user", "content": description}
        ]
    })
    return result["messages"][-1].content

# Main coordinator agent
main_agent = create_agent(
    model="gpt-5.4",
    tools=[task],
    system_prompt=(
        "You coordinate specialized sub-agents. "
        "Available: research (fact-finding), "
        "writer (content creation). "
        "Use the task tool to delegate work."
    ),
)
:::

:::js
import { tool, createAgent } from "langchain";
import * as z from "zod";
// Sub-agents developed by different teams
const researchAgent = createAgent({
  model: "gpt-5.4",
  prompt: "You are a research specialist...",
});
const writerAgent = createAgent({
  model: "gpt-5.4",
  prompt: "You are a writing specialist...",
});

// Registry of available sub-agents
const SUBAGENTS = {
  research: researchAgent,
  writer: writerAgent,
};

const task = tool(
  async ({ agentName, description }) => {
    const agent = SUBAGENTS[agentName];
    const result = await agent.invoke({
      messages: [
        { role: "user", content: description }
      ],
    });
    return result.messages.at(-1)?.content;
  },
  {
    name: "task",
    description: `Launch an ephemeral subagent.

Available agents:
- research: Research and fact-finding
- writer: Content creation and editing`,
    schema: z.object({
      agentName: z
        .string()
        .describe("Name of agent to invoke"),
      description: z
        .string()
        .describe("Task description"),
    }),
  }
);

// Main coordinator agent
const mainAgent = createAgent({
  model: "gpt-5.4",
  tools: [task],
  prompt:
    "You coordinate specialized sub-agents. " +
    "Available: research (fact-finding), " +
    "writer (content creation). " +
    "Use the task tool to delegate work.",
});
:::
Control how context flows between the main agent and its subagents:
| Category | Purpose | Impacts |
|---|---|---|
| Subagent specs | Ensure subagents are invoked when they should be | Main agent routing decisions |
| Subagent inputs | Ensure subagents can execute well with optimized context | Subagent performance |
| Subagent outputs | Ensure the supervisor can act on subagent results | Main agent performance |
See also our comprehensive guide on context engineering for agents.
The names and descriptions associated with subagents are the primary way the main agent knows which subagents to invoke. These are prompting levers—choose them carefully.
- Name: How the main agent refers to the sub-agent. Keep it clear and action-oriented (e.g., `research_agent`, `code_reviewer`).
- Description: What the main agent knows about the sub-agent's capabilities. Be specific about what tasks it handles and when to use it.
For the single dispatch tool design, you must additionally provide the main agent with information about the subagents it can invoke. You can provide this information in different ways based on the number of agents and whether your registry is static or dynamic:
| Method | Best for | Tradeoff |
|---|---|---|
| System prompt enumeration | Small, static agent lists (< 10 agents) | Simple, but requires prompt updates when agents change |
| Enum constraint | Small, static agent lists (< 10 agents) | Type-safe and explicit, but requires code changes when agents change |
| Tool-based discovery | Large or dynamic agent registries | Flexible and scalable, but adds complexity |
List available agents directly in the main agent's system prompt. The main agent sees the list of agents and their descriptions as part of its instructions.
When to use:
- You have a small, fixed set of agents (< 10)
- Agent registry rarely changes
- You want the simplest implementation
Example:
main_agent = create_agent(
    model="...",
    tools=[task],
    system_prompt=(
        "You coordinate specialized sub-agents. "
        "Available agents:\n"
        "- research: Research and fact-finding\n"
        "- writer: Content creation and editing\n"
        "- reviewer: Code and document review\n"
        "Use the task tool to delegate work."
    ),
)

Add an enum constraint to the `agent_name` parameter in your dispatch tool. This provides type safety and makes available agents explicit in the tool schema.
When to use:
- You have a small, fixed set of agents (< 10)
- You want type safety and explicit agent names
- You prefer schema-based validation over prompt-based guidance
Example:
from enum import Enum

class AgentName(str, Enum):
    RESEARCH = "research"
    WRITER = "writer"
    REVIEWER = "reviewer"

@tool
def task(
    agent_name: AgentName,  # Enum constraint
    description: str
) -> str:
    """Launch an ephemeral subagent for a task."""
    # ...

Provide a separate tool (e.g., `list_agents` or `search_agents`) that the main agent can call to discover available agents on-demand. This enables progressive disclosure and supports dynamic registries.
When to use:
- You have many agents (> 10) or a growing registry
- Agent registry changes frequently or is dynamic
- You want to reduce prompt size and token usage
- Different teams manage different agents independently
Example:
@tool
def list_agents(query: str = "") -> str:
    """List available subagents, optionally filtered by query."""
    agents = search_agent_registry(query)
    return format_agent_list(agents)

@tool
def task(agent_name: str, description: str) -> str:
    """Launch an ephemeral subagent for a task."""
    # ...

main_agent = create_agent(
    model="...",
    tools=[task, list_agents],
    system_prompt="Use list_agents to discover available subagents, then use task to invoke them."
)

Customize what context the subagent receives to execute its task. Add input that isn't practical to capture in a static prompt—full message history, prior results, or task metadata—by pulling from the agent's state.
:::python
from langchain.agents import AgentState
from langchain.tools import tool, ToolRuntime
class CustomState(AgentState):
    example_state_key: str

@tool(
    "subagent1_name",
    description="subagent1_description"
)
def call_subagent1(query: str, runtime: ToolRuntime[None, CustomState]):
    # Apply any logic needed to transform the messages into a suitable input
    subagent_input = some_logic(query, runtime.state["messages"])
    result = subagent1.invoke({
        "messages": subagent_input,
        # You could also pass other state keys here as needed.
        # Make sure to define these in both the main and subagent's
        # state schemas.
        "example_state_key": runtime.state["example_state_key"]
    })
    return result["messages"][-1].content
:::

:::js
import { tool, type AgentState } from "langchain";
import { getCurrentTaskInput } from "@langchain/langgraph";
import * as z from "zod";

// Example of passing the full conversation history to the sub agent via the state.
const callSubagent1 = tool(
  async ({ query }) => {
    const state = getCurrentTaskInput<AgentState>();
    // Apply any logic needed to transform the messages into a suitable input
    const subAgentInput = someLogic(query, state.messages);
    const result = await subagent1.invoke({
      messages: subAgentInput,
      // You could also pass other state keys here as needed.
      // Make sure to define these in both the main and subagent's
      // state schemas.
      exampleStateKey: state.exampleStateKey
    });
    return result.messages.at(-1)?.content;
  },
  {
    name: "subagent1_name",
    description: "subagent1_description",
    schema: z.object({
      query: z.string().describe("The query to send to subagent1")
    })
  }
);
:::
Customize what the main agent receives back so it can make good decisions. Two strategies:
- Prompt the sub-agent: Specify exactly what should be returned. A common failure mode is that the sub-agent performs tool calls or reasoning but doesn't include results in its final message—remind it that the supervisor only sees the final output.
- Format in code: Adjust or enrich the response before returning it. For example, pass specific state keys back in addition to the final text using a
Command.
:::python
from typing import Annotated
from langchain.tools import tool, InjectedToolCallId
from langchain.messages import ToolMessage
from langgraph.types import Command

@tool(
    "subagent1_name",
    description="subagent1_description"
)
def call_subagent1(
    query: str,
    tool_call_id: Annotated[str, InjectedToolCallId],
) -> Command:
    result = subagent1.invoke({
        "messages": [{"role": "user", "content": query}]
    })
    return Command(update={
        # Pass back additional state from the subagent
        "example_state_key": result["example_state_key"],
        "messages": [
            ToolMessage(
                content=result["messages"][-1].content,
                tool_call_id=tool_call_id
            )
        ]
    })
:::

:::js
import { tool, ToolMessage } from "langchain";
import { Command } from "@langchain/langgraph";
import * as z from "zod";
const callSubagent1 = tool(
  async ({ query }, config) => {
    const result = await subagent1.invoke({
      messages: [{ role: "user", content: query }]
    });
    // Return a Command to update multiple state keys
    return new Command({
      update: {
        // Pass back additional state from the subagent
        exampleStateKey: result.exampleStateKey,
        messages: [
          new ToolMessage({
            content: result.messages.at(-1)?.text,
            tool_call_id: config.toolCall?.id!
          })
        ]
      }
    });
  },
  {
    name: "subagent1_name",
    description: "subagent1_description",
    schema: z.object({
      query: z.string().describe("The query to send to subagent1")
    })
  }
);
:::
By default, subagents use the inherited checkpointer mode—each invocation starts with fresh state, supports interrupts, and runs safely in parallel. If you need a subagent to maintain its own persistent conversation history across invocations, compile it with checkpointer=True (continuations mode). See subgraph persistence for a full comparison of modes.
Because subagents are called inside tool functions, LangGraph cannot statically discover them. This means get_state with subgraphs will not return subagent state. If you need to read nested graph state (e.g., during an interrupt), invoke the subagent from a node function in a custom graph instead. See subgraph persistence for details on how each mode affects state visibility.
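The node-function alternative can be sketched without running a full graph. This is a minimal illustration under stated assumptions: `StubSubagent` and `research_node` are hypothetical names, and the stub stands in for an agent built with `create_agent(...)`; the commented lines show roughly where the node would be registered in a real `StateGraph` so LangGraph can discover the subagent statically.

```python
# Stub standing in for a real subagent; only the invoke() contract matters
# for this sketch.
class StubSubagent:
    def invoke(self, state: dict) -> dict:
        reply = {"role": "ai", "content": "findings"}
        return {"messages": state["messages"] + [reply]}

subagent = StubSubagent()

def research_node(state: dict) -> dict:
    # Invoking the subagent from a graph node (rather than inside a tool
    # function) lets LangGraph discover it statically, so
    # graph.get_state(config, subgraphs=True) can surface its nested state.
    result = subagent.invoke({"messages": state["messages"]})
    return {"messages": [result["messages"][-1]]}

# In a real graph this node would be wired up along these lines:
#   builder = StateGraph(MessagesState)
#   builder.add_node("research", research_node)
#   builder.add_edge(START, "research")
#   graph = builder.compile(checkpointer=checkpointer)
```

The trade-off versus tool-based invocation is that routing to the node is decided by graph edges rather than by the model choosing a tool.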