---
title: Agents
---
Agents combine language models with tools to create systems that can reason about tasks, decide which tools to use, and iteratively work towards solutions.
:::python
@[create_agent] provides a production-ready agent implementation.
:::
:::js
createAgent() provides a production-ready agent implementation.
:::
An LLM Agent runs tools in a loop to achieve a goal. An agent runs until a stop condition is met - i.e., when the model emits a final output or an iteration limit is reached.
```mermaid
%%{
  init: {
    "fontFamily": "monospace",
    "flowchart": {
      "curve": "linear"
    }
  }
}%%
graph TD
  %% Outside the agent
  QUERY([input])
  LLM{model}
  TOOL(tools)
  ANSWER([output])

  %% Main flows
  QUERY --> LLM
  LLM --"action"--> TOOL
  TOOL --"observation"--> LLM
  LLM --"finish"--> ANSWER

  classDef blueHighlight fill:#DBEAFE,stroke:#2563EB,color:#1E3A8A;
  classDef greenHighlight fill:#DCFCE7,stroke:#16A34A,color:#14532D;
  class QUERY blueHighlight;
  class ANSWER blueHighlight;
  class LLM greenHighlight;
  class TOOL greenHighlight;
```
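:::python
Conceptually, the loop can be sketched in a few lines of pseudocode. This is a simplified illustration, not the actual implementation; `max_iterations` is a hypothetical limit:

```python
def run_agent_loop(model, tools, messages, max_iterations=25):
    """Simplified sketch of the agent loop."""
    tools_by_name = {t.name: t for t in tools}
    model_with_tools = model.bind_tools(tools)
    for _ in range(max_iterations):
        response = model_with_tools.invoke(messages)  # model decides what to do
        messages.append(response)
        if not response.tool_calls:  # stop condition: final output
            return response
        for tool_call in response.tool_calls:  # action
            observation = tools_by_name[tool_call["name"]].invoke(tool_call)
            messages.append(observation)  # observation fed back to the model
    raise RuntimeError("Iteration limit reached")
```
:::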
:::python
@[create_agent] builds a graph-based agent runtime using LangGraph. A graph consists of nodes (steps) and edges (connections) that define how your agent processes information. The agent moves through this graph, executing nodes like the model node (which calls the model), the tools node (which executes tools), or middleware.
:::
:::js
createAgent() builds a graph-based agent runtime using LangGraph. A graph consists of nodes (steps) and edges (connections) that define how your agent processes information. The agent moves through this graph, executing nodes like the model node (which calls the model), the tools node (which executes tools), or middleware.
:::
Learn more about the Graph API.
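:::python
Because @[create_agent] returns a compiled LangGraph graph, you can inspect its structure directly. A minimal sketch (assuming LangGraph's standard `get_graph()` API; ASCII rendering requires the optional `grandalf` package):

```python
# Print the node/edge structure of the compiled agent graph
print(agent.get_graph().draw_ascii())
```
:::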
The model is the reasoning engine of your agent. It can be specified in multiple ways, supporting both static and dynamic model selection.
Static models are configured once when creating the agent and remain unchanged throughout execution. This is the most common and straightforward approach.
To initialize a static model from a model identifier string:
:::python
```python
from langchain.agents import create_agent

agent = create_agent("openai:gpt-5", tools=tools)
```
:::

:::js
```typescript
import { createAgent } from "langchain";

const agent = createAgent({
  model: "openai:gpt-5",
  tools: [],
});
```
:::
:::python
Model identifier strings support automatic inference (e.g., "gpt-5" will be inferred as "openai:gpt-5"). Refer to the @[reference][init_chat_model(model)] to see a full list of model identifier string mappings.
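Identifier resolution follows the same mappings as @[init_chat_model]; a quick sketch of the equivalent direct call:

```python
from langchain.chat_models import init_chat_model

# "gpt-5" is inferred as "openai:gpt-5"
model = init_chat_model("gpt-5")
```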
For more control over the model configuration, initialize a model instance directly using the provider package. In this example, we use @[ChatOpenAI]. See Chat models for other available chat model classes.
```python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-5",
    temperature=0.1,
    max_tokens=1000,
    timeout=30,
    # ... (other params)
)
agent = create_agent(model, tools=tools)
```

Model instances give you complete control over configuration. Use them when you need to set specific parameters like `temperature`, `max_tokens`, `timeout`, `base_url`, and other provider-specific settings. Refer to the reference to see available params and methods on your model.
:::
:::js
Model identifier strings use the format provider:model (e.g. "openai:gpt-5"). You may want more control over the model configuration, in which case you can initialize a model instance directly using the provider package:
import { createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
model: "gpt-5.4",
temperature: 0.1,
maxTokens: 1000,
timeout: 30
});
const agent = createAgent({
model,
tools: []
});Model instances give you complete control over configuration. Use them when you need to set specific parameters like temperature, max_tokens, timeouts, or configure API keys, base_url, and other provider-specific settings. Refer to the API reference to see available params and methods on your model.
:::
Dynamic models are selected at runtime based on the current state and context. This enables sophisticated routing logic and cost optimization.
:::python
To use a dynamic model, create middleware using the @[@wrap_model_call] decorator that modifies the model in the request:
```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse

basic_model = ChatOpenAI(model="gpt-5.4-mini")
advanced_model = ChatOpenAI(model="gpt-5.4")

@wrap_model_call
def dynamic_model_selection(request: ModelRequest, handler) -> ModelResponse:
    """Choose model based on conversation complexity."""
    message_count = len(request.state["messages"])

    if message_count > 10:
        # Use an advanced model for longer conversations
        model = advanced_model
    else:
        model = basic_model

    return handler(request.override(model=model))

agent = create_agent(
    model=basic_model,  # Default model
    tools=tools,
    middleware=[dynamic_model_selection]
)
```
:::

:::js
To use a dynamic model, create middleware with wrapModelCall that modifies the model in the request:
import { ChatOpenAI } from "@langchain/openai";
import { createAgent, createMiddleware } from "langchain";
const basicModel = new ChatOpenAI({ model: "gpt-5.4-mini" });
const advancedModel = new ChatOpenAI({ model: "gpt-5.4" });
const dynamicModelSelection = createMiddleware({
name: "DynamicModelSelection",
wrapModelCall: (request, handler) => {
// Choose model based on conversation complexity
const messageCount = request.messages.length;
return handler({
...request,
model: messageCount > 10 ? advancedModel : basicModel,
});
},
});
const agent = createAgent({
model: "gpt-5.4-mini", // Base model (used when messageCount β€ 10)
tools,
middleware: [dynamicModelSelection],
});For more details on middleware and advanced patterns, see the middleware documentation. :::
For model configuration details, see [Models](/oss/langchain/models). For dynamic model selection patterns, see [Dynamic model in middleware](/oss/langchain/middleware#dynamic-model).

Tools give agents the ability to take actions. Agents go beyond simple model-only tool binding by facilitating:
- Multiple tool calls in sequence (triggered by a single prompt)
- Parallel tool calls when appropriate
- Dynamic tool selection based on previous results
- Tool retry logic and error handling
- State persistence across tool calls
For more information, see Tools.
Static tools are defined when creating the agent and remain unchanged throughout execution. This is the most common and straightforward approach.
To define an agent with static tools, pass a list of the tools to the agent.
:::python
Tools can be specified as plain Python functions or coroutines. The `@tool` decorator can be used to customize tool names, descriptions, argument schemas, and other properties.
```python
from langchain.tools import tool
from langchain.agents import create_agent

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

@tool
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    return f"Weather in {location}: Sunny, 72°F"

agent = create_agent(model, tools=[search, get_weather])
```
:::

:::js
import * as z from "zod";
import { createAgent, tool } from "langchain";
const search = tool(
({ query }) => `Results for: ${query}`,
{
name: "search",
description: "Search for information",
schema: z.object({
query: z.string().describe("The query to search for"),
}),
}
);
const getWeather = tool(
({ location }) => `Weather in ${location}: Sunny, 72Β°F`,
{
name: "get_weather",
description: "Get weather information for a location",
schema: z.object({
location: z.string().describe("The location to get weather for"),
}),
}
);
const agent = createAgent({
model: "gpt-5.4",
tools: [search, getWeather],
});:::
If an empty tool list is provided, the agent will consist of a single LLM node without tool-calling capabilities.
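:::python
For example, a model-only agent (a minimal sketch; the model string is a placeholder):

```python
from langchain.agents import create_agent

# With no tools, the graph reduces to a single model node:
# the model answers directly and the tool loop is never entered.
agent = create_agent("openai:gpt-5", tools=[])
```
:::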
With dynamic tools, the set of tools available to the agent is modified at runtime rather than being fixed upfront. Not every tool is appropriate for every situation: too many tools can overload the model's context and increase errors, while too few limit its capabilities. Dynamic tool selection adapts the available toolset based on authentication state, user permissions, feature flags, or conversation stage.
There are two approaches depending on whether tools are known ahead of time:
When all possible tools are known at agent creation time, you can pre-register them and dynamically filter which ones are exposed to the model based on state, permissions, or context.
<Tabs>
<Tab title="State">
Enable advanced tools only after certain conversation milestones:
:::python
```python
from typing import Callable

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse

@wrap_model_call
def state_based_tools(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on conversation State."""
    # Read from State: check if user has authenticated
    state = request.state
    is_authenticated = state.get("authenticated", False)
    message_count = len(state["messages"])

    # Only enable sensitive tools after authentication
    if not is_authenticated:
        tools = [t for t in request.tools if t.name.startswith("public_")]
        request = request.override(tools=tools)
    elif message_count < 5:
        # Limit tools early in conversation
        tools = [t for t in request.tools if t.name != "advanced_search"]
        request = request.override(tools=tools)

    return handler(request)

agent = create_agent(
    model="gpt-5.4",
    tools=[public_search, private_search, advanced_search],
    middleware=[state_based_tools]
)
```
:::
:::js
```typescript
import { createAgent, createMiddleware } from "langchain";

const stateBasedTools = createMiddleware({
  name: "StateBasedTools",
  wrapModelCall: (request, handler) => {
    // Read from State: check authentication and conversation length
    const state = request.state as typeof request.state & {
      authenticated?: boolean;
    };
    const isAuthenticated = state.authenticated ?? false;
    const messageCount = state.messages.length;

    let filteredTools = request.tools;

    // Only enable sensitive tools after authentication
    if (!isAuthenticated) {
      filteredTools = request.tools.filter((t) => t.name.startsWith("public_"));
    } else if (messageCount < 5) {
      // Limit tools early in conversation
      filteredTools = request.tools.filter((t) => t.name !== "advanced_search");
    }

    return handler({ ...request, tools: filteredTools });
  },
});

const agent = createAgent({
  model: "claude-sonnet-4-20250514",
  tools: [publicSearch, privateSearch, advancedSearch],
  middleware: [stateBasedTools],
});
```
:::
</Tab>
<Tab title="Store">
Filter tools based on user preferences or feature flags in Store:
:::python
```python
from dataclasses import dataclass
from typing import Callable

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from langgraph.store.memory import InMemoryStore

@dataclass
class Context:
    user_id: str

@wrap_model_call
def store_based_tools(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on Store preferences."""
    user_id = request.runtime.context.user_id

    # Read from Store: get user's enabled features
    store = request.runtime.store
    feature_flags = store.get(("features",), user_id)

    if feature_flags:
        enabled_features = feature_flags.value.get("enabled_tools", [])
        # Only include tools that are enabled for this user
        tools = [t for t in request.tools if t.name in enabled_features]
        request = request.override(tools=tools)

    return handler(request)

agent = create_agent(
    model="gpt-5.4",
    tools=[search_tool, analysis_tool, export_tool],
    middleware=[store_based_tools],
    context_schema=Context,
    store=InMemoryStore()
)
```
:::
:::js
```typescript
import { createAgent, createMiddleware } from "langchain";
import { InMemoryStore } from "@langchain/langgraph";
import * as z from "zod";

const contextSchema = z.object({
  userId: z.string(),
});

const storeBasedTools = createMiddleware({
  name: "StoreBasedTools",
  contextSchema,
  wrapModelCall: async (request, handler) => {
    const userId = request.runtime.context.userId;

    // Read from Store: get user's enabled features
    const featureFlags = await request.runtime.store?.get(["features"], userId);

    let filteredTools = request.tools;
    if (featureFlags) {
      // Only include tools that are enabled for this user
      const enabledFeatures =
        (featureFlags.value.enabledTools as string[] | undefined) ?? [];
      filteredTools = request.tools.filter((t) =>
        enabledFeatures.includes(t.name)
      );
    }

    return handler({ ...request, tools: filteredTools });
  },
});

const agent = createAgent({
  model: "claude-sonnet-4-20250514",
  tools: [searchTool, analysisTool, exportTool],
  middleware: [storeBasedTools],
  contextSchema,
  store: new InMemoryStore(),
});
```
:::
</Tab>
<Tab title="Runtime Context">
Filter tools based on user permissions from Runtime Context:
:::python
```python
from dataclasses import dataclass
from typing import Callable

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse

@dataclass
class Context:
    user_role: str

@wrap_model_call
def context_based_tools(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on Runtime Context permissions."""
    # Read from Runtime Context: get user role
    if request.runtime is None or request.runtime.context is None:
        # If no context provided, default to viewer (most restrictive)
        user_role = "viewer"
    else:
        user_role = request.runtime.context.user_role

    if user_role == "admin":
        # Admins get all tools
        pass
    elif user_role == "editor":
        # Editors can't delete
        tools = [t for t in request.tools if t.name != "delete_data"]
        request = request.override(tools=tools)
    else:
        # Viewers get read-only tools
        tools = [t for t in request.tools if t.name.startswith("read_")]
        request = request.override(tools=tools)

    return handler(request)

agent = create_agent(
    model="gpt-5.4",
    tools=[read_data, write_data, delete_data],
    middleware=[context_based_tools],
    context_schema=Context
)
```
:::
:::js
```typescript
import * as z from "zod";
import { createAgent, createMiddleware } from "langchain";

const contextSchema = z.object({
  userRole: z.string(),
});

const contextBasedTools = createMiddleware({
  name: "ContextBasedTools",
  contextSchema,
  wrapModelCall: (request, handler) => {
    // Read from Runtime Context: get user role
    const userRole = request.runtime.context.userRole;

    let filteredTools = request.tools;
    if (userRole === "admin") {
      // Admins get all tools
    } else if (userRole === "editor") {
      // Editors can't delete
      filteredTools = request.tools.filter((t) => t.name !== "delete_data");
    } else {
      // Viewers get read-only tools
      filteredTools = request.tools.filter((t) => t.name.startsWith("read_"));
    }

    return handler({ ...request, tools: filteredTools });
  },
});

const agent = createAgent({
  model: "claude-sonnet-4-20250514",
  tools: [readData, writeData, deleteData],
  middleware: [contextBasedTools],
  contextSchema,
});
```
:::
</Tab>
</Tabs>
This approach is best when:
- All possible tools are known at compile/startup time
- You want to filter based on permissions, feature flags, or conversation state
- Tools are static but their availability is dynamic
See [Dynamically selecting tools](/oss/langchain/middleware/custom#dynamically-selecting-tools) for more examples.
When tools are discovered or created at runtime (e.g., loaded from an MCP server, generated based on user data, or fetched from a remote registry), you need to both register the tools and handle their execution dynamically.
This requires two middleware hooks:
1. `wrap_model_call` - Add the dynamic tools to the request
2. `wrap_tool_call` - Handle execution of the dynamically added tools
:::python
```python
from langchain.tools import tool
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware, ModelRequest, ToolCallRequest

# A tool that will be added dynamically at runtime
@tool
def calculate_tip(bill_amount: float, tip_percentage: float = 20.0) -> str:
    """Calculate the tip amount for a bill."""
    tip = bill_amount * (tip_percentage / 100)
    return f"Tip: ${tip:.2f}, Total: ${bill_amount + tip:.2f}"

class DynamicToolMiddleware(AgentMiddleware):
    """Middleware that registers and handles dynamic tools."""

    def wrap_model_call(self, request: ModelRequest, handler):
        # Add dynamic tool to the request
        # This could be loaded from an MCP server, database, etc.
        updated = request.override(tools=[*request.tools, calculate_tip])
        return handler(updated)

    def wrap_tool_call(self, request: ToolCallRequest, handler):
        # Handle execution of the dynamic tool
        if request.tool_call["name"] == "calculate_tip":
            return handler(request.override(tool=calculate_tip))
        return handler(request)

agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],  # Only static tools registered here
    middleware=[DynamicToolMiddleware()],
)

# The agent can now use both get_weather AND calculate_tip
result = agent.invoke({
    "messages": [{"role": "user", "content": "Calculate a 20% tip on $85"}]
})
```
:::
:::js
```typescript
import { createAgent, createMiddleware, tool } from "langchain";
import * as z from "zod";

// A tool that will be added dynamically at runtime
const calculateTip = tool(
  ({ billAmount, tipPercentage = 20 }) => {
    const tip = billAmount * (tipPercentage / 100);
    return `Tip: $${tip.toFixed(2)}, Total: $${(billAmount + tip).toFixed(2)}`;
  },
  {
    name: "calculate_tip",
    description: "Calculate the tip amount for a bill",
    schema: z.object({
      billAmount: z.number().describe("The bill amount"),
      tipPercentage: z.number().default(20).describe("Tip percentage"),
    }),
  }
);

const dynamicToolMiddleware = createMiddleware({
  name: "DynamicToolMiddleware",
  wrapModelCall: (request, handler) => {
    // Add dynamic tool to the request
    // This could be loaded from an MCP server, database, etc.
    return handler({
      ...request,
      tools: [...request.tools, calculateTip],
    });
  },
  wrapToolCall: (request, handler) => {
    // Handle execution of the dynamic tool
    if (request.toolCall.name === "calculate_tip") {
      return handler({ ...request, tool: calculateTip });
    }
    return handler(request);
  },
});

const agent = createAgent({
  model: "gpt-4o",
  tools: [getWeather], // Only static tools registered here
  middleware: [dynamicToolMiddleware],
});

// The agent can now use both getWeather AND calculateTip
const result = await agent.invoke({
  messages: [{ role: "user", content: "Calculate a 20% tip on $85" }],
});
```
:::
This approach is best when:
- Tools are discovered at runtime (e.g., from an MCP server)
- Tools are generated dynamically based on user data or configuration
- You're integrating with external tool registries
<Note>
The `wrap_tool_call` hook is required for runtime-registered tools because the agent needs to know how to execute tools that weren't in the original tool list. Without it, the agent won't know how to invoke the dynamically added tool.
</Note>
:::python
To customize how tool errors are handled, use the @[@wrap_tool_call] decorator to create middleware:
```python
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_tool_call
from langchain.messages import ToolMessage

@wrap_tool_call
def handle_tool_errors(request, handler):
    """Handle tool execution errors with custom messages."""
    try:
        return handler(request)
    except Exception as e:
        # Return a custom error message to the model
        return ToolMessage(
            content=f"Tool error: Please check your input and try again. ({str(e)})",
            tool_call_id=request.tool_call["id"]
        )

agent = create_agent(
    model="gpt-5.4",
    tools=[search, get_weather],
    middleware=[handle_tool_errors]
)
```

The agent will return a @[ToolMessage] with the custom error message when a tool fails:

```python
[
    ...
    ToolMessage(
        content="Tool error: Please check your input and try again. (division by zero)",
        tool_call_id="..."
    ),
    ...
]
```
:::

:::js
To customize how tool errors are handled, use the wrapToolCall hook in a custom middleware:
import { createAgent, createMiddleware, ToolMessage } from "langchain";
const handleToolErrors = createMiddleware({
name: "HandleToolErrors",
wrapToolCall: async (request, handler) => {
try {
return await handler(request);
} catch (error) {
// Return a custom error message to the model
return new ToolMessage({
content: `Tool error: Please check your input and try again. (${error})`,
tool_call_id: request.toolCall.id!,
});
}
},
});
const agent = createAgent({
model: "gpt-5.4",
tools: [
/* ... */
],
middleware: [handleToolErrors],
});The agent will return a @[ToolMessage] with the custom error message when a tool fails.
:::
Agents follow the ReAct ("Reasoning + Acting") pattern, alternating between brief reasoning steps with targeted tool calls and feeding the resulting observations into subsequent decisions until they can deliver a final answer.
**Prompt:** Identify the current most popular wireless headphones and verify availability.

```
================================ Human Message =================================

Find the most popular wireless headphones right now and check if they're in stock
```

- Reasoning: "Popularity is time-sensitive, I need to use the provided search tool."
- Acting: Call `search_products("wireless headphones")`

```
================================== Ai Message ==================================
Tool Calls:
  search_products (call_abc123)
    Call ID: call_abc123
    Args:
      query: wireless headphones
================================= Tool Message =================================

Found 5 products matching "wireless headphones". Top 5 results: WH-1000XM5, ...
```

- Reasoning: "I need to confirm availability for the top-ranked item before answering."
- Acting: Call `check_inventory("WH-1000XM5")`

```
================================== Ai Message ==================================
Tool Calls:
  check_inventory (call_def456)
    Call ID: call_def456
    Args:
      product_id: WH-1000XM5
================================= Tool Message =================================

Product WH-1000XM5: 10 units in stock
```

- Reasoning: "I have the most popular model and its stock status. I can now answer the user's question."
- Acting: Produce final answer

```
================================== Ai Message ==================================

I found wireless headphones (model WH-1000XM5) with 10 units in stock...
```
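:::python
For reference, the two tools assumed by this transcript could be stubbed as follows (a hypothetical sketch with hardcoded return values):

```python
from langchain.tools import tool
from langchain.agents import create_agent

@tool
def search_products(query: str) -> str:
    """Search the product catalog, ranked by current popularity."""
    return f'Found 5 products matching "{query}". Top 5 results: WH-1000XM5, ...'

@tool
def check_inventory(product_id: str) -> str:
    """Check stock levels for a product ID."""
    return f"Product {product_id}: 10 units in stock"

agent = create_agent(model, tools=[search_products, check_inventory])
```
:::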
:::python
You can shape how your agent approaches tasks by providing a prompt. The @[system_prompt] parameter can be provided as a string:
:::
:::js
You can shape how your agent approaches tasks by providing a prompt. The systemPrompt parameter can be provided as a string:
:::
:::python
```python
agent = create_agent(
    model,
    tools,
    system_prompt="You are a helpful assistant. Be concise and accurate."
)
```
:::

:::js
```typescript
const agent = createAgent({
  model,
  tools,
  systemPrompt: "You are a helpful assistant. Be concise and accurate.",
});
```
:::
:::python
When no @[system_prompt] is provided, the agent will infer its task from the messages directly.
The @[system_prompt] parameter accepts either a str or a @[SystemMessage]. Using a SystemMessage gives you more control over the prompt structure, which is useful for provider-specific features like Anthropic's prompt caching:
```python
from langchain.agents import create_agent
from langchain.messages import SystemMessage, HumanMessage

literary_agent = create_agent(
    model="anthropic:claude-sonnet-4-20250514",
    system_prompt=SystemMessage(
        content=[
            {
                "type": "text",
                "text": "You are an AI assistant tasked with analyzing literary works.",
            },
            {
                "type": "text",
                "text": "<the entire contents of 'Pride and Prejudice'>",
                "cache_control": {"type": "ephemeral"},
            },
        ]
    )
)

result = literary_agent.invoke(
    {"messages": [HumanMessage("Analyze the major themes in 'Pride and Prejudice'.")]}
)
```

The `cache_control` field with `{"type": "ephemeral"}` tells Anthropic to cache that content block, reducing latency and costs for repeated requests that use the same system prompt.
:::
:::js
When no systemPrompt is provided, the agent will infer its task from the messages directly.
The systemPrompt parameter accepts either a string or a SystemMessage. Using a SystemMessage gives you more control over the prompt structure, which is useful for provider-specific features like Anthropic's prompt caching:
import { createAgent } from "langchain";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";
const literaryAgent = createAgent({
model: "google_genai:gemini-3.1-pro-preview",
systemPrompt: new SystemMessage({
content: [
{
type: "text",
text: "You are an AI assistant tasked with analyzing literary works.",
},
{
type: "text",
text: "<the entire contents of 'Pride and Prejudice'>",
cache_control: { type: "ephemeral" }
}
]
})
});
const result = await literaryAgent.invoke({
messages: [new HumanMessage("Analyze the major themes in 'Pride and Prejudice'.")]
});The cache_control field with { type: "ephemeral" } tells Anthropic to cache that content block, reducing latency and costs for repeated requests that use the same system prompt.
:::
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use middleware.
:::python
The @[@dynamic_prompt] decorator creates middleware that generates system prompts based on the model request:
```python
from typing import TypedDict

from langchain.agents import create_agent
from langchain.agents.middleware import dynamic_prompt, ModelRequest

class Context(TypedDict):
    user_role: str

@dynamic_prompt
def user_role_prompt(request: ModelRequest) -> str:
    """Generate system prompt based on user role."""
    user_role = request.runtime.context.get("user_role", "user")
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        return f"{base_prompt} Provide detailed technical responses."
    elif user_role == "beginner":
        return f"{base_prompt} Explain concepts simply and avoid jargon."
    return base_prompt

agent = create_agent(
    model="gpt-5.4",
    tools=[web_search],
    middleware=[user_role_prompt],
    context_schema=Context
)

# The system prompt will be set dynamically based on context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Explain machine learning"}]},
    context={"user_role": "expert"}
)
```
:::
:::js
import * as z from "zod";
import { createAgent, dynamicSystemPromptMiddleware } from "langchain";
const contextSchema = z.object({
userRole: z.enum(["expert", "beginner"]),
});
const agent = createAgent({
model: "gpt-5.4",
tools: [/* ... */],
contextSchema,
middleware: [
dynamicSystemPromptMiddleware<z.infer<typeof contextSchema>>((state, runtime) => {
const userRole = runtime.context.userRole || "user";
const basePrompt = "You are a helpful assistant.";
if (userRole === "expert") {
return `${basePrompt} Provide detailed technical responses.`;
} else if (userRole === "beginner") {
return `${basePrompt} Explain concepts simply and avoid jargon.`;
}
return basePrompt;
}),
],
});
// The system prompt will be set dynamically based on context
const result = await agent.invoke(
{ messages: [{ role: "user", content: "Explain machine learning" }] },
{ context: { userRole: "expert" } }
);:::
For more details on message types and formatting, see [Messages](/oss/langchain/messages). For comprehensive middleware documentation, see [Middleware](/oss/langchain/middleware).

:::python
Set an optional @[name][create_agent(name)] for the agent. This is used as the node identifier when adding the agent as a subgraph in multi-agent systems:
```python
agent = create_agent(
    model,
    tools,
    name="research_assistant"
)
```
:::
:::js
Set an optional name for the agent. This is used as the node identifier when adding the agent as a subgraph in multi-agent systems:
```typescript
const agent = createAgent({
  model,
  tools,
  name: "research_assistant",
});
```
:::
Prefer `snake_case` for agent names (e.g., `research_assistant` instead of `Research Assistant`). Some model providers reject names containing spaces or special characters. Using only alphanumeric characters, underscores, and hyphens ensures compatibility across providers. The same applies to [tool names](/oss/langchain/tools).

:::python
You can invoke an agent by passing an update to its State. All agents include a sequence of messages in their state; to invoke the agent, pass a new message:
:::
:::js
You can invoke an agent by passing an update to its State. All agents include a sequence of messages in their state; to invoke the agent, pass a new message:
:::
:::python
```python
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
)
```
:::

:::js
```typescript
await agent.invoke({
  messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
});
```
:::
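:::python
The return value is the agent's final state. For example, to read the final reply (a minimal sketch assuming the default messages state):

```python
# The final state holds the full message history; the last entry is the reply
print(result["messages"][-1].content)
```
:::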
For streaming steps and/or tokens from the agent, refer to the streaming guide.
Beyond that, the agent follows the LangGraph Graph API and supports all of its associated methods, such as `stream` and `invoke`.
:::python
In some situations, you may want the agent to return an output in a specific format. LangChain provides strategies for structured output via the @[response_format][create_agent(response_format)] parameter.
ToolStrategy uses artificial tool calling to generate structured output. This works with any model that supports tool calling. ToolStrategy should be used when provider-native structured output (via ProviderStrategy) is not available or reliable.
```python
from pydantic import BaseModel
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy

class ContactInfo(BaseModel):
    name: str
    email: str
    phone: str

agent = create_agent(
    model="gpt-5.4-mini",
    tools=[search_tool],
    response_format=ToolStrategy(ContactInfo)
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Extract contact info from: John Doe, john@example.com, (555) 123-4567"}]
})

result["structured_response"]
# ContactInfo(name='John Doe', email='john@example.com', phone='(555) 123-4567')
```

ProviderStrategy uses the model provider's native structured output generation. This is more reliable but only works with providers that support native structured output:

```python
from langchain.agents.structured_output import ProviderStrategy

agent = create_agent(
    model="gpt-5.4",
    response_format=ProviderStrategy(ContactInfo)
)
```
:::
:::js
In some situations, you may want the agent to return an output in a specific format. LangChain provides a simple, universal way to do this with the responseFormat parameter.
import * as z from "zod";
import { createAgent } from "langchain";
const ContactInfo = z.object({
name: z.string(),
email: z.string(),
phone: z.string(),
});
const agent = createAgent({
model: "gpt-5.4",
responseFormat: ContactInfo,
});
const result = await agent.invoke({
messages: [
{
role: "user",
content: "Extract contact info from: John Doe, john@example.com, (555) 123-4567",
},
],
});
console.log(result.structuredResponse);
// {
// name: 'John Doe',
// email: 'john@example.com',
// phone: '(555) 123-4567'
// }::: To learn about structured output, see Structured output.
Agents maintain conversation history automatically through the message state. You can also configure the agent to use a custom state schema to remember additional information during the conversation.
Information stored in the state can be thought of as the short-term memory of the agent:
:::python
Custom state schemas must extend @[AgentState] as a TypedDict.
There are two ways to define custom state:
- Via middleware (preferred)
- Via @[state_schema] on @[create_agent]
Use middleware to define custom state when your custom state needs to be accessed by specific middleware hooks and tools attached to said middleware.
```python
from typing import Any

from langchain.agents import AgentState, create_agent
from langchain.agents.middleware import AgentMiddleware

class CustomState(AgentState):
    user_preferences: dict

class CustomMiddleware(AgentMiddleware):
    state_schema = CustomState
    tools = [tool1, tool2]

    def before_model(self, state: CustomState, runtime) -> dict[str, Any] | None:
        ...

agent = create_agent(
    model,
    tools=tools,
    middleware=[CustomMiddleware()]
)

# The agent can now track additional state beyond messages
result = agent.invoke({
    "messages": [{"role": "user", "content": "I prefer technical explanations"}],
    "user_preferences": {"style": "technical", "verbosity": "detailed"},
})
```

Use the @[state_schema] parameter as a shortcut to define custom state that is only used in tools.
```python
from langchain.agents import AgentState, create_agent

class CustomState(AgentState):
    user_preferences: dict

agent = create_agent(
    model,
    tools=[tool1, tool2],
    state_schema=CustomState
)

# The agent can now track additional state beyond messages
result = agent.invoke({
    "messages": [{"role": "user", "content": "I prefer technical explanations"}],
    "user_preferences": {"style": "technical", "verbosity": "detailed"},
})
```
:::

:::js
import { z } from "zod/v4";
import { StateSchema, MessagesValue } from "@langchain/langgraph";
import { createAgent } from "langchain";
const CustomAgentState = new StateSchema({
messages: MessagesValue,
userPreferences: z.record(z.string(), z.string()),
});
const customAgent = createAgent({
model: "gpt-5.4",
tools: [],
stateSchema: CustomAgentState,
});:::
:::python
Defining custom state via middleware is preferred over defining it via @[state_schema] on @[create_agent] because it keeps state extensions conceptually scoped to the relevant middleware and tools.

@[state_schema] is still supported on @[create_agent] for backwards compatibility.
:::

We've seen how the agent can be called with `invoke` to get a final response. If the agent executes multiple steps, this may take a while. To show intermediate progress, we can stream back messages as they occur.
:::python
```python
from langchain.messages import AIMessage, HumanMessage

for chunk in agent.stream({
    "messages": [{"role": "user", "content": "Search for AI news and summarize the findings"}]
}, stream_mode="values"):
    # Each chunk contains the full state at that point
    latest_message = chunk["messages"][-1]
    if latest_message.content:
        if isinstance(latest_message, HumanMessage):
            print(f"User: {latest_message.content}")
        elif isinstance(latest_message, AIMessage):
            print(f"Agent: {latest_message.content}")
    elif latest_message.tool_calls:
        print(f"Calling tools: {[tc['name'] for tc in latest_message.tool_calls]}")
```
:::

:::js
```typescript
const stream = await agent.stream(
  {
    messages: [{
      role: "user",
      content: "Search for AI news and summarize the findings",
    }],
  },
  { streamMode: "values" }
);

for await (const chunk of stream) {
  // Each chunk contains the full state at that point
  const latestMessage = chunk.messages.at(-1);
  if (latestMessage?.content) {
    console.log(`Agent: ${latestMessage.content}`);
  } else if (latestMessage?.tool_calls) {
    const toolCallNames = latestMessage.tool_calls.map((tc) => tc.name);
    console.log(`Calling tools: ${toolCallNames.join(", ")}`);
  }
}
```
:::
For more details on streaming, see [Streaming](/oss/langchain/streaming).

Middleware provides powerful extensibility for customizing agent behavior at different stages of execution. You can use middleware to:
- Process state before the model is called (e.g., message trimming, context injection)
- Modify or validate the model's response (e.g., guardrails, content filtering)
- Handle tool execution errors with custom logic
- Implement dynamic model selection based on state or context
- Add custom logging, monitoring, or analytics
Middleware integrates seamlessly into the agent's execution, allowing you to intercept and modify data flow at key points without changing the core agent logic.
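:::python
As a quick illustration, a logging hook that runs before each model call. This is a hedged sketch: it assumes the @[@before_model] decorator passes the current state and runtime, as in the middleware examples above, and that returning `None` leaves the state unchanged.

```python
from langchain.agents import create_agent
from langchain.agents.middleware import before_model

@before_model
def log_model_calls(state, runtime) -> None:
    # Custom logging before every model call; returning None makes no state update
    print(f"Calling model with {len(state['messages'])} messages")

agent = create_agent(model, tools=tools, middleware=[log_model_calls])
```
:::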
:::python
For comprehensive middleware documentation including decorators like @[@before_model], @[@after_model], and @[@wrap_tool_call], see Middleware.
:::
:::js
For comprehensive middleware documentation including hooks like beforeModel, afterModel, and wrapToolCall, see Middleware.
:::