diff --git a/src/oss/langchain/philosophy.mdx b/src/oss/langchain/philosophy.mdx
index ee0bbaa502..a257dd3540 100644
--- a/src/oss/langchain/philosophy.mdx
+++ b/src/oss/langchain/philosophy.mdx
@@ -1,131 +1,139 @@
----
-title: Philosophy
-description: LangChain exists to be the easiest place to start building with LLMs, while also being flexible and production-ready.
-mode: wide
----
-
-LangChain is driven by a few core beliefs:
-
-- Large Language Models (LLMs) are great, powerful new technology.
-- LLMs are even better when you combine them with external sources of data.
-- LLMs will transform what the applications of the future look like. Specifically, the applications of the future will look more and more agentic.
-- It is still very early on in that transformation.
-- While it's easy to build a prototype of those agentic applications, it's still really hard to build agents that are reliable enough to put into production.
-
-With LangChain, we have two core focuses:
-
-
-
- Different providers expose different APIs, with different model parameters and different message formats.
- Standardizing these model inputs and outputs is a core focus, making it easy for developer to easily change to the most recent state-of-the-art model, avoiding lock-in.
-
-
- Models should be used for more than just *text generation* - they should also be used to orchestrate more complex flows that interact with other data. LangChain makes it easy to define [tools](/oss/langchain/tools) that LLMs can use dynamically, as well as help with parsing of and access to unstructured data.
-
-
-
-## History
-
-Given the constant rate of change in the field, LangChain has also evolved over time. Below is a brief timeline of how LangChain has changed over the years, evolving alongside what it means to build with LLMs:
-
-
- A month before ChatGPT, **LangChain was launched as a Python package**. It consisted of two main components:
-
- - LLM abstractions
- - "Chains", or predetermined steps of computation to run, for common use cases. For example - RAG: run a retrieval step, then run a generation step.
-
- The name LangChain comes from "Language" (like Language models) and "Chains".
-
-
-
- The first general purpose agents were added to LangChain.
-
- These general purpose agents were based on the [ReAct paper](https://arxiv.org/abs/2210.03629) (ReAct standing for Reasoning and Acting). They used LLMs to generate JSON that represented tool calls, and then parsed that JSON to determine what tools to call.
-
-
-
- OpenAI releases a 'Chat Completion' API.
-
- Previously, models took in strings and returned a string. In the ChatCompletions API, they evolved to take in a list of messages and return a message. Other model providers followed suit, and LangChain updated to work with lists of messages.
-
-
-
- LangChain releases a JavaScript version.
-
- LLMs and agents will change how applications are built and JavaScript is the language of application developers.
-
-
-
- **LangChain Inc. was formed as a company** around the open source LangChain project.
-
- The main goal was to "make intelligent agents ubiquitous". The team recognized that while LangChain was a key part (LangChain made it simple to get started with LLMs), there was also a need for other components.
-
-
-
- OpenAI releases 'function calling' in their API.
-
- This allowed the API to explicitly generate payloads that represented tool calls. Other model providers followed suit, and LangChain was updated to use this as the preferred method for tool calling (rather than parsing JSON).
-
-
-
- **LangSmith is released** as closed source platform by LangChain Inc., providing observability and evals.
-
- The main issue with building agents is getting them to be reliable, and LangSmith, which provides observability and evals, was built to solve that need. LangChain was updated to integrate seamlessly with LangSmith.
-
-
-
- **LangChain releases 0.1.0**, its first non-0.0.x.
-
- The industry matured from prototypes to production, and as such, LangChain increased its focus on stability.
-
-
-
- **LangGraph is released** as an open-source library.
-
- The original LangChain had two focuses: LLM abstractions, and high-level interfaces for getting started with common applications; however, it was missing a low-level orchestration layer that allowed developers to control the exact flow of their agent. Enter: LangGraph.
-
- When building LangGraph, we learned from lessons when building LangChain and added functionality we discovered was needed: streaming, durable execution, short-term memory, human-in-the-loop, and more.
-
-
-
- **LangChain has over 700 integrations.**
-
- :::python
- Integrations were split out of the core LangChain package, and either moved into their own standalone packages (for the core integrations) or `langchain-community`.
- :::
- :::js
- Integrations were split out of the core LangChain package, and either moved into their own standalone packages (for the core integrations) or `@langchain/community`.
- :::
-
-
-
- LangGraph becomes the preferred way to build any AI application that is more than a single LLM call.
-
- As developers tried to improve the reliability of their applications, they needed more control than the high-level interfaces provided. LangGraph provided that low-level flexibility. Most chains and agents were marked as deprecated in LangChain with guides on how to migrate them to LangGraph. There is still one high-level abstraction created in LangGraph: an agent abstraction. It is built on top of low-level LangGraph and has the same interface as the ReAct agents from LangChain.
-
-
-
- Model APIs become more multimodal.
-
- :::python
- Models started to accept files, images, videos, and more. We updated the `langchain-core` message format accordingly to allow developers to specify these multimodal inputs in a standard way.
- :::
- :::js
- Models started to accept files, images, videos, and more. We updated the `@langchain/core` message format accordingly to allow developers to specify these multimodal inputs in a standard way.
- :::
-
-
-
- **LangChain releases 1.0** with two major changes:
-
- 1. Complete revamp of all chains and agents in `langchain`. All chains and agents are now replaced with only one high level abstraction: an agent abstraction built on top of LangGraph. This was the high-level abstraction that was originally created in LangGraph, but just moved to LangChain.
-
- :::python
- For users still using old LangChain chains/agents who do NOT want to upgrade (note: we recommend you do), you can continue using old LangChain by installing the `langchain-classic` package.
- :::
- :::js
- For users still using old LangChain chains/agents who do NOT want to upgrade (note: we recommend you do), you can continue using old LangChain by installing the `@langchain/classic` package.
- :::
-
- 2. A standard message content format: Model APIs evolved from returning messages with a simple content string to more complex output types - reasoning blocks, citations, server-side tool calls, etc. LangChain evolved its message formats to standardize these across providers.
-
+---
+title: Philosophy
+description: LangChain exists to be the easiest place to start building with LLMs, while also being flexible and production-ready.
+mode: wide
+---
+
+LangChain is driven by a few core beliefs:
+
+- Large Language Models (LLMs) are great, powerful new technology.
+- LLMs are even better when you combine them with external sources of data.
+- LLMs will transform what the applications of the future look like. Specifically, the applications of the future will look more and more agentic.
+- It is still very early on in that transformation.
+- While it's easy to build a prototype of those agentic applications, it's still really hard to build agents that are reliable enough to put into production.
+
+Today developers can choose how they build agents: use [LangChain](/oss/langchain/overview) for maximum flexibility and control, or [Deep Agents](/oss/langchain/overview), which offers similar flexibility and control but comes with opinionated built-in planning, filesystem tools, subagents, and context management. Both are built on [LangGraph](/oss/langgraph/overview).
+
+With LangChain, we have two core focuses:
+
+
+
+ Different providers expose different APIs, with different model parameters and different message formats.
+ Standardizing these model inputs and outputs is a core focus, making it easy for developers to switch to the most recent state-of-the-art model and avoid lock-in.
+
+
+ Models should be used for more than just *text generation* - they should also be used to orchestrate more complex flows that interact with other data. LangChain makes it easy to define [tools](/oss/langchain/tools) that LLMs can use dynamically, as well as help with parsing of and access to unstructured data.
+
+
+
+## History
+
+Given the constant rate of change in the field, LangChain has also evolved over time. Below is a brief timeline of how LangChain has changed over the years, evolving alongside what it means to build with LLMs:
+
+
+ A month before ChatGPT, **LangChain was launched as a Python package**. It consisted of two main components:
+
+ - LLM abstractions
+ - "Chains", or predetermined steps of computation to run, for common use cases. For example - RAG: run a retrieval step, then run a generation step.
+
+ The name LangChain comes from "Language" (like Language models) and "Chains".
+
+
+
+ The first general purpose agents were added to LangChain.
+
+ These general purpose agents were based on the [ReAct paper](https://arxiv.org/abs/2210.03629) (ReAct standing for Reasoning and Acting). They used LLMs to generate JSON that represented tool calls, and then parsed that JSON to determine what tools to call.
+
+
+
+ OpenAI releases a 'Chat Completion' API.
+
+ Previously, models took in strings and returned a string. In the ChatCompletions API, they evolved to take in a list of messages and return a message. Other model providers followed suit, and LangChain updated to work with lists of messages.
+
+
+
+ LangChain releases a JavaScript version.
+
+ LLMs and agents will change how applications are built, and JavaScript is the language of application developers.
+
+
+
+ **LangChain Inc. was formed as a company** around the open source LangChain project.
+
+ The main goal was to "make intelligent agents ubiquitous". The team recognized that while LangChain was a key part (LangChain made it simple to get started with LLMs), there was also a need for other components.
+
+
+
+ OpenAI releases 'function calling' in their API.
+
+ This allowed the API to explicitly generate payloads that represented tool calls. Other model providers followed suit, and LangChain was updated to use this as the preferred method for tool calling (rather than parsing JSON).
+
+
+
+ **LangSmith is released** as a closed-source platform by LangChain Inc., providing observability and evals.
+
+ The main issue with building agents is getting them to be reliable, and LangSmith, which provides observability and evals, was built to solve that need. LangChain was updated to integrate seamlessly with LangSmith.
+
+
+
+ **LangChain releases 0.1.0**, its first non-0.0.x release.
+
+ The industry matured from prototypes to production, and as such, LangChain increased its focus on stability.
+
+
+
+ **LangGraph is released** as an open-source library.
+
+ The original LangChain had two focuses: LLM abstractions, and high-level interfaces for getting started with common applications; however, it was missing a low-level orchestration layer that allowed developers to control the exact flow of their agent. Enter: LangGraph.
+
+ When building LangGraph, we learned from lessons when building LangChain and added functionality we discovered was needed: streaming, durable execution, short-term memory, human-in-the-loop, and more.
+
+
+
+ **LangChain has over 700 integrations.**
+
+ :::python
+ Integrations were split out of the core LangChain package, and either moved into their own standalone packages (for the core integrations) or `langchain-community`.
+ :::
+ :::js
+ Integrations were split out of the core LangChain package, and either moved into their own standalone packages (for the core integrations) or `@langchain/community`.
+ :::
+
+
+
+ LangGraph becomes the preferred way to build any AI application that is more than a single LLM call.
+
+ As developers tried to improve the reliability of their applications, they needed more control than the high-level interfaces provided. LangGraph provided that low-level flexibility. Most chains and agents were marked as deprecated in LangChain with guides on how to migrate them to LangGraph. There is still one high-level abstraction created in LangGraph: an agent abstraction. It is built on top of low-level LangGraph and has the same interface as the ReAct agents from LangChain.
+
+
+
+ Model APIs become more multimodal.
+
+ :::python
+ Models started to accept files, images, videos, and more. We updated the `langchain-core` message format accordingly to allow developers to specify these multimodal inputs in a standard way.
+ :::
+ :::js
+ Models started to accept files, images, videos, and more. We updated the `@langchain/core` message format accordingly to allow developers to specify these multimodal inputs in a standard way.
+ :::
+
+
+
+ **LangChain releases 1.0** with two major changes:
+
+ 1. A complete revamp of all chains and agents in `langchain`. They are replaced with a single high-level abstraction: an agent abstraction built on top of LangGraph. This is the abstraction originally created in LangGraph, now moved into LangChain.
+
+ :::python
+ If you're still using the old LangChain chains/agents and do not want to upgrade (we recommend you do), you can keep using them by installing the `langchain-classic` package.
+ :::
+ :::js
+ If you're still using the old LangChain chains/agents and do not want to upgrade (we recommend you do), you can keep using them by installing the `@langchain/classic` package.
+ :::
+
+ 2. A standard message content format: Model APIs evolved from returning messages with a simple content string to more complex output types - reasoning blocks, citations, server-side tool calls, etc. LangChain evolved its message formats to standardize these across providers.
+
+
+
+ **Deep Agents is released** as an open-source agent harness built on LangGraph.
+
+ While LangChain provides flexible building blocks for custom agent architectures, [Deep Agents](/oss/langchain/overview) offers a batteries-included option for complex, long-running tasks like research and coding. It adds built-in planning tools, a virtual filesystem with pluggable backends (in-memory, disk, LangGraph store, sandboxes), and subagent spawning for context isolation. Use Deep Agents for more autonomous agents with predefined tools; use LangChain for full control over your agent architecture.
+
diff --git a/src/oss/langchain/quickstart.mdx b/src/oss/langchain/quickstart.mdx
index c7691cb18e..b945d4f52d 100644
--- a/src/oss/langchain/quickstart.mdx
+++ b/src/oss/langchain/quickstart.mdx
@@ -1,8 +1,9 @@
---
title: Quickstart
+description: Build your first agent in minutes
---
-This quickstart takes you from a simple setup to a fully functional AI agent in just a few minutes.
+This quickstart shows you how to create a fully functional AI agent in just a few minutes.
**Using an AI coding assistant?**
@@ -11,86 +12,595 @@ This quickstart takes you from a simple setup to a fully functional AI agent in
- Install [LangChain Skills](https://github.com/langchain-ai/langchain-skills) to improve your agent's performance on LangChain ecosystem tasks.
-## Requirements
+## Install dependencies
-For these examples, you will need to:
+Install the following packages to follow along:
-* [Install](/oss/langchain/install) the LangChain package
-* Set up a [Claude (Anthropic)](https://www.anthropic.com/) account and get an API key
-* Set the `ANTHROPIC_API_KEY` environment variable in your terminal
+:::python
+
+ ```bash uv
+ uv init
+ uv add langchain deepagents
+ uv sync
+ ```
+
+ ```bash pip
+ pip install -U langchain deepagents
+ ```
+
+ ```bash venv
+ python3 -m venv .venv
+ source .venv/bin/activate
+ # Windows: .venv\Scripts\activate
+ pip install -U langchain deepagents
+ ```
+
+:::
+
+:::js
+
+ ```bash npm
+ npm install deepagents langchain @langchain/core
+ # Requires Node.js 20+
+ ```
+
+ ```bash pnpm
+ pnpm add deepagents langchain @langchain/core
+ # Requires Node.js 20+
+ ```
+
+ ```bash yarn
+ yarn add deepagents langchain @langchain/core
+ # Requires Node.js 20+
+ ```
+
+ ```bash bun
+ bun add deepagents langchain @langchain/core
+ # Requires Bun v1.0.0+
+ ```
+
+:::
+
+## Set up API keys
+
+Get an API key from [any supported model provider](/oss/integrations/providers/overview) (for example, Google Gemini or OpenAI).
-Although these examples use Claude, you can use [any supported model](/oss/integrations/providers/overview) by changing the model name in the code and setting up the appropriate API key.
+Set the API key for your chosen provider. For example:
+
+
+
+
+```bash
+export OPENAI_API_KEY="your-api-key"
+```
+
+
+
+
+```bash
+export GOOGLE_API_KEY="your-api-key"
+```
+
+
+
+
+```bash
+export ANTHROPIC_API_KEY="your-api-key"
+```
+
+
+
+
+```bash
+export OPENROUTER_API_KEY="your-api-key"
+```
+
+
+
+
+```bash
+export FIREWORKS_API_KEY="your-api-key"
+```
+
+
+
+
+```bash
+export BASETEN_API_KEY="your-api-key"
+```
+
+
+
+
+```bash
+# Local: Ollama must be running (https://ollama.com)
+# Cloud: Set your Ollama API key for hosted inference
+export OLLAMA_API_KEY="your-api-key"
+```
+
+
+
+
+```bash
+export AZURE_OPENAI_API_KEY="your-api-key"
+export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
+export AZURE_OPENAI_DEPLOYMENT_NAME="your-deployment"
+```
+
+
+
+
+```bash
+export AWS_ACCESS_KEY_ID="your-access-key"
+export AWS_SECRET_ACCESS_KEY="your-secret-key"
+export AWS_REGION="us-east-1"
+```
+
+
+
+
+```bash
+export HUGGINGFACEHUB_API_TOKEN="hf_..."
+```
+
+
+
+ See the full list of supported [chat model integrations](/oss/integrations/chat).
+
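+If you prefer to set API keys from code rather than the shell, you can prompt for them at runtime. A minimal Python sketch (assuming `OPENAI_API_KEY`; substitute your provider's variable name):
+
+:::python
+```python
+import getpass
+import os
+
+# Prompt only if the key is not already set in the environment
+if not os.environ.get("OPENAI_API_KEY"):
+    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key: ")
+```
+:::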
+
## Build a basic agent
-Start by creating a simple agent that can answer questions and call tools. The agent will use Claude Sonnet 4.6 as its language model, a basic weather function as a tool, and a simple prompt to guide its behavior.
+Start by creating a simple agent that can answer questions and call tools. The agent in this example uses your chosen language model, a basic weather function as a tool, and a simple prompt to guide its behavior:
:::python
-```python
-from langchain.agents import create_agent
-
-def get_weather(city: str) -> str:
- """Get weather for a given city."""
- return f"It's always sunny in {city}!"
-
-agent = create_agent(
- model="claude-sonnet-4-6",
- tools=[get_weather],
- system_prompt="You are a helpful assistant",
-)
-
-# Run the agent
-agent.invoke(
- {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
-)
-```
+
+ ```python OpenAI
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="openai:gpt-5.2",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python Google Gemini
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="google_genai:gemini-2.5-flash-lite",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python Claude (Anthropic)
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="claude-sonnet-4-6",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python OpenRouter
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="openrouter:anthropic/claude-sonnet-4-6",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python Fireworks
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="fireworks:accounts/fireworks/models/qwen3p5-397b-a17b",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python Baseten
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="baseten:zai-org/GLM-5",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python Ollama
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="ollama:devstral-2",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python Azure
+ import os
+ from langchain.agents import create_agent
+ from langchain.chat_models import init_chat_model
+
+ def get_weather(city: str) -> str:
+     """Get weather for a given city."""
+     return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+     model=init_chat_model(
+         "azure_openai:gpt-5.2",
+         azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
+     ),
+     tools=[get_weather],
+     system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python AWS Bedrock
+ from langchain.agents import create_agent
+
+ def get_weather(city: str) -> str:
+ """Get weather for a given city."""
+ return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+ model="anthropic.claude-3-5-sonnet-20240620-v1:0",
+ model_provider="bedrock_converse",
+ tools=[get_weather],
+ system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+ ```python HuggingFace
+ from langchain.agents import create_agent
+ from langchain.chat_models import init_chat_model
+
+ def get_weather(city: str) -> str:
+     """Get weather for a given city."""
+     return f"It's always sunny in {city}!"
+
+ agent = create_agent(
+     model=init_chat_model(
+         "microsoft/Phi-3-mini-4k-instruct",
+         model_provider="huggingface",
+         temperature=0.7,
+         max_tokens=1024,
+     ),
+     tools=[get_weather],
+     system_prompt="You are a helpful assistant",
+ )
+
+ result = agent.invoke(
+ {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
+ )
+ print(result["messages"][-1].content_blocks)
+ ```
+
:::
:::js
-```ts
-import { createAgent, tool } from "langchain";
-import * as z from "zod";
-
-const getWeather = tool(
- (input) => `It's always sunny in ${input.city}!`,
- {
- name: "get_weather",
- description: "Get the weather for a given city",
- schema: z.object({
- city: z.string().describe("The city to get the weather for"),
- }),
- }
-);
-
-const agent = createAgent({
- model: "claude-sonnet-4-6",
- tools: [getWeather],
-});
-
-console.log(
- await agent.invoke({
- messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
- })
-);
-```
+
+ ```ts OpenAI
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "gpt-5.2",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts Google Gemini
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "google-genai:gemini-2.5-flash-lite",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts Claude (Anthropic)
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "claude-sonnet-4-6",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts OpenRouter
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "openrouter:anthropic/claude-sonnet-4-6",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts Fireworks
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "fireworks:accounts/fireworks/models/qwen3p5-397b-a17b",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts Baseten
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "baseten:zai-org/GLM-5",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts Ollama
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "ollama:devstral-2",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts Azure
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "azure_openai:gpt-5.2",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+ ```ts AWS Bedrock
+ import { createAgent, tool } from "langchain";
+ import * as z from "zod";
+
+ const getWeather = tool(
+ (input) => `It's always sunny in ${input.city}!`,
+ {
+ name: "get_weather",
+ description: "Get the weather for a given city",
+ schema: z.object({
+ city: z.string().describe("The city to get the weather for"),
+ }),
+ }
+ );
+
+ const agent = createAgent({
+ model: "bedrock:gpt-5.2",
+ tools: [getWeather],
+ });
+
+ console.log(
+ await agent.invoke({
+ messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
+ })
+ );
+ ```
+
:::
+When you run the code and ask the agent about the weather in San Francisco, it combines your input with its available context.
+Recognizing that you want the weather for a specific city, it calls the weather tool with `San Francisco` as the city name.
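+
+To see how the agent arrived at its answer, you can inspect the full message history. A minimal Python sketch (assuming the `result` from the tabs above):
+
+:::python
+```python
+# Walk the conversation: user message, assistant tool call, tool result, final answer
+for message in result["messages"]:
+    print(message.type, getattr(message, "tool_calls", None) or message.content_blocks)
+```
+:::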
+
- To learn how to trace your agent with LangSmith, see the [LangSmith documentation](/langsmith/trace-with-langchain).
+ You can use [any supported model](/oss/integrations/providers/overview) by changing the model name in the code and setting up the appropriate API key.
## Build a real-world agent
-Next, build a practical weather forecasting agent that demonstrates key production concepts:
+In the following example, you will build a research agent that can answer questions about text files.
+Along the way, you will explore the following concepts:
1. **Detailed system prompts** for better agent behavior
-2. **Create tools** that integrate with external data
-3. **Model configuration** for consistent responses
-4. **[Structured output](/oss/langchain/structured-output)** for predictable results
-5. **Conversational memory** for chat-like interactions
-6. **Create and run the agent** to test the fully functional agent
-
-Let's walk through each step:
+1. **Custom tools** that integrate with external data
+1. **Model configuration** for consistent responses
+1. **Conversational memory** for chat-like interactions
+1. **Deep Agents** for built-in features
+1. **Testing** your agent
@@ -98,56 +608,56 @@ Let's walk through each step:
:::python
```python wrap
- SYSTEM_PROMPT = """You are an expert weather forecaster, who speaks in puns.
-
- You have access to two tools:
+ SYSTEM_PROMPT = """You are a literary data assistant.
- - get_weather_for_location: use this to get the weather for a specific location
- - get_user_location: use this to get the user's location
+ ## Capabilities
- If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location."""
+ - `fetch_text_from_url`: loads document text from a URL into the conversation.
+ Do not guess line counts or positions—ground them in tool results from the saved file."""
```
:::
:::js
```ts wrap
- const systemPrompt = `You are an expert weather forecaster, who speaks in puns.
+ const SYSTEM_PROMPT = `You are a literary data assistant.
- You have access to two tools:
+ ## Capabilities
- - get_weather_for_location: use this to get the weather for a specific location
- - get_user_location: use this to get the user's location
-
- If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location.`;
+ - \`fetch_text_from_url\`: loads document text from a URL into the conversation.
+ Do not guess line counts or positions—ground them in tool results from the saved file.`;
```
:::
+
- :::python
[Tools](/oss/langchain/tools) let a model interact with external systems by calling functions you define.
Tools can depend on [runtime context](/oss/langchain/runtime) and also interact with [agent memory](/oss/langchain/short-term-memory).
- Notice below how the `get_user_location` tool uses runtime context:
+ This example uses a tool to load a document from a given URL:
+ :::python
```python
- from dataclasses import dataclass
- from langchain.tools import tool, ToolRuntime
+ import urllib.error
+ import urllib.request
- @tool
- def get_weather_for_location(city: str) -> str:
- """Get weather for a given city."""
- return f"It's always sunny in {city}!"
+ from langchain.tools import tool
- @dataclass
- class Context:
- """Custom runtime context schema."""
- user_id: str
@tool
- def get_user_location(runtime: ToolRuntime[Context]) -> str:
- """Retrieve user information based on user ID."""
- user_id = runtime.context.user_id
- return "Florida" if user_id == "1" else "SF"
+ def fetch_text_from_url(url: str) -> str:
+ """Fetch the document from a URL.
+ """
+ req = urllib.request.Request(
+ url,
+ headers={"User-Agent": "Mozilla/5.0 (compatible; quickstart-research/1.0)"},
+ )
+ try:
+ with urllib.request.urlopen(req, timeout=120) as resp:
+ raw = resp.read()
+ except urllib.error.URLError as e:
+ return f"Fetch failed: {e}"
+ text = raw.decode("utf-8", errors="replace")
+ return text
```
@@ -158,34 +668,39 @@ Let's walk through each step:
:::
:::js
- [Tools](/oss/langchain/tools) are functions your agent can call. Oftentimes tools will want to connect to external systems, and will rely on runtime configuration to do so. Notice here how the `getUserLocation` tool does exactly that:
```ts
- import { tool, type ToolRuntime } from "langchain";
- import * as z from "zod";
-
- const getWeather = tool(
- (input) => `It's always sunny in ${input.city}!`,
- {
- name: "get_weather_for_location",
- description: "Get the weather for a given city",
- schema: z.object({
- city: z.string().describe("The city to get the weather for"),
- }),
- }
- );
-
- type AgentRuntime = ToolRuntime;
-
- const getUserLocation = tool(
- (_, config: AgentRuntime) => {
- const { user_id } = config.context;
- return user_id === "1" ? "Florida" : "SF";
- },
- {
- name: "get_user_location",
- description: "Retrieve user information based on user ID",
- }
+ import { tool } from "@langchain/core/tools";
+ import { z } from "zod";
+
+ const fetchTextFromUrl = tool(
+ async ({ url }: { url: string }): Promise<string> => {
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), 120_000);
+ try {
+ const resp = await fetch(url, {
+ headers: {
+ "User-Agent": "Mozilla/5.0 (compatible; quickstart-research/1.0)",
+ },
+ signal: controller.signal,
+ });
+ if (!resp.ok) {
+ return `Fetch failed: HTTP ${resp.status} ${resp.statusText}`;
+ }
+ return await resp.text();
+ } catch (e) {
+ const msg = e instanceof Error ? e.message : String(e);
+ return `Fetch failed: ${msg}`;
+ } finally {
+ clearTimeout(timeoutId);
+ }
+ },
+ {
+ name: "fetch_text_from_url",
+ description: "Fetch the document from a URL.",
+ schema: z.object({ url: z.string().url() }),
+ },
);
```
@@ -196,91 +711,256 @@ Let's walk through each step:
```ts
- const getWeather = tool(
- ({ city }) => `It's always sunny in ${city}!`,
- {
- name: "get_weather_for_location",
- description: "Get the weather for a given city",
+ import { tool } from "langchain";
+
+ const fetchTextFromUrl = tool(
+ async ({ url }: { url: string }): Promise<string> => {
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), 120_000);
+ try {
+ const resp = await fetch(url, {
+ headers: {
+ "User-Agent": "Mozilla/5.0 (compatible; quickstart-research/1.0)",
+ },
+ signal: controller.signal,
+ });
+ if (!resp.ok) {
+ return `Fetch failed: HTTP ${resp.status} ${resp.statusText}`;
+ }
+ return await resp.text();
+ } catch (e) {
+ const msg = e instanceof Error ? e.message : String(e);
+ return `Fetch failed: ${msg}`;
+ } finally {
+ clearTimeout(timeoutId);
+ }
+ },
+ {
+ name: "fetch_text_from_url",
+ description: "Fetch the document from a URL.",
schema: {
- type: "object",
- properties: {
- city: {
- type: "string",
- description: "The city to get the weather for"
- }
- },
- required: ["city"]
+ type: "object",
+ properties: {
+ url: {
+ type: "string",
+ description: "The URL of the document to fetch.",
+ format: "uri",
+ },
+ },
+ required: ["url"],
},
- }
+ },
);
- ```
+ ```
:::
- Set up your [language model](/oss/langchain/models) with the right parameters for your use case:
+ Set up your [language model](/oss/langchain/models) with the right parameters for your use case. For example:
:::python
-
- ```python
- from langchain.chat_models import init_chat_model
-
- model = init_chat_model(
- "claude-sonnet-4-6",
- temperature=0.5,
- timeout=10,
- max_tokens=1000
- )
- ```
+
+ ```python OpenAI
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "openai:gpt-5.2",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ )
+ ```
+ ```python Google Gemini
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "gemini-3.1-pro-preview",
+ model_provider="google-genai",
+ temperature=0.5,
+ timeout=600,
+ max_tokens=25000,
+ streaming=True,
+ )
+ ```
+ ```python Claude (Anthropic)
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "claude-sonnet-4-6",
+ temperature=0.5,
+ timeout=600,
+ max_tokens=25000,
+ streaming=True,
+ )
+ ```
+ ```python OpenRouter
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "openrouter:anthropic/claude-sonnet-4-6",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ )
+ ```
+ ```python Fireworks
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "fireworks:accounts/fireworks/models/qwen3p5-397b-a17b",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ )
+ ```
+ ```python Baseten
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "baseten:zai-org/GLM-5",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ )
+ ```
+ ```python Ollama
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "ollama:devstral-2",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ )
+ ```
+ ```python Azure
+ import os
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "azure_openai:gpt-5.2",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
+ )
+ ```
+ ```python AWS Bedrock
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "anthropic.claude-3-5-sonnet-20240620-v1:0",
+ model_provider="bedrock_converse",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ )
+ ```
+ ```python HuggingFace
+ from langchain.chat_models import init_chat_model
+
+ model = init_chat_model(
+ "microsoft/Phi-3-mini-4k-instruct",
+ model_provider="huggingface",
+ temperature=0.5,
+ timeout=300,
+ max_tokens=25000,
+ )
+ ```
+
:::
:::js
+
+ ```ts OpenAI
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("gpt-5.2", {
+ temperature: 0.5,
+ timeout: 300_000,
+ maxTokens: 25000,
+ });
+ ```
+ ```ts Google Gemini
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("gemini-3.1-pro-preview", {
+ modelProvider: "google-genai",
+ temperature: 0.5,
+ timeout: 600_000,
+ maxTokens: 25000,
+ });
+ ```
+ ```ts Claude (Anthropic)
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("claude-sonnet-4-6", {
+ temperature: 0.5,
+ timeout: 300_000,
+ maxTokens: 25000,
+ });
+ ```
+ ```ts OpenRouter
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("openrouter:anthropic/claude-sonnet-4-6", {
+ temperature: 0.5,
+ timeout: 300_000,
+ maxTokens: 25000,
+ });
+ ```
+ ```ts Fireworks
+ import { initChatModel } from "langchain";
- ```ts
- import { initChatModel } from "langchain";
-
- const model = await initChatModel(
- "claude-sonnet-4-6",
- { temperature: 0.5, timeout: 10, maxTokens: 1000 }
- );
- ```
+ const model = await initChatModel(
+ "fireworks:accounts/fireworks/models/qwen3p5-397b-a17b",
+ { temperature: 0.5, timeout: 300_000, maxTokens: 25000 }
+ );
+ ```
+ ```ts Baseten
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("baseten:zai-org/GLM-5", {
+ temperature: 0.5,
+ timeout: 300_000,
+ maxTokens: 25000,
+ });
+ ```
+ ```ts Ollama
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("ollama:devstral-2", {
+ temperature: 0.5,
+ timeout: 300_000,
+ maxTokens: 25000,
+ });
+ ```
+ ```ts Azure
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("azure_openai:gpt-5.2", {
+ temperature: 0.5,
+ timeout: 300_000,
+ maxTokens: 25000,
+ });
+ ```
+ ```ts AWS Bedrock
+ import { initChatModel } from "langchain";
+
+ const model = await initChatModel("bedrock:gpt-5.2", {
+ temperature: 0.5,
+ timeout: 300,
+ maxTokens: 25000,
+ });
+ ```
+
:::
Depending on the model and provider chosen, initialization parameters may vary; refer to their reference pages for details.
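+
+ Once the model is configured, you can sanity-check it with a direct call before wiring it into an agent. A minimal Python sketch (assuming the `model` from the tabs above):
+
+ :::python
+ ```python
+ # One-off call outside any agent, to confirm credentials and parameters work
+ response = model.invoke("Reply with one short sentence.")
+ print(response.content_blocks)
+ ```
+ :::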
-
- :::python
- Optionally, define a [structured response format](/oss/langchain/structured-output) if you need the agent responses to match
- a specific schema.
-
- ```python
- from dataclasses import dataclass
-
- # We use a dataclass here, but Pydantic models are also supported.
- @dataclass
- class ResponseFormat:
- """Response schema for the agent."""
- # A punny response (always required)
- punny_response: str
- # Any interesting information about the weather if available
- weather_conditions: str | None = None
- ```
- :::
-
- :::js
- Optionally, define a [structured response format](/oss/langchain/structured-output) if you need the agent responses to match
- a specific schema.
-
- ```ts
- const responseFormat = z.object({
- punny_response: z.string(),
- weather_conditions: z.string().optional(),
- });
- ```
- :::
-
+
Add [memory](/oss/langchain/short-term-memory) to your agent to maintain state across interactions. This allows
the agent to remember previous conversations and context.
@@ -304,303 +984,410 @@ Let's walk through each step:
In production, use a persistent checkpointer that saves message history to a database.
See [Add and manage memory](/oss/langgraph/add-memory#manage-short-term-memory) for more details.
+
- Now assemble your agent with all the components and run it!
+
+ Now assemble your agent with all the components and run it.
+
+ There are two frameworks for creating agents: LangChain agents and deep agents.
+ Both give you control over tools, memory, and more.
+ The main difference is that deep agents come with a range of commonly useful capabilities already built in, such as planning, file system tools, and subagents.
+
+ Use deep agents when you want maximum capability with minimal setup; choose LangChain agents when you need fine-grained control over your agent's behavior.
+
+
+ Since the code invokes the model with the entire text of The Great Gatsby, it uses a large number of tokens.
+
+ You can view example output in the next step.
+
+
+ Let's try both:
:::python
- ```python
- from langchain.agents.structured_output import ToolStrategy
+ ```python wrap
+ from langchain.agents import create_agent
+ from deepagents import create_deep_agent
agent = create_agent(
model=model,
+ tools=[fetch_text_from_url],
system_prompt=SYSTEM_PROMPT,
- tools=[get_user_location, get_weather_for_location],
- context_schema=Context,
- response_format=ToolStrategy(ResponseFormat),
- checkpointer=checkpointer
+ checkpointer=checkpointer,
)
- # `thread_id` is a unique identifier for a given conversation.
- config = {"configurable": {"thread_id": "1"}}
-
- response = agent.invoke(
- {"messages": [{"role": "user", "content": "what is the weather outside?"}]},
- config=config,
- context=Context(user_id="1")
+ deep_agent = create_deep_agent(
+ model=model,
+ tools=[fetch_text_from_url],
+ system_prompt=SYSTEM_PROMPT,
+ checkpointer=checkpointer,
)
- print(response['structured_response'])
- # ResponseFormat(
- # punny_response="Florida is still having a 'sun-derful' day! The sunshine is playing 'ray-dio' hits all day long! I'd say it's the perfect weather for some 'solar-bration'! If you were hoping for rain, I'm afraid that idea is all 'washed up' - the forecast remains 'clear-ly' brilliant!",
- # weather_conditions="It's always sunny in Florida!"
- # )
+ content = f"""Project Gutenberg hosts a full plain-text copy of F. Scott Fitzgerald's The Great Gatsby.
+ URL: https://www.gutenberg.org/files/64317/64317-0.txt
+ Answer as much as you can:
- # Note that we can continue the conversation using the same `thread_id`.
- response = agent.invoke(
- {"messages": [{"role": "user", "content": "thank you!"}]},
- config=config,
- context=Context(user_id="1")
- )
+ 1) How many lines in the complete Gutenberg file contain the substring `Gatsby` (count lines, not occurrences within a line, each line ends with a line break).
+ 2) The 1-based line number of the first line in the file that contains `Daisy`.
+ 3) A two-sentence neutral synopsis.
+
+ Do your best on (1) and (2). If at any point you realize you cannot **verify** an exact answer with
+ your available tools and reasoning, do not fabricate numbers: use `null` for that field and spell out
+ the limitation in `how_you_computed_counts`. If you encounter any errors please report what the error was and what the error message was."""
- print(response['structured_response'])
- # ResponseFormat(
- # punny_response="You're 'thund-erfully' welcome! It's always a 'breeze' to help you stay 'current' with the weather. I'm just 'cloud'-ing around waiting to 'shower' you with more forecasts whenever you need them. Have a 'sun-sational' day in the Florida sunshine!",
- # weather_conditions=None
- # )
+ agent_result = agent.invoke(
+ {"messages": [{"role": "user", "content": content}]},
+ config={"configurable": {"thread_id": "great-gatsby-lc"}},
+ )
+ deep_agent_result = deep_agent.invoke(
+ {"messages": [{"role": "user", "content": content}]},
+ config={"configurable": {"thread_id": "great-gatsby-da"}},
+ )
+ print(agent_result["messages"][-1].content_blocks)
+ print("\n")
+ print(deep_agent_result["messages"][-1].content_blocks)
```
:::
+
:::js
+
```ts
- import { createAgent } from "langchain";
-
- const agent = createAgent({
- model: "claude-sonnet-4-6",
- systemPrompt: systemPrompt,
- tools: [getUserLocation, getWeather],
- responseFormat,
- checkpointer,
+ async function main() {
+ const agent = createAgent({
+ model,
+ tools: [fetchTextFromUrl],
+ systemPrompt: SYSTEM_PROMPT,
+ checkpointer,
+ });
+
+ const deepAgent = createDeepAgent({
+ model,
+ tools: [fetchTextFromUrl],
+ systemPrompt: SYSTEM_PROMPT,
+ checkpointer,
+ });
+
+ const content = `Project Gutenberg hosts a full plain-text copy of F. Scott Fitzgerald's The Great Gatsby.
+ URL: https://www.gutenberg.org/files/64317/64317-0.txt
+
+ Answer as much as you can:
+
+ 1) How many lines in the complete Gutenberg file contain the substring \`Gatsby\` (count lines, not occurrences within a line, each line ends with a line break).
+ 2) The 1-based line number of the first line in the file that contains \`Daisy\`.
+ 3) A two-sentence neutral synopsis.
+
+ Do your best on (1) and (2). If at any point you realize you cannot **verify** an exact answer with
+ your available tools and reasoning, do not fabricate numbers: use \`null\` for that field and spell out
+ the limitation in \`how_you_computed_counts\`. If you encounter any errors please report what the error was and what the error message was.`;
+
+ const agentResult = await agent.invoke(
+ { messages: [{ role: "user", content }] },
+ { configurable: { thread_id: "great-gatsby-lc" } },
+ );
+ const deepAgentResult = await deepAgent.invoke(
+ { messages: [{ role: "user", content }] },
+ { configurable: { thread_id: "great-gatsby-da" } },
+ );
+
+ const agentMessages = agentResult.messages;
+ const deepMessages = deepAgentResult.messages;
+ console.log(agentMessages[agentMessages.length - 1]!.contentBlocks);
+ console.log("\n");
+ console.log(deepMessages[deepMessages.length - 1]!.contentBlocks);
+ }
+
+ main().catch((err) => {
+ console.error(err);
+ process.exitCode = 1;
});
+ ```
+ :::
- // `thread_id` is a unique identifier for a given conversation.
- const config = {
- configurable: { thread_id: "1" },
- context: { user_id: "1" },
- };
+
- const response = await agent.invoke(
- { messages: [{ role: "user", content: "what is the weather outside?" }] },
- config
- );
- console.log(response.structuredResponse);
- // {
- // punny_response: "Florida is still having a 'sun-derful' day ...",
- // weather_conditions: "It's always sunny in Florida!"
- // }
-
- // Note that we can continue the conversation using the same `thread_id`.
- const thankYouResponse = await agent.invoke(
- { messages: [{ role: "user", content: "thank you!" }] },
- config
+ :::python
+ ```python wrap
+ import urllib.error
+ import urllib.request
+
+ from langchain.agents import create_agent
+ from deepagents import create_deep_agent
+ from langchain.chat_models import init_chat_model
+ from langchain.tools import tool
+ from langgraph.checkpoint.memory import InMemorySaver
+
+ SYSTEM_PROMPT = """You are a literary data assistant.
+
+ ## Capabilities
+
+ - `fetch_text_from_url`: loads document text from a URL into the conversation.
+ Do not guess line counts or positions—ground them in tool results from the saved file."""
+
+
+ @tool
+ def fetch_text_from_url(url: str) -> str:
+ """Fetch the document from a URL.
+ """
+ req = urllib.request.Request(
+ url,
+ headers={"User-Agent": "Mozilla/5.0 (compatible; quickstart-research/1.0)"},
+ )
+ try:
+ with urllib.request.urlopen(req, timeout=120) as resp:
+ raw = resp.read()
+ except urllib.error.URLError as e:
+ return f"Fetch failed: {e}"
+ text = raw.decode("utf-8", errors="replace")
+ return text
+
+
+ model = init_chat_model(
+ "gemini-3.1-pro-preview",
+ model_provider="google-genai",
+ temperature=0.5,
+ timeout=600,
+ max_tokens=25000,
+ streaming=True,
+ )
+
+ checkpointer = InMemorySaver()
+
+ agent = create_agent(
+ model=model,
+ tools=[fetch_text_from_url],
+ system_prompt=SYSTEM_PROMPT,
+ checkpointer=checkpointer,
+ )
+
+ deep_agent = create_deep_agent(
+ model=model,
+ tools=[fetch_text_from_url],
+ system_prompt=SYSTEM_PROMPT,
+ checkpointer=checkpointer,
+ )
+
+ content = f"""Project Gutenberg hosts a full plain-text copy of F. Scott Fitzgerald's The Great Gatsby.
+ URL: https://www.gutenberg.org/files/64317/64317-0.txt
+
+ Answer as much as you can:
+
+ 1) How many lines in the complete Gutenberg file contain the substring `Gatsby` (count lines, not occurrences within a line, each line ends with a line break).
+ 2) The 1-based line number of the first line in the file that contains `Daisy`.
+ 3) A two-sentence neutral synopsis.
+
+ Do your best on (1) and (2). If at any point you realize you cannot **verify** an exact answer with
+ your available tools and reasoning, do not fabricate numbers: use `null` for that field and spell out
+ the limitation in `how_you_computed_counts`. If you encounter any errors please report what the error was and what the error message was."""
+
+ agent_result = agent.invoke(
+ {"messages": [{"role": "user", "content": content}]},
+ config={"configurable": {"thread_id": "great-gatsby-lc"}},
+ )
+ deep_agent_result = deep_agent.invoke(
+ {"messages": [{"role": "user", "content": content}]},
+ config={"configurable": {"thread_id": "great-gatsby-da"}},
+ )
+ print(agent_result["messages"][-1].content_blocks)
+ print("\n")
+ print(deep_agent_result["messages"][-1].content_blocks)
+ ```
+ :::
+
+ :::js
+ ```ts wrap
+ import { MemorySaver } from "@langchain/langgraph";
+ import { createDeepAgent } from "deepagents";
+ import { tool } from "@langchain/core/tools";
+ import { createAgent, initChatModel } from "langchain";
+ import { z } from "zod";
+ const SYSTEM_PROMPT = `You are a literary data assistant.
+
+ ## Capabilities
+
+ - \`fetch_text_from_url\`: loads document text from a URL into the conversation.
+ Do not guess line counts or positions—ground them in tool results from the saved file.`;
+
+ const fetchTextFromUrl = tool(
+ async ({ url }: { url: string }): Promise<string> => {
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), 120_000);
+ try {
+ const resp = await fetch(url, {
+ headers: {
+ "User-Agent": "Mozilla/5.0 (compatible; quickstart-research/1.0)",
+ },
+ signal: controller.signal,
+ });
+ if (!resp.ok) {
+ return `Fetch failed: HTTP ${resp.status} ${resp.statusText}`;
+ }
+ return await resp.text();
+ } catch (e) {
+ const msg = e instanceof Error ? e.message : String(e);
+ return `Fetch failed: ${msg}`;
+ } finally {
+ clearTimeout(timeoutId);
+ }
+ },
+ {
+ name: "fetch_text_from_url",
+ description: "Fetch the document from a URL.",
+ schema: z.object({ url: z.string().url() }),
+ },
);
- console.log(thankYouResponse.structuredResponse);
- // {
- // punny_response: "You're 'thund-erfully' welcome! ...",
- // weather_conditions: undefined
- // }
+
+ const model = await initChatModel("gemini-3.1-pro-preview", {
+ modelProvider: "google-genai",
+ temperature: 0.5,
+ timeout: 600_000,
+ maxTokens: 25000,
+ streaming: true,
+ });
+
+ const checkpointer = new MemorySaver();
+
+ async function main() {
+ const agent = createAgent({
+ model,
+ tools: [fetchTextFromUrl],
+ systemPrompt: SYSTEM_PROMPT,
+ checkpointer,
+ });
+
+ const deepAgent = createDeepAgent({
+ model,
+ tools: [fetchTextFromUrl],
+ systemPrompt: SYSTEM_PROMPT,
+ checkpointer,
+ });
+
+ const content = `Project Gutenberg hosts a full plain-text copy of F. Scott Fitzgerald's The Great Gatsby.
+ URL: https://www.gutenberg.org/files/64317/64317-0.txt
+
+ Answer as much as you can:
+
+ 1) How many lines in the complete Gutenberg file contain the substring \`Gatsby\` (count lines, not occurrences within a line, each line ends with a line break).
+ 2) The 1-based line number of the first line in the file that contains \`Daisy\`.
+ 3) A two-sentence neutral synopsis.
+
+ Do your best on (1) and (2). If at any point you realize you cannot **verify** an exact answer with
+ your available tools and reasoning, do not fabricate numbers: use \`null\` for that field and spell out
+ the limitation in \`how_you_computed_counts\`. If you encounter any errors please report what the error was and what the error message was.`;
+
+ const agentResult = await agent.invoke(
+ { messages: [{ role: "user", content }] },
+ { configurable: { thread_id: "great-gatsby-lc" } },
+ );
+ const deepAgentResult = await deepAgent.invoke(
+ { messages: [{ role: "user", content }] },
+ { configurable: { thread_id: "great-gatsby-da" } },
+ );
+
+ const agentMessages = agentResult.messages;
+ const deepMessages = deepAgentResult.messages;
+ console.log(agentMessages[agentMessages.length - 1]!.contentBlocks);
+ console.log("\n");
+ console.log(deepMessages[deepMessages.length - 1]!.contentBlocks);
+ }
+
+ main().catch((err) => {
+ console.error(err);
+ process.exitCode = 1;
+ });
```
:::
+
+
+
+
+
+ The results will vary depending on the model and the individual run.
+
+
+
+
+ ````txt wrap expandable
+ **1) Number of lines containing `Gatsby`:** `null`
+
+ **2) First line containing `Daisy`:** `null`
+
+ **3) Synopsis:**
+ The Great Gatsby follows the mysterious millionaire Jay Gatsby and his obsession with reuniting with his former lover, Daisy Buchanan, as narrated by his neighbor Nick Carraway. Set against the backdrop of the Roaring Twenties on Long Island, the novel explores themes of wealth, class, and the elusive nature of the American Dream.
+
+ **how_you_computed_counts:**
+ I successfully fetched the full text of the eBook using the `fetch_text_from_url` tool. However, because I do not have access to a code execution environment (like Python) or text-processing tools (like `grep`), I cannot deterministically split the text by line breaks, iterate through the thousands of lines, and verify the exact line numbers or match counts. LLMs cannot reliably perform exact line-counting or indexing over massive texts within their context window without external computational tools. As instructed, rather than fabricating or guessing a number, I have output `null` for the exact counts and positions.
+ ````
+
+  </Tab>
+  <Tab title="Deep agent">
+
+  ````txt wrap expandable
+ Based on the text fetched directly from the Gutenberg URL and analyzed using filesystem search tools, here are the answers to your questions:
+
+ **1) Lines containing the substring `Gatsby`**
+ **258** lines contain the exact substring `Gatsby`.
+
+ **2) First line containing `Daisy`**
+ Line **181** is the first line in the file that contains the exact substring `Daisy`.
+ *(For context, the line reads: "Buchanans. Daisy was my second cousin once removed, and I’d known Tom")*
+
+ **3) Two-sentence neutral synopsis**
+ *The Great Gatsby* follows the mysterious millionaire Jay Gatsby and his obsessive pursuit to reunite with his former lover, Daisy Buchanan, in 1920s Long Island. The story is narrated by Nick Carraway, who observes the tragic consequences of Gatsby's relentless ambition and the shallow materialism of the era's wealthy elite.
+
+ ***
+
+ **How counts were computed:**
+ When fetching the document from the URL, the file was too large for the standard output and was automatically saved to the local filesystem by the system (`/large_tool_results/x246ax2x`). I then used the `grep` tool to search the saved file for the exact literal substrings `Gatsby` and `Daisy`. The `grep` tool returned every matching line along with its 1-based line number. I manually counted the exact number of lines returned for `Gatsby` (which totaled 258) and identified the first line number returned for `Daisy` (which was 181). I also verified there were no uppercase variations (`GATSBY` or `DAISY`) that would have been missed. No errors were encountered during this process.
+ ````
+
+  </Tab>
+</Tabs>
+
+If you compare the output on the two tabs, you'll notice that the LangChain agent either returns `null` or unverified estimates: it lacks the tools to analyze a document of this size. You may also hit errors because the prompt is too long.
+
+The deep agent, on the other hand:
+
+1. **Plans its approach** using the built-in [`write_todos`](/oss/deepagents/harness#planning-capabilities) tool to break down the research task.
+1. **Loads the file** by calling the `fetch_text_from_url` tool to gather information.
+1. **Manages context** by using the built-in filesystem tools ([`grep`](/oss/deepagents/harness#virtual-filesystem-access) and [`read_file`](/oss/deepagents/harness#virtual-filesystem-access)), so the full text never has to fit into the model's context window.
+1. **Spawns subagents** as needed to delegate complex subtasks.
+
+To get a similar level of capability from a LangChain agent, you must implement these pieces yourself, which also lets you customize each one as needed; a sketch of one such piece follows.
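+
+As a rough illustration, here is a minimal sketch of one such capability: a hypothetical `grep_lines` tool that a LangChain agent could call to count and locate matching lines in text it has already fetched. The tool name, the in-memory `fetchedDocs` store, and the 20-line cap are illustrative assumptions, not part of the LangChain API.
+
+```ts
+import { tool } from "langchain";
+import * as z from "zod";
+
+// Hypothetical in-memory store of fetched documents, keyed by URL.
+// In this sketch, fetch_text_from_url would save its result here rather
+// than returning the entire file to the model.
+const fetchedDocs = new Map<string, string>();
+
+const grepLines = tool(
+  async ({ url, substring }: { url: string; substring: string }): Promise<string> => {
+    const text = fetchedDocs.get(url);
+    if (!text) {
+      return `No saved document for ${url}. Fetch it first.`;
+    }
+    // Collect the 1-based numbers of lines containing the literal substring.
+    const matches: number[] = [];
+    text.split(/\r?\n/).forEach((line, i) => {
+      if (line.includes(substring)) matches.push(i + 1);
+    });
+    if (matches.length === 0) {
+      return "No matching lines.";
+    }
+    // Report the count and a bounded sample of line numbers so the model
+    // never needs the whole file in its context window.
+    return (
+      `${matches.length} matching lines; first match on line ${matches[0]}. ` +
+      `Line numbers (first 20): ${matches.slice(0, 20).join(", ")}`
+    );
+  },
+  {
+    name: "grep_lines",
+    description:
+      "Count the lines of a previously fetched document that contain a literal substring and report their 1-based line numbers.",
+    schema: z.object({ url: z.string().url(), substring: z.string() }),
+  },
+);
+```
+
+Passing `grepLines` alongside `fetch_text_from_url` in the agent's `tools` array would let it answer the counting questions deterministically, much as the deep agent does with its built-in `grep`.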
+
-
-:::python
-```python
-from dataclasses import dataclass
-
-from langchain.agents import create_agent
-from langchain.chat_models import init_chat_model
-from langchain.tools import tool, ToolRuntime
-from langgraph.checkpoint.memory import InMemorySaver
-from langchain.agents.structured_output import ToolStrategy
-
-
-# Define system prompt
-SYSTEM_PROMPT = """You are an expert weather forecaster, who speaks in puns.
-
-You have access to two tools:
-
-- get_weather_for_location: use this to get the weather for a specific location
-- get_user_location: use this to get the user's location
-
-If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location."""
-
-# Define context schema
-@dataclass
-class Context:
- """Custom runtime context schema."""
- user_id: str
-
-# Define tools
-@tool
-def get_weather_for_location(city: str) -> str:
- """Get weather for a given city."""
- return f"It's always sunny in {city}!"
-
-@tool
-def get_user_location(runtime: ToolRuntime[Context]) -> str:
- """Retrieve user information based on user ID."""
- user_id = runtime.context.user_id
- return "Florida" if user_id == "1" else "SF"
-
-# Configure model
-model = init_chat_model(
- "claude-sonnet-4-6",
- temperature=0
-)
-
-# Define response format
-@dataclass
-class ResponseFormat:
- """Response schema for the agent."""
- # A punny response (always required)
- punny_response: str
- # Any interesting information about the weather if available
- weather_conditions: str | None = None
-
-# Set up memory
-checkpointer = InMemorySaver()
-
-# Create agent
-agent = create_agent(
- model=model,
- system_prompt=SYSTEM_PROMPT,
- tools=[get_user_location, get_weather_for_location],
- context_schema=Context,
- response_format=ToolStrategy(ResponseFormat),
- checkpointer=checkpointer
-)
-
-# Run agent
-# `thread_id` is a unique identifier for a given conversation.
-config = {"configurable": {"thread_id": "1"}}
-
-response = agent.invoke(
- {"messages": [{"role": "user", "content": "what is the weather outside?"}]},
- config=config,
- context=Context(user_id="1")
-)
-
-print(response['structured_response'])
-# ResponseFormat(
-# punny_response="Florida is still having a 'sun-derful' day! The sunshine is playing 'ray-dio' hits all day long! I'd say it's the perfect weather for some 'solar-bration'! If you were hoping for rain, I'm afraid that idea is all 'washed up' - the forecast remains 'clear-ly' brilliant!",
-# weather_conditions="It's always sunny in Florida!"
-# )
-
-
-# Note that we can continue the conversation using the same `thread_id`.
-response = agent.invoke(
- {"messages": [{"role": "user", "content": "thank you!"}]},
- config=config,
- context=Context(user_id="1")
-)
-
-print(response['structured_response'])
-# ResponseFormat(
-# punny_response="You're 'thund-erfully' welcome! It's always a 'breeze' to help you stay 'current' with the weather. I'm just 'cloud'-ing around waiting to 'shower' you with more forecasts whenever you need them. Have a 'sun-sational' day in the Florida sunshine!",
-# weather_conditions=None
-# )
-```
-:::
+## Trace agent calls
+
+Most applications you build with LangChain involve many LLM calls. As these applications grow more complex, it becomes important to inspect exactly what is going on inside your agent. The best way to do that is with [LangSmith](https://smith.langchain.com).
-:::js
-```ts
-import { createAgent, tool, initChatModel, type ToolRuntime } from "langchain";
-import { MemorySaver } from "@langchain/langgraph";
-import * as z from "zod";
-
-// Define system prompt
-const systemPrompt = `You are an expert weather forecaster, who speaks in puns.
-
-You have access to two tools:
-
-- get_weather_for_location: use this to get the weather for a specific location
-- get_user_location: use this to get the user's location
-
-If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location.`;
-
-// Define tools
-const getWeather = tool(
- ({ city }) => `It's always sunny in ${city}!`,
- {
- name: "get_weather_for_location",
- description: "Get the weather for a given city",
- schema: z.object({
- city: z.string(),
- }),
- }
-);
-
-type AgentRuntime = ToolRuntime;
-
-const getUserLocation = tool(
- (_, config: AgentRuntime) => {
- const { user_id } = config.context;
- return user_id === "1" ? "Florida" : "SF";
- },
- {
- name: "get_user_location",
- description: "Retrieve user information based on user ID",
- schema: z.object({}),
- }
-);
-
-// Configure model
-const model = await initChatModel(
- "claude-sonnet-4-6",
- { temperature: 0 }
-);
-
-// Define response format
-const responseFormat = z.object({
- punny_response: z.string(),
- weather_conditions: z.string().optional(),
-});
-
-// Set up memory
-const checkpointer = new MemorySaver();
-
-// Create agent
-const agent = createAgent({
- model,
- systemPrompt,
- responseFormat,
- checkpointer,
- tools: [getUserLocation, getWeather],
-});
-
-// Run agent
-// `thread_id` is a unique identifier for a given conversation.
-const config = {
- configurable: { thread_id: "1" },
- context: { user_id: "1" },
-};
-
-const response = await agent.invoke(
- { messages: [{ role: "user", content: "what is the weather outside?" }] },
- config
-);
-console.log(response.structuredResponse);
-// {
-// punny_response: "Florida is still having a 'sun-derful' day! The sunshine is playing 'ray-dio' hits all day long! I'd say it's the perfect weather for some 'solar-bration'! If you were hoping for rain, I'm afraid that idea is all 'washed up' - the forecast remains 'clear-ly' brilliant!",
-// weather_conditions: "It's always sunny in Florida!"
-// }
-
-// Note that we can continue the conversation using the same `thread_id`.
-const thankYouResponse = await agent.invoke(
- { messages: [{ role: "user", content: "thank you!" }] },
- config
-);
-console.log(thankYouResponse.structuredResponse);
-// {
-// punny_response: "You're 'thund-erfully' welcome! It's always a 'breeze' to help you stay 'current' with the weather. I'm just 'cloud'-ing around waiting to 'shower' you with more forecasts whenever you need them. Have a 'sun-sational' day in the Florida sunshine!",
-// weather_conditions: undefined
-// }
+Sign up for a [LangSmith](https://smith.langchain.com) account, then set the following environment variables to start logging traces:
+
+```shell
+export LANGSMITH_TRACING="true"
+export LANGSMITH_API_KEY="..."
```
-:::
-
+
+Once set, run your script again, then inspect what happened during your agent calls in [LangSmith](https://smith.langchain.com).
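+
+If exporting shell variables is inconvenient, you can also set the same values from code at the very top of your script. A minimal sketch, assuming a Node.js runtime:
+
+```ts
+// Set LangSmith tracing variables programmatically (Node.js).
+// Set these before invoking your agent so traces are captured.
+process.env.LANGSMITH_TRACING = "true";
+process.env.LANGSMITH_API_KEY = "..."; // your LangSmith API key
+```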
- To learn how to trace your agent with LangSmith, see the [LangSmith documentation](/langsmith/trace-with-langchain).
+ To learn more about tracing your agent with LangSmith, see the [LangSmith documentation](/langsmith/trace-with-langchain).
-Congratulations! You now have an AI agent that can:
+## Next steps
+
+You now have agents that can:
- **Understand context** and remember conversations
-- **Use multiple tools** intelligently
+- **Use tools** intelligently
- **Provide structured responses** in a consistent format
- **Handle user-specific information** through context
- **Maintain conversation state** across interactions
+- **Plan, research, and synthesize** (deep agents only)
+
+Continue with:
+
+- **LangChain agents**: [Add and manage memory](/oss/langgraph/add-memory#manage-short-term-memory), [deploy to production](/oss/langgraph/deploy)
+- **Deep Agents**: [Customization options](/oss/deepagents/customization), [persistent memory](/oss/deepagents/long-term-memory), [deploy to production](/oss/langgraph/deploy)