---
id: langchain-vs-langgraph.md
title: >
  LangChain vs LangGraph: A Developer's Guide to Choosing Your AI Frameworks
author: Min Yin
date: 2025-09-09
desc: Compare LangChain and LangGraph for LLM apps. See how they differ in architecture, state management, and use cases — plus when to use each.
cover: assets.zilliz.com/Chat_GPT_Image_Sep_9_2025_09_42_12_PM_1_49154d15cc.png
tag: Engineering
recommend: false
publishToMedium: true
tags: Milvus, vector database, langchain, langgraph
meta_keywords: Milvus, vector database, langchain, langgraph, langchain vs langgraph
meta_title: >
  LangChain vs LangGraph: A Developer's Guide to Choosing Your AI Frameworks
origin: https://milvus.io/blog/langchain-vs-langgraph.md
---

When building with large language models (LLMs), the framework you choose has a huge impact on your development experience. A good framework streamlines workflows, reduces boilerplate, and makes it easier to move from prototype to production. A poor fit can do the opposite, adding friction and technical debt.

Two of the most popular options today are [**LangChain**](https://python.langchain.com/docs/introduction/) and [**LangGraph**](https://langchain-ai.github.io/langgraph/) — both open source and created by the LangChain team. LangChain focuses on component orchestration and workflow automation, making it a good fit for common use cases like retrieval-augmented generation ([RAG](https://zilliz.com/learn/Retrieval-Augmented-Generation)). LangGraph builds on top of LangChain with a graph-based architecture, which is better suited for stateful applications, complex decision-making, and multi-agent coordination.

In this guide, we’ll compare the two frameworks side by side: how they work, their strengths, and the types of projects they’re best suited for. By the end, you’ll have a clearer sense of which one makes the most sense for your needs.

## LangChain: Your Component Library and LCEL Orchestration Powerhouse

[**LangChain**](https://github.com/langchain-ai/langchain) is an open-source framework designed to make building LLM applications more manageable. You can think of it as the middleware that sits between your model (say, OpenAI’s [GPT-5](https://milvus.io/blog/gpt-5-review-accuracy-up-prices-down-code-strong-but-bad-for-creativity.md) or Anthropic’s [Claude](https://milvus.io/blog/claude-code-vs-gemini-cli-which-ones-the-real-dev-co-pilot.md)) and your actual app. Its main job is to help you _chain together_ all the moving parts: prompts, external APIs, [vector databases](https://zilliz.com/learn/what-is-vector-database), and custom business logic.

Take RAG as an example. Instead of wiring everything from scratch, LangChain gives you ready-made abstractions to connect an LLM with a vector store (like [Milvus](https://milvus.io/) or [Zilliz Cloud](https://zilliz.com/cloud)), run semantic search, and feed results back into your prompt. Beyond that, it offers utilities for prompt templates, agents that can call tools, and orchestration layers that keep complex workflows maintainable.

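Here is a rough sketch of what that wiring can look like with LCEL. It assumes the `langchain-milvus` and `langchain-openai` integration packages, a Milvus instance reachable at the default local URI, and an existing collection named `docs`; the prompt text and the `k` value are illustrative.

```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_milvus import Milvus
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Point at an existing Milvus collection and expose it as a retriever
vector_store = Milvus(
    embedding_function=OpenAIEmbeddings(),
    collection_name="docs",
    connection_args={"uri": "http://localhost:19530"},
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})

def format_docs(docs):
    # Concatenate retrieved chunks into one context string
    return "\n\n".join(doc.page_content for doc in docs)

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

# Retrieve, stuff the context into the prompt, and generate an answer
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

answer = rag_chain.invoke("What is a vector database?")
```

The whole pipeline stays a single declarative expression, which is exactly the style the Architecture section below digs into.
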
**What makes LangChain stand out?**

- **Rich component library** – Document loaders, text splitters, vector storage connectors, model interfaces, and more.

- **LCEL (LangChain Expression Language) orchestration** – A declarative way to mix and match components with less boilerplate.

- **Easy integration** – Works smoothly with APIs, databases, and third-party tools.

- **Mature ecosystem** – Strong documentation, examples, and an active community.

## LangGraph: Your Go-To for Stateful Agent Systems

[LangGraph](https://github.com/langchain-ai/langgraph) is a specialized extension of LangChain that focuses on stateful applications. Instead of writing workflows as a linear script, you define them as a graph of nodes and edges — essentially a state machine. Each node represents an action (like calling an LLM, querying a database, or checking a condition), while the edges define how the flow moves depending on the results. This structure makes it easier to handle loops, branching, and retries without your code turning into a tangle of if/else statements.

This approach is especially useful for advanced use cases such as copilots and [autonomous agents](https://zilliz.com/blog/what-exactly-are-ai-agents-why-openai-and-langchain-are-fighting-over-their-definition). These systems often need to keep track of memory, handle unexpected results, or make decisions dynamically. By modeling the logic explicitly as a graph, LangGraph makes these behaviors more transparent and maintainable.

**Core features of LangGraph include:**

- **Graph-based architecture** – Native support for loops, backtracking, and complex control flows.

- **State management** – Centralized state ensures context is preserved across steps.

- **Multi-agent support** – Built for scenarios where multiple agents collaborate or coordinate.

- **Debugging tools** – Visualization and debugging via LangGraph Studio and LangSmith to trace graph execution.

## LangChain vs LangGraph: Technical Deep Dive

### Architecture

LangChain uses **LCEL (LangChain Expression Language)** to wire components together in a linear pipeline. It’s declarative, readable, and great for straightforward workflows like RAG.

```
# LangChain LCEL orchestration example
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # requires the langchain-openai package

prompt = ChatPromptTemplate.from_template("Please answer the following question: {question}")
model = ChatOpenAI()

# LCEL chain orchestration
chain = prompt | model

# Run the chain
result = chain.invoke({"question": "What is artificial intelligence?"})
```

LangGraph takes a different approach: workflows are expressed as a **graph of nodes and edges**. Each node defines an action, and the graph engine manages state, branching, and retries.

```
# LangGraph graph structure definition
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    messages: list
    current_step: str

def node_a(state: State) -> State:
    return {"messages": state["messages"] + ["Processing A"], "current_step": "A"}

def node_b(state: State) -> State:
    return {"messages": state["messages"] + ["Processing B"], "current_step": "B"}

graph = StateGraph(State)
graph.add_node("node_a", node_a)
graph.add_node("node_b", node_b)
graph.add_edge(START, "node_a")
graph.add_edge("node_a", "node_b")
graph.add_edge("node_b", END)

# Compile and run the graph
app = graph.compile()
result = app.invoke({"messages": [], "current_step": ""})
```

Where LCEL gives you a clean linear pipeline, LangGraph natively supports loops, branching, and conditional flows. This makes it a stronger fit for **agent-like systems** or multi-step interactions that don’t follow a straight line.

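To make the difference concrete, here is a small, self-contained sketch of a loop driven by a conditional edge. The `write`/`review` nodes and the length-based stopping rule are invented for illustration; `add_conditional_edges`, `START`, and `END` are the relevant LangGraph primitives.

```
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class ReviewState(TypedDict):
    draft: str
    needs_revision: bool

def write(state: ReviewState) -> dict:
    # Produce or rewrite a draft (placeholder logic)
    return {"draft": state["draft"] + " revised"}

def review(state: ReviewState) -> dict:
    # Decide whether another pass is needed (placeholder heuristic)
    return {"needs_revision": len(state["draft"]) < 30}

def route(state: ReviewState) -> str:
    # Loop back to the writer until the reviewer is satisfied
    return "write" if state["needs_revision"] else END

graph = StateGraph(ReviewState)
graph.add_node("write", write)
graph.add_node("review", review)
graph.add_edge(START, "write")
graph.add_edge("write", "review")
graph.add_conditional_edges("review", route)

app = graph.compile()
result = app.invoke({"draft": "first pass", "needs_revision": True})
```

Expressing the same write/review loop in a purely linear chain would mean hand-rolled while loops around the chain itself.
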
### State Management

- **LangChain**: Uses Memory components for passing context. Works fine for simple multi-turn conversations or linear workflows.

- **LangGraph**: Uses a centralized state system that supports rollbacks, backtracking, and detailed history. Essential for long-running, stateful apps where context continuity matters (a minimal sketch follows below).

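Here is a minimal sketch of that centralized state in practice: the graph is compiled with an in-memory checkpointer, and reusing the same `thread_id` across calls preserves the conversation history between turns. The `chat` node and its echo-style reply are placeholders; `MemorySaver`, reducers via `Annotated`, and the `thread_id` config are standard LangGraph mechanisms.

```
import operator
from typing import Annotated, TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class ChatState(TypedDict):
    # operator.add turns this field into an append-only message history
    messages: Annotated[list, operator.add]

def chat(state: ChatState) -> dict:
    # Placeholder: append a canned reply instead of calling an LLM
    return {"messages": [f"echo: {state['messages'][-1]}"]}

graph = StateGraph(ChatState)
graph.add_node("chat", chat)
graph.add_edge(START, "chat")
graph.add_edge("chat", END)

# The checkpointer persists state between invocations of the same thread
app = graph.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "user-42"}}

app.invoke({"messages": ["Hello"]}, config)
result = app.invoke({"messages": ["How are you?"]}, config)
print(result["messages"])  # both turns plus both replies
```

In a real deployment the checkpointer would typically be a database-backed implementation and the node would call an LLM, but the continuity mechanism is the same.
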
### Execution Models

| **Feature**           | **LangChain**                      | **LangGraph**              |
| --------------------- | ---------------------------------- | -------------------------- |
| Execution Mode        | Linear orchestration               | Stateful (graph) execution |
| Loop Support          | Limited support                    | Native support             |
| Conditional Branching | Implemented via RunnableBranch     | Native support             |
| Exception Handling    | Implemented via fallbacks/retries  | Built-in support           |
| Error Processing      | Chain-style propagation            | Node-level processing      |

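For reference, the LangChain column of that table roughly corresponds to patterns like the one below: `RunnableBranch` routes between sub-chains, and `with_fallbacks()` catches failures. The task field, prompts, and canned fallback are illustrative choices, not a fixed API.

```
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableBranch, RunnableLambda
from langchain_openai import ChatOpenAI

model = ChatOpenAI()

summarize = ChatPromptTemplate.from_template("Summarize: {text}") | model
translate = ChatPromptTemplate.from_template("Translate to English: {text}") | model

# Conditional branching in LCEL: route on a field of the input dict
branch = RunnableBranch(
    (lambda x: x["task"] == "summarize", summarize),
    translate,  # default branch
)

# Exception handling in LCEL: fall back to a canned reply if the call fails
safe_chain = branch.with_fallbacks([RunnableLambda(lambda _: "Sorry, please try again later.")])

result = safe_chain.invoke({"task": "summarize", "text": "LangChain wires components into pipelines..."})
```
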
## Real-World Use Cases: When to Use Each

Frameworks aren’t just about architecture — they shine in different situations. So the real question is: when should you reach for LangChain, and when does LangGraph make more sense? Let’s look at some practical scenarios.

### When LangChain Is Your Best Choice

#### 1. Straightforward Task Processing

LangChain is a great fit when you need to transform input into output without heavy state tracking or branching logic. For example, a browser extension that translates selected text:

```
# Implementing simple text translation using LCEL
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # requires the langchain-openai package

prompt = ChatPromptTemplate.from_template("Translate the following text to English: {text}")
model = ChatOpenAI()
chain = prompt | model

result = chain.invoke({"text": "Hello, World!"})
```

In this case, there’s no need for memory, retries, or multi-step reasoning — just efficient input-to-output transformation. LangChain keeps the code clean and focused.

#### 2. Foundation Components

LangChain provides rich basic components that can serve as building blocks for constructing more complex systems. Even when teams build with LangGraph, they often rely on LangChain’s mature components. The framework offers:

- **Vector store connectors** – Unified interfaces for databases like Milvus and Zilliz Cloud.

- **Document loaders & splitters** – For PDFs, web pages, and other content.

- **Model interfaces** – Standardized wrappers for popular LLMs.

This makes LangChain not only a workflow tool but also a reliable component library for larger systems.

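A minimal ingestion sketch shows how those pieces snap together. It assumes the `langchain-community`, `langchain-text-splitters`, `langchain-openai`, and `langchain-milvus` packages; the URL, chunk sizes, and collection name are placeholders.

```
from langchain_community.document_loaders import WebBaseLoader
from langchain_milvus import Milvus
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load a page, split it into chunks, and index the chunks in Milvus
docs = WebBaseLoader("https://milvus.io/docs/overview.md").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

vector_store = Milvus.from_documents(
    chunks,
    embedding=OpenAIEmbeddings(),
    collection_name="docs",
    connection_args={"uri": "http://localhost:19530"},
)
```
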
### When LangGraph Is the Clear Winner

#### 1. Sophisticated Agent Development

LangGraph excels when you’re building advanced agent systems that need to loop, branch, and adapt. Here’s a simplified agent pattern:

```
# Simplified agent pattern: decide_action and execute_tool stand in for an
# LLM-driven policy and real tool calls (application-specific, not shown here).
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    messages: list
    action: str
    result: str

def agent(state: AgentState) -> dict:
    # Agent thinks and decides the next action
    return {"action": decide_action(state["messages"])}

def tool_executor(state: AgentState) -> dict:
    # Execute the chosen tool and record the result
    result = execute_tool(state["action"])
    return {"result": result, "messages": state["messages"] + [result]}

def should_continue(state: AgentState) -> str:
    # Loop to the tool executor until the agent decides it is done
    return "tool_executor" if state["action"] != "finish" else END

# Build the agent graph
graph = StateGraph(AgentState)
graph.add_node("agent", agent)
graph.add_node("tool_executor", tool_executor)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue)
graph.add_edge("tool_executor", "agent")
app = graph.compile()
```

**Example:** GitHub Copilot X's advanced features illustrate the kind of agent architecture this pattern enables. The system understands developer intent, breaks complex programming tasks into manageable steps, executes multiple operations in sequence, learns from intermediate results, and adapts its approach based on what it discovers along the way.

#### 2. Advanced Multi-Turn Conversation Systems

LangGraph's state management capabilities make it a natural fit for building complex multi-turn conversation systems:

- **Customer service systems**: Capable of tracking conversation history, understanding context, and providing coherent responses

- **Educational tutoring systems**: Adjusting teaching strategies based on students' answer history

- **Interview simulation systems**: Adjusting interview questions based on candidates' responses

**Example:** Duolingo's AI tutoring features illustrate this pattern well. The system continuously analyzes each learner's response patterns, identifies specific knowledge gaps, tracks learning progress across multiple sessions, and delivers personalized language learning experiences that adapt to individual learning styles, pace preferences, and areas of difficulty.

#### 3. Multi-Agent Collaboration Ecosystems

LangGraph natively supports ecosystems where multiple agents coordinate. Examples include:

- **Team collaboration simulation**: Multiple roles collaboratively completing complex tasks

- **Debate systems**: Multiple roles holding different viewpoints engaging in debate

- **Creative collaboration platforms**: Intelligent agents from different professional domains creating together

This approach has shown promise in research domains like drug discovery, where agents model different areas of expertise and combine results into new insights.

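As a toy sketch of that pattern: two agents share one state object, and the researcher keeps adding notes until a routing rule hands off to the writer. The roles, their canned logic, and the hand-off threshold are invented for illustration; the graph wiring itself is standard LangGraph.

```
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END

class TeamState(TypedDict):
    notes: Annotated[list, operator.add]  # shared scratchpad both agents append to
    report: str

def researcher(state: TeamState) -> dict:
    # Placeholder for an LLM-backed research agent
    return {"notes": ["finding #" + str(len(state["notes"]) + 1)]}

def writer(state: TeamState) -> dict:
    # Placeholder for an LLM-backed writing agent
    return {"report": "Summary of " + ", ".join(state["notes"])}

def route(state: TeamState) -> str:
    # Keep researching until there is enough material, then hand off to the writer
    return "writer" if len(state["notes"]) >= 3 else "researcher"

graph = StateGraph(TeamState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
graph.add_edge(START, "researcher")
graph.add_conditional_edges("researcher", route)
graph.add_edge("writer", END)

team = graph.compile()
result = team.invoke({"notes": [], "report": ""})
print(result["report"])  # Summary of finding #1, finding #2, finding #3
```
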
### Making the Right Choice: A Decision Framework

| **Project Characteristics**     | **Recommended Framework** | **Why**                                    |
| ------------------------------- | ------------------------- | ------------------------------------------ |
| Simple One-Time Tasks           | LangChain                 | LCEL orchestration is simple and intuitive |
| Text Translation/Optimization   | LangChain                 | No need for complex state management       |
| Agent Systems                   | LangGraph                 | Powerful state management and control flow |
| Multi-Turn Conversation Systems | LangGraph                 | State tracking and context management      |
| Multi-Agent Collaboration       | LangGraph                 | Native support for multi-node interaction  |
| Systems Requiring Tool Usage    | LangGraph                 | Flexible tool invocation flow control      |

## Conclusion

In most cases, LangChain and LangGraph are complementary, not competitors. LangChain gives you a solid foundation of components and LCEL orchestration — great for quick prototypes, stateless tasks, or projects that just need clean input-to-output flows. LangGraph steps in when your application outgrows that linear model and requires state, branching, or multiple agents working together.

- **Choose LangChain** if your focus is on straightforward tasks like text translation, document processing, or data transformation, where each request stands on its own.

- **Choose LangGraph** if you’re building multi-turn conversations, agent systems, or collaborative agent ecosystems where context and decision-making matter.

- **Mix both** for the best results. Many production systems start with LangChain’s components (document loaders, vector store connectors, model interfaces) and then add LangGraph to manage stateful, graph-driven logic on top.

Ultimately, it’s less about chasing trends and more about aligning the framework with your project’s genuine needs. Both ecosystems are evolving rapidly, driven by active communities and robust documentation. By understanding where each fits, you’ll be better equipped to design applications that scale — whether you’re building your first RAG pipeline with Milvus or orchestrating a complex multi-agent system.
