📹 Coming from the YouTube video? This repository has been updated for LangGraph V1 (released October 2025). The core concepts remain the same, but the implementation has changed to use the new `Runtime[Context]` pattern instead of `RunnableConfig`. See the Migration Guide section below for details on what changed.
This project demonstrates how to implement runtime configuration patterns in ReAct Agents and supervisor-style architectures using LangGraph. It shows the progression from hardcoded agents to flexible, configurable systems.
This demo showcases three approaches to agent configuration:
- Hardcoded ReAct agent
  - Fixed model, prompt, and tools
  - Simple but inflexible
- Dynamic runtime configuration via `Runtime[Context]`
  - Configurable models, prompts, and tools
  - Uses the `runtime.get("configurable")` pattern (temporary workaround)
- Supervisor orchestrating multiple configured agents
  - Each subagent uses the same configuration pattern
  - Shows how configuration scales to complex architectures
⚠️ Note on Multi-Agent Pattern: This demo uses `create_supervisor` for easier visualization in LangGraph Studio. However, the recommended pattern for production is to use subagents as tools (the tool calling pattern), which provides better control flow and type safety. We're keeping `create_supervisor` in this demo until LangGraph Studio adds better support for visualizing subagent-as-tools architectures.
The official LangChain multi-agent documentation recommends using the tool calling pattern where a supervisor agent calls other agents as tools. This provides:
- ✅ Centralized control flow: All routing passes through the calling agent
- ✅ Better type safety: Tools have explicit input/output schemas
- ✅ Cleaner context management: Fine-grained control over what each agent sees
- ✅ Easier debugging: Clear execution path through supervisor
Example of recommended pattern:
```python
from langchain.agents import create_agent
from langchain.tools import tool

@tool("subagent_name", description="What this agent does")
def call_subagent(query: str):
    # subagent is a previously built agent graph
    result = subagent.invoke({"messages": [{"role": "user", "content": query}]})
    return result["messages"][-1].content

supervisor = create_agent(model=model, tools=[call_subagent])
```

This repository currently uses `create_supervisor` instead of the recommended tool calling pattern because:
- Better Studio visualization: LangGraph Studio has excellent support for visualizing supervisor graphs
- Clearer architecture: Easier to see how agents interact in the UI
- Educational clarity: Simpler for learning multi-agent concepts
For production systems, we recommend migrating to the tool calling pattern as shown in the multi-agent documentation.
- Start simple: Hardcoded values for quick prototyping
- Add flexibility: Runtime configuration for different use cases
- Scale complexity: Same configuration patterns across multiple agents
- Context schemas: Typed classes (Pydantic, TypedDict, dataclass, etc.) define available configuration options
- Runtime parameter: `runtime: Runtime[Context]` provides typed access (coming soon in the API)
- Current workaround: Use `runtime.get("configurable", {})` to access the configuration dict
- Future pattern: Direct `runtime.context` access for typed configuration values
- Default values: Defined in your chosen schema type (e.g., Pydantic Fields, dataclass defaults)
- Reusable functions: The same `make_graph(runtime)` pattern everywhere (see the sketch below)
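
Because the current workaround only calls `.get("configurable", {})` on the runtime argument, the factory can also be driven with a plain dict, which is how the supervisor example later in this README builds its subagents. Below is a minimal sketch assuming the `make_graph` and `Context` from the migration guide; the override values are illustrative, and omitted keys fall back to the defaults in the `.get()` calls:

```python
import asyncio

# Hypothetical overrides; any key not supplied here falls back to the
# defaults hardcoded in make_graph's .get() calls.
graph = asyncio.run(make_graph({
    "configurable": {
        "model": "openai:gpt-4o",
        "system_prompt": "You are a concise research assistant.",
    }
}))

result = graph.invoke({"messages": [{"role": "user", "content": "What's today's date?"}]})
print(result["messages"][-1].content)
```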
If you're coming from the YouTube video (recorded Jul 2, 2025), here are the key changes:
Old Pattern (from video):
```python
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import create_react_agent

async def make_graph(config: RunnableConfig):
    configurable = config.get("configurable", {})
    llm = configurable.get("model", "openai/gpt-4")
    selected_tools = configurable.get("selected_tools", ["get_todays_date"])
    prompt = configurable.get("system_prompt", "You are a helpful assistant.")

    graph = create_react_agent(
        model=load_chat_model(llm),
        tools=get_tools(selected_tools),
        prompt=prompt
    )
    return graph
```

New Pattern (current):
```python
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langgraph.runtime import Runtime
from pydantic import BaseModel, Field

# Define Context schema (this example uses Pydantic, but TypedDict, dataclass, etc. also work)
class Context(BaseModel):
    model: str = Field(default="openai:gpt-4")
    selected_tools: list[str] = Field(default=["get_todays_date"])
    system_prompt: str = Field(default="You are a helpful assistant.")

async def make_graph(runtime: Runtime[Context]):
    # Temporary workaround: extract configurable dict from runtime
    # (runtime.context access coming soon in future API update)
    configurable = runtime.get("configurable", {})

    graph = create_agent(
        model=init_chat_model(configurable.get("model", "openai:gpt-4")),
        tools=get_tools(configurable.get("selected_tools", ["get_todays_date"])),
        prompt=configurable.get("system_prompt", "You are a helpful assistant."),
        context_schema=Context
    )
    return graph
```
⚠️ Note on Context Access: The `runtime.context` pattern isn't accessible in functions quite yet, but is coming in a future API update. That will be the recommended pattern for rebuilding graphs at runtime using typed context. Today we use a slight workaround by extracting the configurable dict with `runtime.get("configurable", {})` and providing default values.
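
For reference, here is a hypothetical sketch of what that future pattern might look like inside the same `make_graph` (reusing the imports and `Context` schema from the example above); the exact API may differ once `runtime.context` access lands:

```python
async def make_graph(runtime: Runtime[Context]):
    # Hypothetical future pattern: typed attribute access on runtime.context
    # replaces dict lookups and manual defaults.
    ctx = runtime.context
    return create_agent(
        model=init_chat_model(ctx.model),
        tools=get_tools(ctx.selected_tools),
        prompt=ctx.system_prompt,
        context_schema=Context,
    )
```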
Note on Agent Creation: The video used create_react_agent from langgraph.prebuilt. This has been replaced with create_agent from langchain.agents as part of LangGraph V1's consolidation of agent functionality into the LangChain library.
Note on Model Loading: The video used a custom load_chat_model() helper function with provider/model format (e.g., "openai/gpt-4"). This has been replaced with direct use of init_chat_model() which now supports provider:model format (e.g., "openai:gpt-5"), eliminating the need for the helper function.
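
For illustration, a minimal sketch of the new loading style (the model name is just an example and assumes the matching provider package, e.g. langchain-openai, is installed):

```python
from langchain.chat_models import init_chat_model

# New style: a single "provider:model" string selects both provider and model.
model = init_chat_model("openai:gpt-4o")

# Equivalent explicit form, if you prefer separate arguments.
same_model = init_chat_model("gpt-4o", model_provider="openai")
```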
| Aspect | Old (Video) | New (Current) |
|---|---|---|
| Config Type | `RunnableConfig` (dict-based) | `Runtime[Context]` (typed object) |
| Function | `create_react_agent` | `create_agent` |
| Import | `from langgraph.prebuilt` | `from langchain.agents` |
| Schema Definition | Optional `Configuration` class | Required `Context` class (examples use Pydantic) |
| Schema Usage | Optional `config_schema` param | Required `context_schema` param |
| Access | `config.get("configurable", {})` | `runtime.get("configurable", {})` (temp workaround) |
| Future Access | N/A | `runtime.context` (coming soon) |
| Type Safety | Runtime checks | Compile-time type hints via Context schemas |
LangGraph V1 introduced stronger typing and cleaner APIs:
- ✅ Better IDE support - autocomplete and type hints with typed Context schemas
- ✅ Type safety - catch configuration errors with explicit Context definitions
- ✅ Clearer APIs - explicit context schemas define available options
- ✅ Flexibility - use Pydantic, TypedDict, dataclass, or any typed class
- ✅ Future ready - positioned for `runtime.context` direct access when the API is updated
The core concepts from the video remain valid - the way you think about configuring agents and building supervisor architectures hasn't changed, just the implementation details.
```python
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langgraph.runtime import Runtime
from pydantic import BaseModel, Field

# Define Context schema (using Pydantic in this example)
class Context(BaseModel):
    model: str = Field(default="anthropic:claude-haiku-4-5")
    system_prompt: str = Field(default="You are a helpful AI assistant.")
    selected_tools: list[str] = Field(default=["get_todays_date"])

async def make_graph(runtime: Runtime[Context]):
    # Current workaround: extract configurable dict
    # Future: runtime.context will provide typed access
    configurable = runtime.get("configurable", {})
    return create_agent(
        model=init_chat_model(configurable.get("model", "anthropic:claude-haiku-4-5")),
        tools=get_tools(configurable.get("selected_tools", ["get_todays_date"])),  # get_tools: repo helper
        prompt=configurable.get("system_prompt", "You are a helpful AI assistant."),
        context_schema=Context
    )
```

The supervisor creates its subagents with the same pattern, passing each one its own slice of the configuration:

```python
async def create_subagents(runtime: Runtime[SupervisorContext]):
    # Current workaround: extract configurable dict
    configurable = runtime.get("configurable", {})
    # Create subagents with their own configurations
    finance_agent = await make_graph({
        "configurable": {
            "model": configurable.get("finance_model", "anthropic:claude-haiku-4-5"),
            "system_prompt": configurable.get("finance_system_prompt", "You are a financial research assistant."),
            "selected_tools": configurable.get("finance_tools", ["finance_research", "basic_research", "get_todays_date"])
        }
    })
    # ... more agents using same pattern
```

- IDE autocomplete and type hints with typed Context schemas
- Compile-time type checking with typed Context objects
- Optional validation with Pydantic if desired
- Future: Direct `runtime.context` access (coming soon)
- Clean Context schemas define available configuration options
- Straightforward configuration extraction pattern
- Easy to understand and modify
- Same pattern for single and multi-agent systems
- Reusable `make_graph()` function
- Predictable configuration structure
- Runtime configuration changes
- Easy to add new configuration options
- Works with LangGraph Studio
- Pattern works from simple to complex architectures
- No architectural debt when scaling up
- Clean separation of concerns
Assuming you have already installed LangGraph Studio, to set up:
1. Install dependencies:

   ```bash
   # Create and activate a virtual environment and install dependencies.
   uv sync
   source .venv/bin/activate
   ```

2. Create a `.env` file:

   ```bash
   cp .env.example .env
   ```

3. Define required API keys in your `.env` file.

4. Run LangGraph Studio locally:

   ```bash
   langgraph dev
   ```
The primary search tool uses Tavily. Create an API key here.
The default value for the model is shown below:

```yaml
model: anthropic/claude-3-5-sonnet-latest
```

Follow the instructions below to get set up, or pick one of the additional options.
To use Anthropic's chat models:
- Sign up for an Anthropic API key if you haven't already.
- Once you have your API key, add it to your `.env` file:
```
ANTHROPIC_API_KEY=your-api-key
```
To use OpenAI's chat models:
- Sign up for an OpenAI API key.
- Once you have your API key, add it to your `.env` file:
```
OPENAI_API_KEY=your-api-key
```
- Customize whatever you'd like in the code.
- Open the folder in LangGraph Studio!
Examine agents/react_agent/graph_without_context.py to see the hardcoded baseline.
Look at agents/react_agent/graph.py to see how runtime configuration is added while keeping the code simple.
Explore agents/supervisor/ to see how the same runtime configuration patterns work with multiple specialized agents.
Note: The supervisor examples use `create_supervisor` for visualization purposes. See the Multi-Agent Patterns section above for the recommended production approach using subagents as tools.
- Direct dictionary access over complex configuration classes
- Default values for graceful fallbacks
- Consistent patterns across different complexity levels
- Runtime flexibility without architectural complexity
While iterating on your configuration:
- Test different models and prompts via configuration
- Add new tools by updating the `selected_tools` list (see the sketch after this list)
- Create new agent types using the same configuration pattern
- Debug configuration issues in LangGraph Studio
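
To illustrate the tool-adding tip above, here is a minimal, hypothetical sketch of a name-based tool registry. The repo's actual `get_tools()` helper isn't shown in this README, so this is only a stand-in that mirrors how the examples look tools up by string name:

```python
from datetime import date
from langchain.tools import tool

@tool
def get_todays_date() -> str:
    """Return today's date as an ISO-formatted string."""
    return date.today().isoformat()

# Hypothetical registry in the spirit of the repo's get_tools() helper:
# configuration stays a simple list of names, resolved to tool objects here.
TOOL_REGISTRY = {"get_todays_date": get_todays_date}

def get_tools(names: list[str]) -> list:
    return [TOOL_REGISTRY[name] for name in names if name in TOOL_REGISTRY]
```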
You can find the latest LangChain, LangGraph and LangSmith documentation here, including examples and references for configuration patterns.
This repository demonstrates configuration patterns for LangGraph V1 (October 2025). It has been updated from the original YouTube video version to use the new `Runtime[Context]` pattern with typed context schemas (using Pydantic in the examples).
Current Implementation Note: The code uses runtime.get("configurable", {}) as a temporary workaround to access configuration values. The recommended runtime.context pattern for direct typed access is coming in a future API update and will be the standard way to rebuild graphs at runtime using typed context.
The migration from RunnableConfig → Runtime[Context] represents LangGraph's evolution toward stronger typing and better developer experience. Note that while this repository uses Pydantic for context schemas, you can use TypedDict, dataclass, or any typed class.