langchain-samples/assistants-demo

LangGraph Configuration Patterns Demo

📹 Coming from the YouTube video? This repository has been updated for LangGraph V1 (released October 2025). The core concepts remain the same, but the implementation has changed to use the new Runtime[Context] pattern instead of RunnableConfig. See the Migration Guide section below for details on what changed.

This project demonstrates how to implement runtime configuration patterns in ReAct Agents and supervisor-style architectures using LangGraph. It shows the progression from hardcoded agents to flexible, configurable systems.

Configuration Pattern Progression

This demo showcases three approaches to agent configuration:

1. No Configuration (agents/react_agent/graph_without_context.py)

  • Hardcoded ReAct agent
  • Fixed model, prompt, and tools
  • Simple but inflexible

2. Single Agent Configuration (agents/react_agent/graph.py)

  • Dynamic runtime configuration via Runtime[Context]
  • Configurable models, prompts, and tools
  • Uses runtime.get("configurable") pattern (temporary workaround)

3. Multi-Agent Configuration (agents/supervisor/)

  • Supervisor orchestrating multiple configured agents
  • Each subagent uses the same configuration pattern
  • Shows how configuration scales to complex architectures

⚠️ Note on Multi-Agent Pattern: This demo uses create_supervisor for easier visualization in LangGraph Studio. However, the recommended pattern for production is to use subagents as tools (tool calling pattern), which provides better control flow and type safety. We're keeping create_supervisor in this demo until LangGraph Studio adds better support for visualizing subagent-as-tools architectures.

Multi-Agent Patterns

Recommended: Subagents as Tools

The official LangChain multi-agent documentation recommends using the tool calling pattern where a supervisor agent calls other agents as tools. This provides:

  • Centralized control flow: All routing passes through the calling agent
  • Better type safety: Tools have explicit input/output schemas
  • Cleaner context management: Fine-grained control over what each agent sees
  • Easier debugging: Clear execution path through supervisor

Example of recommended pattern:

from langchain.tools import tool
from langchain.agents import create_agent

@tool("subagent_name", description="What this agent does")
def call_subagent(query: str) -> str:
    # `subagent` is any previously built agent graph
    result = subagent.invoke({"messages": [{"role": "user", "content": query}]})
    return result["messages"][-1].content

supervisor = create_agent(model=model, tools=[call_subagent])

Why This Demo Uses create_supervisor

This repository currently uses create_supervisor instead of the recommended tool calling pattern because:

  1. Better Studio visualization: LangGraph Studio has excellent support for visualizing supervisor graphs
  2. Clearer architecture: Easier to see how agents interact in the UI
  3. Educational clarity: Simpler for learning multi-agent concepts

For production systems, we recommend migrating to the tool calling pattern as shown in the multi-agent documentation.

What it demonstrates

Configuration Evolution

  1. Start simple: Hardcoded values for quick prototyping
  2. Add flexibility: Runtime configuration for different use cases
  3. Scale complexity: Same configuration patterns across multiple agents

Key Configuration Patterns

  • Context schemas: Typed classes (Pydantic, TypedDict, dataclass, etc.) define available configuration options
  • Runtime parameter: runtime: Runtime[Context] provides typed access (coming soon in API)
  • Current workaround: Use runtime.get("configurable", {}) to access configuration dict
  • Future pattern: Direct runtime.context access for typed configuration values
  • Default values: Defined in your chosen schema type (e.g., Pydantic Fields, dataclass defaults)
  • Reusable functions: Same make_graph(runtime) pattern everywhere
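To make the "Pydantic, TypedDict, dataclass, etc." point concrete, here is a hypothetical Context schema built with a plain dataclass instead of Pydantic. The field names mirror the examples in this README; `asdict` shows how such a schema maps onto the configurable-dict workaround:

```python
from dataclasses import dataclass, field, asdict

# Hypothetical Context schema using a stdlib dataclass instead of Pydantic.
# Mutable defaults (like the tools list) need default_factory.
@dataclass
class Context:
    model: str = "openai:gpt-4"
    selected_tools: list[str] = field(default_factory=lambda: ["get_todays_date"])
    system_prompt: str = "You are a helpful assistant."

ctx = Context(model="anthropic:claude-haiku-4-5")
configurable = asdict(ctx)  # dict form, matching the runtime.get("configurable") workaround
print(configurable["model"])
print(configurable["selected_tools"])
```

Because the schema is just a typed class, your IDE can autocomplete field names while the dict form stays compatible with the current workaround.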

Migration from Video to Current Version

If you're coming from the YouTube video (recorded Jul 2, 2025), here are the key changes:

What Changed in LangGraph V1

Old Pattern (from video):

from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import create_react_agent

async def make_graph(config: RunnableConfig):
    configurable = config.get("configurable", {})
    llm = configurable.get("model", "openai/gpt-4")
    selected_tools = configurable.get("selected_tools", ["get_todays_date"])
    prompt = configurable.get("system_prompt", "You are a helpful assistant.")

    graph = create_react_agent(
        model=load_chat_model(llm),  # load_chat_model and get_tools were project-local helpers
        tools=get_tools(selected_tools),
        prompt=prompt
    )
    return graph

New Pattern (current):

from langgraph.runtime import Runtime
from pydantic import BaseModel, Field
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model

# Define Context schema (this example uses Pydantic, but TypedDict, dataclass, etc. also work)
class Context(BaseModel):
    model: str = Field(default="openai:gpt-4")
    selected_tools: list[str] = Field(default=["get_todays_date"])
    system_prompt: str = Field(default="You are a helpful assistant.")

async def make_graph(runtime: Runtime[Context]):
    # Temporary workaround: extract configurable dict from runtime
    # (runtime.context access coming soon in future API update)
    configurable = runtime.get("configurable", {})

    graph = create_agent(
        model=init_chat_model(configurable.get("model", "openai:gpt-4")),
        tools=get_tools(configurable.get("selected_tools", ["get_todays_date"])),  # get_tools is a project helper
        prompt=configurable.get("system_prompt", "You are a helpful assistant."),
        context_schema=Context
    )
    return graph

⚠️ Note on Context Access: The runtime.context pattern isn't accessible in functions quite yet, but is coming in a future API update. That will be the recommended pattern for rebuilding graphs at runtime using typed context. Today we use a slight workaround by extracting the configurable dict with runtime.get("configurable", {}) and providing default values.

Note on Agent Creation: The video used create_react_agent from langgraph.prebuilt. This has been replaced with create_agent from langchain.agents as part of LangGraph V1's consolidation of agent functionality into the LangChain library.

Note on Model Loading: The video used a custom load_chat_model() helper function with provider/model format (e.g., "openai/gpt-4"). This has been replaced with direct use of init_chat_model() which now supports provider:model format (e.g., "openai:gpt-5"), eliminating the need for the helper function.

Key Differences

| Aspect | Old (Video) | New (Current) |
| --- | --- | --- |
| Config Type | RunnableConfig (dict-based) | Runtime[Context] (typed object) |
| Function | create_react_agent | create_agent |
| Import | from langgraph.prebuilt | from langchain.agents |
| Schema Definition | Optional Configuration class | Required Context class (examples use Pydantic) |
| Schema Usage | Optional config_schema param | Required context_schema param |
| Access | config.get("configurable", {}) | runtime.get("configurable", {}) (temp workaround) |
| Future Access | N/A | runtime.context (coming soon) |
| Type Safety | Runtime checks | Compile-time type hints via Context schemas |

Why the Change?

LangGraph V1 introduced stronger typing and cleaner APIs:

  • Better IDE support - autocomplete and type hints with typed Context schemas
  • Type safety - catch configuration errors with explicit Context definitions
  • Clearer APIs - explicit context schemas define available options
  • Flexibility - use Pydantic, TypedDict, dataclass, or any typed class
  • Future ready - positioned for runtime.context direct access when API is updated

The core concepts from the video remain valid - the way you think about configuring agents and building supervisor architectures hasn't changed, just the implementation details.

Configuration in Action

Single Agent Configuration

from langgraph.runtime import Runtime
from pydantic import BaseModel, Field
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model

# Define Context schema (using Pydantic in this example)
class Context(BaseModel):
    model: str = Field(default="anthropic:claude-haiku-4-5")
    system_prompt: str = Field(default="You are a helpful AI assistant.")
    selected_tools: list[str] = Field(default=["get_todays_date"])

async def make_graph(runtime: Runtime[Context]):
    # Current workaround: extract configurable dict
    # Future: runtime.context will provide typed access
    configurable = runtime.get("configurable", {})

    return create_agent(
        model=init_chat_model(configurable.get("model", "anthropic:claude-haiku-4-5")),
        tools=get_tools(configurable.get("selected_tools", ["get_todays_date"])),  # get_tools is a project helper
        prompt=configurable.get("system_prompt", "You are a helpful AI assistant."),
        context_schema=Context
    )
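The extraction step above is ordinary dict lookups with fallbacks. This stand-alone sketch (no LangGraph imports; defaults copied from the Context schema above) shows how omitted keys resolve to their defaults:

```python
# Defaults copied from the Context schema in the example above.
DEFAULTS = {
    "model": "anthropic:claude-haiku-4-5",
    "system_prompt": "You are a helpful AI assistant.",
    "selected_tools": ["get_todays_date"],
}

def resolve_config(configurable: dict) -> dict:
    # Every key falls back to its schema default when the caller omits it.
    return {k: configurable.get(k, v) for k, v in DEFAULTS.items()}

print(resolve_config({}))                                  # all defaults
print(resolve_config({"model": "openai:gpt-4"})["model"])  # caller override wins
```

This is why a graph invoked with an empty configurable dict still behaves sensibly: every option degrades gracefully to its default.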

Multi-Agent Configuration

async def create_subagents(runtime: Runtime[SupervisorContext]):
    # Current workaround: extract configurable dict
    configurable = runtime.get("configurable", {})
    
    # Create subagents with their own configurations
    finance_agent = await make_graph({
        "configurable": {
            "model": configurable.get("finance_model", "anthropic:claude-haiku-4-5"),
            "system_prompt": configurable.get("finance_system_prompt", "You are a financial research assistant."),
            "selected_tools": configurable.get("finance_tools", ["finance_research", "basic_research", "get_todays_date"])
        }
    })
    # ... more agents using same pattern
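The remapping above, from supervisor-level prefixed keys ("finance_model") to the generic keys each subagent's schema expects ("model"), can be factored into a small pure-Python helper. The helper name is hypothetical; the defaults are copied from the snippet above:

```python
# Defaults copied from the finance subagent snippet above.
FINANCE_DEFAULTS = {
    "model": "anthropic:claude-haiku-4-5",
    "system_prompt": "You are a financial research assistant.",
    "selected_tools": ["finance_research", "basic_research", "get_todays_date"],
}

def finance_configurable(configurable: dict) -> dict:
    # Supervisor-level keys are prefixed ("finance_model", "finance_tools");
    # the subagent receives them under the generic names its Context expects.
    return {"configurable": {
        "model": configurable.get("finance_model", FINANCE_DEFAULTS["model"]),
        "system_prompt": configurable.get("finance_system_prompt", FINANCE_DEFAULTS["system_prompt"]),
        "selected_tools": configurable.get("finance_tools", FINANCE_DEFAULTS["selected_tools"]),
    }}

print(finance_configurable({})["configurable"]["model"])
print(finance_configurable({"finance_model": "openai:gpt-4"})["configurable"]["model"])
```

Factoring the remapping out keeps create_subagents short as more subagents are added, since each one only needs its own prefix and defaults.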

Why This Approach?

Type Safety

  • IDE autocomplete and type hints with typed Context schemas
  • Compile-time type checking with typed Context objects
  • Optional validation with Pydantic if desired
  • Future: Direct runtime.context access (coming soon)

Simplicity

  • Clean Context schemas define available configuration options
  • Straightforward configuration extraction pattern
  • Easy to understand and modify

Consistency

  • Same pattern for single and multi-agent systems
  • Reusable make_graph() function
  • Predictable configuration structure

Flexibility

  • Runtime configuration changes
  • Easy to add new configuration options
  • Works with LangGraph Studio

Scalability

  • Pattern works from simple to complex architectures
  • No architectural debt when scaling up
  • Clean separation of concerns

Getting Started

Assuming you have already installed LangGraph Studio, to set up:

  1. Install dependencies:

    # Create and activate a virtual environment and install dependencies.
    uv sync
    source .venv/bin/activate
  2. Create a .env file:

    cp .env.example .env
  3. Define required API keys in your .env file.

  4. Run LangGraph Studio Locally

    langgraph dev

The primary search tool uses Tavily; you will need a Tavily API key.

Setup Model

The default value for model is shown below:

model: anthropic:claude-haiku-4-5

Follow the instructions below to get set up, or pick one of the additional options.

Anthropic

To use Anthropic's chat models:

  1. Sign up for an Anthropic API key if you haven't already.
  2. Once you have your API key, add it to your .env file:
ANTHROPIC_API_KEY=your-api-key

OpenAI

To use OpenAI's chat models:

  1. Sign up for an OpenAI API key.
  2. Once you have your API key, add it to your .env file:
OPENAI_API_KEY=your-api-key
  3. Customize whatever you'd like in the code.
  4. Open the folder in LangGraph Studio!

Exploring the Configuration Patterns

Start with No Configuration

Examine agents/react_agent/graph_without_context.py to see the hardcoded baseline.

Add Single Agent Configuration

Look at agents/react_agent/graph.py to see how runtime configuration is added while keeping the code simple.

Scale to Multi-Agent Configuration

Explore agents/supervisor/ to see how the same runtime configuration patterns work with multiple specialized agents.

Note: The supervisor examples use create_supervisor for visualization purposes. See the Multi-Agent Patterns section above for the recommended production approach using subagents as tools.

Development

Configuration Best Practices Shown

  • Direct dictionary access over complex configuration classes
  • Default values for graceful fallbacks
  • Consistent patterns across different complexity levels
  • Runtime flexibility without architectural complexity

Local Development

While iterating on your configuration:

  • Test different models and prompts via configuration
  • Add new tools by updating the selected_tools list
  • Create new agent types using the same configuration pattern
  • Debug configuration issues in LangGraph Studio

Documentation

You can find the latest LangChain, LangGraph and LangSmith documentation here, including examples and references for configuration patterns.

About This Repository

This repository demonstrates configuration patterns for LangGraph V1 (October 2025). It has been updated from the original YouTube video version to use the new Runtime[Context] pattern with typed context schemas (using Pydantic in the examples).

Current Implementation Note: The code uses runtime.get("configurable", {}) as a temporary workaround to access configuration values. The recommended runtime.context pattern for direct typed access is coming in a future API update and will be the standard way to rebuild graphs at runtime using typed context.

The migration from RunnableConfig to Runtime[Context] represents LangGraph's evolution toward stronger typing and better developer experience. Note that while this repository uses Pydantic for context schemas, you can use TypedDict, dataclass, or any typed class.

