LangGraph-UP Monorepo showcases how to build production-ready LangGraph agents using the latest LangChain & LangGraph v1.0 ecosystem, organized as a clean monorepo with shared libraries and multiple agent applications.
- 🔌 Universal Model Loading - OpenRouter, Qwen, QwQ, SiliconFlow with automatic registration
- 🤖 Multi-Agent Orchestration - Supervisor & deep research patterns with specialized sub-agents
- 🔗 LangChain v1.0 Middleware - Model switching, file masking, summarization with the v1.0 pattern
- 🔬 Deep Research Agent - Advanced research workflow with deepagents integration
- 🧪 Developer Experience - Hot reload, comprehensive testing, strict linting, PyPI publishing
- 🚀 Deployment Ready - LangGraph Cloud configurations included
- 🌍 Global Ready - Region-based provider configuration (PRC/International)
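Model identifiers throughout this README follow a `provider:model` convention (for example `openrouter:anthropic/claude-sonnet-4`). As a minimal stdlib-only sketch of that convention (not the devkits implementation), only the first `:` separates provider from model, since the model part may itself contain `/`:

```python
def parse_model_spec(spec: str) -> tuple[str, str]:
    """Split a 'provider:model' spec into its two parts.

    The model part may contain '/' (e.g. vendor/model paths),
    so only the first ':' is treated as the separator.
    """
    provider, sep, model = spec.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider:model', got {spec!r}")
    return provider, model

print(parse_model_spec("openrouter:anthropic/claude-sonnet-4"))
# → ('openrouter', 'anthropic/claude-sonnet-4')
```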
```bash
# Install UV package manager
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and setup
git clone https://github.com/webup/langgraph-up-monorepo.git
cd langgraph-up-monorepo
uv sync --dev
```

```python
from langgraph_up_devkits import load_chat_model

# Zero-setup model loading across providers
model = load_chat_model("openrouter:anthropic/claude-sonnet-4")
# model = load_chat_model("qwen:qwen-flash")
# model = load_chat_model("siliconflow:Qwen/Qwen3-8B")

# Start building your agent
from sample_agent import make_graph

app = make_graph()
result = await app.ainvoke({"messages": [{"role": "user", "content": "What's 25 * 4?"}]})
```

This monorepo includes two complete agent examples demonstrating different patterns:
Multi-agent system with a coordinator that delegates to specialized sub-agents.
```bash
make dev sample-agent
```

Features:

- Supervisor-based coordination
- Math expert (add, multiply operations)
- Research expert (web search capabilities)
- Cross-agent handoffs
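Cross-agent handoffs work by giving each agent a tool whose only job is to route the conversation to a named peer. The repo's actual implementation lives in `sample_agent.tools`; the sketch below is a hypothetical stdlib-only illustration of the idea, modeling the routing command as a plain dict:

```python
from typing import Callable

def create_handoff_tool(target_agent: str) -> Callable[[str], dict]:
    """Return a 'tool' that, when called, signals a transfer to target_agent.

    Real handoff tools return a routing command understood by the graph;
    here we model that as a plain dict.
    """
    def handoff(reason: str) -> dict:
        return {"goto": target_agent, "reason": reason}

    handoff.__name__ = f"transfer_to_{target_agent}"
    return handoff

to_research = create_handoff_tool("research_expert")
print(to_research("needs web search"))
# → {'goto': 'research_expert', 'reason': 'needs web search'}
```

The supervisor sees these tools like any others, so delegation decisions stay inside the normal tool-calling loop.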
Advanced research workflow with virtual file system and structured planning.
```bash
make dev sample-deep-agent
```

Features:

- Deep web search with content extraction
- Virtual file system for document management
- Think tool for strategic TODO planning
- Research & critique sub-agents
- FileSystemMaskMiddleware to optimize token usage
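The token-saving idea behind `FileSystemMaskMiddleware` is simple: before messages reach the model, large file bodies in the virtual file system are swapped for short placeholders, and restored afterwards. A hypothetical stdlib-only sketch of the masking step (the real middleware's API differs):

```python
def mask_files(files: dict[str, str], max_chars: int = 200) -> dict[str, str]:
    """Replace oversized file contents with a compact placeholder."""
    masked = {}
    for path, content in files.items():
        if len(content) > max_chars:
            masked[path] = f"<file {path}: {len(content)} chars masked>"
        else:
            masked[path] = content
    return masked

vfs = {"notes.md": "short", "report.md": "x" * 5000}
print(mask_files(vfs)["report.md"])
# → <file report.md: 5000 chars masked>
```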
```text
langgraph-up-monorepo/
├── libs/
│   ├── shared/                    # Shared utilities
│   │   └── common/                # Common helper functions
│   └── langgraph-up-devkits/      # 🎯 Core framework (published to PyPI)
│       ├── utils/providers.py     # ✅ Multi-provider model loading
│       ├── middleware/            # ✅ Custom middleware (model, file, summary)
│       ├── tools/                 # ✅ Web search, deep search, MCP integration
│       └── context/               # ✅ Context schemas & aware prompts
├── apps/
│   ├── sample-agent/              # 🤖 Supervisor pattern (math + research agents)
│   │   ├── src/sample_agent/
│   │   │   ├── graph.py           # ✅ Main supervisor graph
│   │   │   ├── subagents/         # ✅ Math & research experts
│   │   │   └── tools/             # ✅ Agent-specific tools
│   │   └── langgraph.json         # ✅ Deployment config
│   └── sample-deep-agent/         # 🔬 Deep research pattern (VFS + think tool)
│       ├── src/sample_deep_agent/
│       │   ├── graph.py           # ✅ Deep agent with research workflow
│       │   ├── subagents.py       # ✅ Research & critique experts
│       │   └── prompts.py         # ✅ Structured TODO planning prompts
│       └── langgraph.json         # ✅ Deployment config
├── pyproject.toml                 # Root dependencies
├── Makefile                       # Development commands
├── PUBLISHING.md                  # PyPI publishing guide
└── .github/workflows/             # CI/CD pipeline
```
Automatic provider registration with fallback support:
```python
from langgraph_up_devkits import load_chat_model

# Anthropic via OpenRouter (preferred)
model = load_chat_model("openrouter:anthropic/claude-sonnet-4")

# Qwen models (PRC/International regions)
model = load_chat_model("qwen:qwen-flash")

# SiliconFlow models
model = load_chat_model("siliconflow:Qwen/Qwen3-8B")

# With configuration
model = load_chat_model(
    "openrouter:anthropic/claude-sonnet-4",
    temperature=0.7,
    max_tokens=1000,
)
```

```python
from sample_agent.subagents import math, research
from sample_agent.tools import create_handoff_tool

# Create specialized agents
math_agent = math.make_graph()
research_agent = research.make_graph()

# Enable handoffs between agents
math_to_research = create_handoff_tool("research_expert")
research_to_math = create_handoff_tool("math_expert")
```

Built-in middleware for dynamic model switching, state management, and behavior modification using the LangChain v1.0 middleware pattern:
```python
from langchain.agents import create_agent
from langgraph_up_devkits import load_chat_model
from langgraph_up_devkits.middleware import (
    FileSystemMaskMiddleware,
    ModelProviderMiddleware,
)

# Model provider middleware for automatic switching
model_middleware = ModelProviderMiddleware()

# File system middleware to mask large file content from LLM context
fs_middleware = FileSystemMaskMiddleware()

agent = create_agent(
    model=load_chat_model("openrouter:gpt-4o"),  # Fallback model
    tools=[web_search, deep_web_search],
    middleware=[model_middleware, fs_middleware],
)

# Context specifies a different model - the middleware switches automatically
context = {"model": "siliconflow:Qwen/Qwen3-8B"}
result = await agent.ainvoke(messages, context=context)
```

Available Middleware (v1.0 Compatible):

- `ModelProviderMiddleware` - Dynamic model switching based on context
- `FileSystemMaskMiddleware` - Masks virtual file systems from the LLM to save tokens
- `SummarizationMiddleware` - Automatic message summarization for long conversations
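The summarization idea is that once a conversation exceeds some budget, older messages collapse into a single summary while recent turns stay verbatim. A hypothetical stdlib-only sketch of where that compaction happens (the real middleware calls an LLM to produce the summary):

```python
def compact_history(messages: list[str], keep_last: int = 2, budget: int = 100) -> list[str]:
    """If total length exceeds budget, fold older messages into one summary stub.

    Real summarization calls an LLM; here the 'summary' is just a count,
    to show where the compaction happens.
    """
    if sum(len(m) for m in messages) <= budget:
        return messages
    older, recent = messages[:-keep_last], messages[-keep_last:]
    summary = f"[summary of {len(older)} earlier messages]"
    return [summary, *recent]

history = ["hello " * 10, "long question " * 10, "answer", "follow-up"]
print(compact_history(history))
# → ['[summary of 2 earlier messages]', 'answer', 'follow-up']
```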
Key Changes in v1.0:

- Migrated to the LangChain v1.0 middleware pattern with `before_model()` and `after_model()` hooks
- Compatible with the `langchain.agents.create_agent` middleware system
- Improved state management and model-switching reliability
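The hook pattern above can be pictured as a pipeline where each middleware may inspect or rewrite state around the model call. The stdlib-only sketch below mirrors the hook names but is not the actual LangChain classes; a common convention (assumed here) is to run `before_model` hooks in order and `after_model` hooks in reverse:

```python
class Middleware:
    def before_model(self, state: dict) -> dict:
        return state

    def after_model(self, state: dict) -> dict:
        return state

class Uppercase(Middleware):
    def before_model(self, state):
        state["prompt"] = state["prompt"].upper()
        return state

class Tag(Middleware):
    def after_model(self, state):
        state["output"] += " [tagged]"
        return state

def run(state: dict, middlewares: list[Middleware]) -> dict:
    for mw in middlewares:                        # before hooks, in order
        state = mw.before_model(state)
    state["output"] = f"echo: {state['prompt']}"  # stand-in for the model call
    for mw in reversed(middlewares):              # after hooks, in reverse
        state = mw.after_model(state)
    return state

print(run({"prompt": "hi"}, [Uppercase(), Tag()]))
# → {'prompt': 'HI', 'output': 'echo: HI [tagged]'}
```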
For detailed documentation on additional features like middleware, tools, and utilities, see:

- Framework Documentation: `libs/langgraph-up-devkits/README.md`
- Agent Examples: `apps/sample-agent/README.md`
See the Makefile for the complete command reference.
```bash
# Testing
make test                  # Run all tests
make test_libs             # Test libraries only
make test_apps             # Test applications only
make unit sample-agent     # Test specific app

# Code Quality
make lint                  # Run linters (ruff + mypy)
make format                # Format code

# Development
make dev sample-agent                                # Start dev server with browser
make dev sample-agent -- --no-browser                # Start without browser
make dev sample-agent -- --host 0.0.0.0 --port 3000  # Custom host/port

# Publishing (langgraph-up-devkits)
make build_devkits         # Build distribution packages
make check_devkits         # Validate package
make release_test_devkits  # Build and publish to Test PyPI
make release_devkits       # Build and publish to PyPI
```

See PUBLISHING.md for a detailed publishing guide.
- `libs/` - Reusable packages shared across agents
- `apps/` - Individual agent implementations
- Shared dependencies - Managed in the root `pyproject.toml`
- Agent-specific dependencies - Managed in each app-level `pyproject.toml`
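As an illustration of that split, an app-level manifest can declare the shared devkits package as a workspace dependency so UV resolves it from `libs/` rather than PyPI. The fragment below is hypothetical (names and versions are placeholders, not this repo's actual settings):

```toml
# apps/my-agent/pyproject.toml -- hypothetical fragment
[project]
name = "my-agent"
version = "0.1.0"
dependencies = [
    "langgraph-up-devkits",  # shared framework from libs/
]

[tool.uv.sources]
langgraph-up-devkits = { workspace = true }
```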
```bash
# Copy sample agent structure
cp -r apps/sample-agent apps/my-agent

# Update configuration
# Edit apps/my-agent/langgraph.json
# Edit apps/my-agent/pyproject.toml

# Implement apps/my-agent/src/my_agent/graph.py
```

```bash
# Run all tests (126+ tests in libs, 10+ in apps)
make test

# Run tests for specific components
make test_libs         # Test libraries only
make test_apps         # Test applications only
make unit sample-agent # Test specific app
```

Common issues and detailed troubleshooting guides are available in:
- Setup Issues: `libs/langgraph-up-devkits/README.md#troubleshooting`
- Agent Issues: `apps/sample-agent/README.md#troubleshooting`
```bash
git clone https://github.com/your-org/langgraph-up-monorepo.git
cd langgraph-up-monorepo
uv sync
make lint  # Ensure code quality
make test  # Run test suite
```

- Type Safety - Strict mypy checking enabled
- Code Style - Ruff formatting and linting
- Testing - High test coverage required
- Documentation - Comprehensive docstrings
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Ensure tests pass (`make test`)
- Ensure linting passes (`make lint`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- UV - Fast Python package management
- langchain-dev-utils - Development utilities for LangChain
- OpenRouter - Multi-provider model access
Built with ❤️ for the LangGraph community

Ready to build production-grade agents? Get started →