ashish-bmistry/langgraph-multi-step-agent-system
LangGraph Agent System

A sophisticated agentic system built with LangGraph that demonstrates advanced routing, execution planning, and reflection-based improvements.

Overview

This POC (Proof of Concept) showcases a multi-layered agent architecture that:

  • Intelligently Routes queries to different processing paths (direct LLM, simple ReAct, or complex planning)
  • Executes Plans with tool integration and step-by-step execution tracking
  • Reflects & Improves through self-reflection mechanisms for better results
  • Persists State using PostgreSQL checkpoint storage for conversation continuity

Architecture

Core Components

  • Classifier Node: Determines query complexity and routes accordingly

    • direct_llm: Simple questions answered directly
    • simple_react: ReAct pattern for straightforward tool use
    • complex_planning: Multi-step planning with reflection
  • Planning System: For complex queries

    • Planner: Creates action plans
    • Executor: Executes planned steps with tool integration
    • Reflection: Analyzes results and prevents infinite loops
  • Tool Integration: Extensible tool system with built-in tools

    • add_numbers: Basic arithmetic
    • calculate_fuel_efficiency: Domain-specific calculation
  • State Management: Dual-state architecture

    • ConversationState: Long-lived chat history
    • ExecutionState: Per-run execution context
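The two built-in tools can be sketched as plain Python functions. The signatures below are assumptions for illustration; the repository presumably registers them with LangChain's tool machinery so the executor can invoke them by name:

```python
def add_numbers(a: float, b: float) -> float:
    """Basic arithmetic: return the sum of two numbers."""
    return a + b

def calculate_fuel_efficiency(distance_km: float, fuel_liters: float) -> float:
    """Domain-specific calculation: kilometers travelled per liter of fuel."""
    if fuel_liters <= 0:
        raise ValueError("fuel_liters must be positive")
    return distance_km / fuel_liters
```

Keeping tools as small, typed, side-effect-free functions is what makes the tool system easy to extend: a new capability is just another function plus a registration.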

State Persistence

Uses PostgreSQL with LangGraph's checkpoint system for:

  • Thread-based conversation persistence
  • State recovery across sessions
  • Complete execution history

Installation

Prerequisites

  • Python 3.10+
  • PostgreSQL database (for checkpointing)
  • Google API Key (for Gemini LLM)

Setup

  1. Clone the repository

    git clone <repository-url>
    cd langgraph-agent-system
  2. Create a virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Configure environment variables

    cp .env.example .env

    Edit .env with your actual credentials:

    # Required: Google API key for Gemini LLM
    GOOGLE_API_KEY=your_actual_api_key_here
    
    # Required: PostgreSQL credentials
    DB_PASSWORD=your_secure_database_password
    DB_USER=admin
    DB_HOST=localhost
    DB_PORT=5432
    DB_NAME=langgraph_db
    

    ⚠️ Important: Never commit the .env file to version control (it is already listed in .gitignore)

  5. Set up PostgreSQL

    # Create database
    createdb langgraph_db
    
    # Test connection
    psql -U admin -d langgraph_db -c "SELECT 1;"

Configuration

Environment Variables (Required)

All sensitive data must be configured via environment variables in .env file:

Variable        Required  Description                             Example
GOOGLE_API_KEY  Yes       Google API key for Gemini LLM           AIzaSy...
DB_PASSWORD     Yes       PostgreSQL admin password               (your secure password)
DB_USER         No        PostgreSQL user (default: admin)        admin
DB_HOST         No        Database host (default: localhost)      localhost
DB_PORT         No        Database port (default: 5432)           5432
DB_NAME         No        Database name (default: langgraph_db)   langgraph_db

See .env.example for full reference with comments.

Project Structure

langgraph-agent-system/
├── main.py                 # Entry point demonstrating usage
├── db.py                   # PostgreSQL checkpointer setup
├── agent/
│   ├── __init__.py        # Package exports
│   ├── config.py          # LLM configuration
│   ├── state.py           # State definitions (TypedDict)
│   ├── graph.py           # Graph construction and routing
│   ├── nodes.py           # Node implementations
│   ├── router.py          # Conditional routing logic
│   └── tools.py           # Tool definitions
├── requirements.txt       # Python dependencies
├── README.md             # This file
└── .env.example          # Environment variables template

Usage

Running the Example

python main.py

This executes an example query demonstrating:

  • Query classification
  • Plan generation
  • Tool execution
  • Reflection and refinement
  • Result aggregation

Example Output

The system outputs:

  • Message Trace: Full conversation history
  • Structured Memory: Step results from execution
  • Tool Results: Outputs from executed tools
  • Final Response: Agent's final answer

Design Highlights

1. Intelligent Routing

  • Queries are classified before processing
  • Simple queries bypass expensive planning
  • Complex queries get dedicated planning and reflection

2. Multi-Step Execution

  • Plans broken into tasks with indices
  • Context maintained across steps
  • Tool results accumulated in state
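A minimal sketch of that executor loop, using hypothetical field names (`plan`, `tool_results`, `current_step` are assumptions, not the repository's exact schema):

```python
def execute_plan(plan: list[str], tools: dict, state: dict) -> dict:
    """Run each planned step in order, accumulating tool results in state."""
    state.setdefault("tool_results", [])
    for idx, step in enumerate(plan):
        # Toy step format for illustration: "<tool_name> <arg> <arg>".
        tool_name, *args = step.split()
        result = tools[tool_name](*[float(a) for a in args])
        # Record each result with its step index so later steps (and the
        # reflection node) can reference earlier outputs.
        state["tool_results"].append({"step": idx, "task": step, "result": result})
        state["current_step"] = idx
    return state
```

The index stored alongside each result is what lets reflection reason about *which* step produced a weak output rather than judging the run as a whole.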

3. Reflection Loop

  • Self-reflection on execution quality
  • Learns from tool results
  • Prevents infinite loops with attempt counting
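The loop-prevention logic amounts to a small routing function. The cap and field names below are assumptions (the real values live in the repository's code):

```python
MAX_ATTEMPTS = 3  # assumed cap on reflect/replan cycles

def reflection_router(state: dict) -> str:
    """Decide whether to replan or finish after a reflection pass."""
    if state.get("reflection_verdict") == "satisfactory":
        return "finish"
    if state.get("attempts", 0) >= MAX_ATTEMPTS:
        return "finish"  # hard stop: prevents infinite reflect/replan loops
    return "replan"
```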

4. State Persistence

  • Conversation state (messages) - append-only
  • Execution state (plans, results) - reset per run
  • Thread-based session management
  • PostgreSQL checkpoint storage

Key Files

agent/state.py

Defines the TypedDict schemas for:

  • ConversationState: Manages chat history with append-only semantics
  • ExecutionState: Tracks execution progress, plans, and tool results
  • AgentState: Combined state for the graph
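The schemas might look roughly like this (field names are illustrative assumptions; the repository's actual definitions may differ). In LangGraph, annotating a field with a reducer such as `operator.add` is what gives it append-only semantics, while unannotated fields are overwritten on each update:

```python
from operator import add
from typing import Annotated, Any, TypedDict

class ConversationState(TypedDict):
    # `add` reducer: updates are appended to the list, never overwritten.
    messages: Annotated[list[Any], add]

class ExecutionState(TypedDict):
    plan: list[str]          # ordered tasks produced by the planner
    current_step: int        # index of the step being executed
    tool_results: list[Any]  # accumulated tool outputs
    attempts: int            # reflection attempt counter

class AgentState(ConversationState, ExecutionState):
    """Combined state passed through the graph."""
```

This split is what makes the per-run reset cheap: clearing `ExecutionState` fields between runs leaves the append-only `messages` history untouched.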

agent/graph.py

Constructs the LangGraph StateGraph with:

  • Node registrations
  • Entry point configuration
  • Conditional routing edges

agent/nodes.py

Implements all processing nodes:

  • Classifier, routers, planners, executors
  • Tool invocation logic
  • Reflection mechanisms

agent/router.py

Conditional routing functions:

  • Path selection based on query classification
  • Tool usage decisions
  • Reflection triggers
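Path selection reduces to a function that maps the classifier's label to the next node. Node names here are guesses for illustration; in graph.py such a function would typically be wired in with LangGraph's conditional-edge mechanism:

```python
def route_after_classifier(state: dict) -> str:
    """Map the classifier's label to the next node to run."""
    label = state.get("classification", "direct_llm")
    if label == "complex_planning":
        return "planner"      # multi-step planning with reflection
    if label == "simple_react":
        return "react_agent"  # ReAct pattern for straightforward tool use
    return "direct_llm"       # simple questions answered directly
```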

Development Notes

  • API Key: Currently hardcoded in config.py - move to environment variable for production
  • Database: PostgreSQL dependency for checkpointing - can be replaced with file-based storage for simple demos
  • LLM: Uses Gemini 2.5 Flash - can be swapped for other LangChain-compatible models
  • Tool Execution: Synchronous execution - can be extended to async

Future Enhancements

  • Async tool execution
  • Additional domain-specific tools
  • Web interface for testing
  • Metrics and performance tracking
  • Multi-agent collaboration
  • RAG integration for knowledge-based queries

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contact

For questions, issues, or contributions, please open an issue on the repository or contact the project maintainers.
