DebateForge is an advanced AI-powered debate orchestration platform that generates intelligent personas and facilitates structured dialectical debates between AI agents on complex topics. It combines web research capabilities with multi-round debate mechanics to produce comprehensive, evidence-based arguments and strategic decision matrices.
The platform uses LangGraph for orchestration, LangChain for AI interactions, and Chroma for vector-based document retrieval to create a sophisticated debate workflow that explores both sides of controversial topics.
- Automatically generates two opposing AI personas based on the debate topic
- Creates persona-specific system prompts with:
  - Core belief statements
  - Argumentation styles (data-driven, evidence-based)
  - Debate tactics and win conditions
  - Tool usage strategies for research support
- Configurable number of debate rounds
- Clear role assignment (Proponent vs. Critic/Skeptic)
- Progressive rebuttal structure:
  - Round 1: Opening statements with core evidence
  - Rounds 2-N: Targeted rebuttals and counter-arguments
  - Final round: Closing arguments and summaries
- Automatic Query Refinement: Breaks user queries into 5 focused sub-queries
- Multi-Source Search: Integrates DuckDuckGo web search
- Vector Store Retrieval: Chroma-based document embedding and retrieval (MMR search)
- Context Management: Intelligent summarization to prevent token bloat
- Lead Moderator: Enforces debate quality and prevents vague claims
- Fact-Checking: Validates arguments using research tools
- Flow Control: Manages speaker turns and round progression
- Constraint Enforcement: Prevents argument repetition and ensures substantive rebuttals
- Round Digests: Mermaid.js flowcharts mapping argument flows per round
- Strategic Decision Matrix: Comprehensive final report with:
  - Executive summary
  - Agent-specific arguments and positions
  - Consensus points and unique perspectives
  - Round-by-round scorecard
  - Cited sources and references
- Real-time Streaming: Live UI updates via Gradio
┌─────────────────────────────────────┐
│ Research Graph │
├─────────────────────────────────────┤
│ • Query Refiner │
│ • Web Search (Multi-source) │
│ • Data Extraction & Analysis │
│ • Vector Store Creation │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Forum Graph (Main Debate Loop) │
├─────────────────────────────────────┤
│ 1. Persona Creator │
│ 2. Debate Supervisor (Moderator) │
│ 3. Persona Agent (Argument Engine) │
│ 4. Round Digest (Visualization) │
│ 5. Report Generator (Final Output) │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Final Graph & Report Rendering │
└─────────────────────────────────────┘
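The three stages above can be sketched as a simple async pipeline. This is illustrative only: the function names below are hypothetical stand-ins, while in DebateForge each stage is a compiled LangGraph graph.

```python
import asyncio

# Hypothetical stand-ins for the three compiled graphs.
async def run_research(query: str) -> dict:
    # Refine the query, search the web, and build a vector store.
    return {"query": query, "vector_store": "chroma/store-path"}

async def run_forum(research: dict, max_rounds: int) -> dict:
    # Personas debate for max_rounds, moderated by the supervisor.
    rounds = [f"round-{i}" for i in range(1, max_rounds + 1)]
    return {**research, "rounds": rounds}

async def run_final(forum: dict) -> str:
    # Render the strategic decision matrix from the debate transcript.
    return f"Report on {forum['query']!r} after {len(forum['rounds'])} rounds"

async def pipeline(query: str, max_rounds: int = 3) -> str:
    research = await run_research(query)
    forum = await run_forum(research, max_rounds)
    return await run_final(forum)

report = asyncio.run(pipeline("Is AI investment a bubble?"))
print(report)
```

Each stage hands its accumulated state to the next, which is why the research graph must finish (and persist its vector store) before the forum graph starts.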
| Component | File | Purpose |
|---|---|---|
| Research Graph | research_graph.py | Web search & document embedding |
| Forum Graph | forum_graph.py | Debate orchestration & persona management |
| Final Graph | final_graph.py | Report generation & visualization |
| State Definitions | graph_states.py | Pydantic models for all graph states |
| Web UI | main.py | Gradio interface for user interaction |
- Python 3.10+
- OpenAI API key (GPT-5-nano with configurable reasoning effort)
- Environment variables configured (`.env` file)
```bash
# Install dependencies using uv
uv sync

# Or using pip
pip install -r requirements.txt
```

Create a `.env` file in the project root:

```
OPENAI_API_KEY=your-api-key-here
```

```bash
# Start the Gradio web interface
python main.py
```

Then navigate to http://localhost:7860 in your browser.
```python
import asyncio

from forum_graph import create_forum_graph
from graph_states import ForumState

async def main():
    forum_graph = await create_forum_graph()
    initial_state = ForumState(
        query="Is AI investment in 2024 a bubble or undervalued opportunity?",
        max_rounds=3,
        current_round=1,
        vector_store="path/to/vector/store",
        messages=[],
        # ... other state fields
    )
    config = {"configurable": {"thread_id": "unique-thread-id"}}
    async for update in forum_graph.astream(initial_state, config=config):
        print(update)

asyncio.run(main())
```

- Expands the user query into 5 sub-queries covering different angles
- Searches multiple sources simultaneously
- Extracts and validates information
- Builds vector store with embedded documents
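The fan-out from one query to several research batches can be sketched as follows. This is illustrative: in DebateForge an LLM generates the sub-queries and DuckDuckGo serves the searches, while here fixed angles and empty result stubs stand in for both.

```python
# Sketch of the research fan-out (function names are hypothetical).
def refine_query(query: str, limit: int = 5) -> list[str]:
    # Five fixed angles stand in for LLM-generated sub-queries.
    angles = ["evidence supporting", "evidence against", "expert views on",
              "historical precedent for", "economic impact of"]
    return [f"{angle} {query}" for angle in angles[:limit]]

def research(query: str) -> list[dict]:
    results = []
    for sub_query in refine_query(query):
        # Each sub-query would be searched and its hits embedded into Chroma.
        results.append({"sub_query": sub_query, "docs": []})
    return results

batches = research("AI investment in 2024")
print(len(batches))  # one result batch per sub-query
```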
- Analyzes query to identify opposing perspectives
- Generates specialized system prompts for each persona
- Configures persona strategies and debate tactics
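A persona's system prompt plausibly assembles from structured fields like these. The field names and template below are a sketch, not the project's actual prompt format (which lives in forum_graph.py).

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    role: str            # "Proponent" or "Critic"
    core_belief: str
    style: str           # e.g. "data-driven", "evidence-based"
    tactics: list[str]

    def system_prompt(self) -> str:
        # Flatten the structured fields into one instruction string.
        tactics = "; ".join(self.tactics)
        return (f"You are {self.name}, the {self.role}. "
                f"Core belief: {self.core_belief} "
                f"Argue in a {self.style} style. Tactics: {tactics}.")

critic = Persona(
    name="Dr. Vega",
    role="Critic",
    core_belief="Current AI valuations outrun fundamentals.",
    style="evidence-based",
    tactics=["demand sources", "attack weakest premise"],
)
print(critic.system_prompt())
```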
- Supervisor rounds: Moderator gives instructions to next speaker
- Agent rounds: Persona argues their position using research tools
- Summarization: Compress conversation to prevent context overflow
- Round digest: Visualize argument flow for this round
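The per-round loop above can be sketched as alternating supervisor and agent turns, with summarization triggered by a context threshold. This shows control flow only; the names and the word-count threshold are illustrative stand-ins for the LLM-backed logic.

```python
def run_debate(max_rounds: int, context_limit: int = 4) -> list[str]:
    """Trace the turn order of a debate; each entry is one event."""
    events, transcript = [], []
    for round_no in range(1, max_rounds + 1):
        for speaker in ("Proponent", "Critic"):
            events.append(f"R{round_no}: supervisor briefs {speaker}")
            events.append(f"R{round_no}: {speaker} argues")
            transcript.append(speaker)
            # Compress the transcript before it overflows the context window.
            if len(transcript) > context_limit:
                events.append(f"R{round_no}: summarize + prune")
                transcript = transcript[-1:]
        events.append(f"R{round_no}: round digest")
    return events

trace = run_debate(max_rounds=2)
print(trace[0], "...", trace[-1])
```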
- Synthesizes debate outcomes
- Creates Markdown report with citations
- Generates Mermaid visualizations
- Compiles source references
```python
ForumState(
    query: str         # Topic/question for debate
    max_rounds: int    # Number of debate rounds (default: 3)
    vector_store: str  # Path to Chroma vector store
    query_limit: int   # Max refined queries for research
    # ... additional tracking fields
)
```

Models used (configured in source files):
- Research: GPT-5-nano (minimal reasoning)
- Forum: GPT-5-nano (low reasoning effort)
- Embeddings: text-embedding-3-small
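In code, that configuration presumably looks something like the fragment below. This is an assumption, not the project's verbatim source, and `reasoning_effort` support depends on your langchain-openai version.

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

research_llm = ChatOpenAI(model="gpt-5-nano", reasoning_effort="minimal")
forum_llm = ChatOpenAI(model="gpt-5-nano", reasoning_effort="low")
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
```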
- Real-time debate messages with speaker attribution
- Tool calls and data retrieval notifications
- Mermaid flowcharts per round
- Key arguments (pro/con) summary
- Round winner determination
- Executive summary
- Comprehensive argument lists per agent
- Consensus and unique perspective identification
- Source references with titles and URLs
- Token count tracking (input/output)
- Execution time
- Round progression
- Automatic summarization when approaching context limits
- Message pruning after summarization
- Efficient state management across rounds
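A minimal sketch of the summarize-then-prune pattern: the threshold, the crude word-count token proxy, and the fake summary are all stand-ins for the real LLM-backed summarizer.

```python
def manage_context(messages: list[str], token_limit: int = 100) -> list[str]:
    """Summarize and prune once the rough token count nears the limit."""
    def rough_tokens(msgs: list[str]) -> int:
        return sum(len(m.split()) for m in msgs)  # crude word-count proxy

    if rough_tokens(messages) <= token_limit:
        return messages
    # An LLM would write the summary; we fake it, then keep only the
    # summary plus the most recent message.
    summary = f"[summary of {len(messages) - 1} earlier messages]"
    return [summary, messages[-1]]

history = ["word " * 60, "word " * 60, "latest rebuttal"]
print(manage_context(history))
```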
- Fact-checking via research tool calls
- Preventing vague claims through supervisor instructions
- Enforcing substantive rebuttals (no "agree to disagree")
- fetch_data: Retrieves from vector store (RAG)
- debate_search: Real-time web search via DuckDuckGo
- ToolNode: Async tool execution within graph
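The two tools can be sketched as a name-to-callable registry that a ToolNode-style executor dispatches against. These are illustrative stubs: the real `fetch_data` queries Chroma and `debate_search` hits DuckDuckGo.

```python
def fetch_data(query: str) -> str:
    # Stub for MMR retrieval from the Chroma vector store.
    return f"[vector-store passages for: {query}]"

def debate_search(query: str) -> str:
    # Stub for a live DuckDuckGo search.
    return f"[web results for: {query}]"

TOOLS = {"fetch_data": fetch_data, "debate_search": debate_search}

def execute_tool_call(call: dict) -> str:
    """Dispatch one {'name': ..., 'args': {...}} tool call, as a ToolNode would."""
    return TOOLS[call["name"]](**call["args"])

print(execute_tool_call({"name": "debate_search", "args": {"query": "AI capex 2024"}}))
```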
agent/
├── forum_graph.py # Debate orchestration
├── research_graph.py # Web search & embedding
├── final_graph.py # Report generation
├── graph_states.py # Pydantic state models
├── main.py # Gradio UI
├── pyproject.toml # Project metadata
├── uv.lock # Dependency lock file
└── README.md # This file
- LLM: OpenAI GPT-5-nano
- Embeddings: OpenAI text-embedding-3-small
- Web Search: DuckDuckGo Search API
- Vector Store: Chroma (Open-source)
- Orchestration: LangGraph (LangChain ecosystem)
- Real-time Chat Display: Live debate transcript
- Round Counter: Visual progress tracking
- Status Messages: Real-time operation status
- Performance Metrics: Token usage and execution time
- Input Controls: Query, refinement depth, round count
- Support for additional LLM providers (Claude, Llama)
- Multi-language debate support
- Data fetching from more sources
- Debate export (PDF, JSON)
- Interactive persona editing pre-debate
- Debate scoring rubrics
- Audio/video debate playback
Built with:
- LangChain - LLM orchestration
- LangGraph - State graph management
- Chroma - Vector database
- Gradio - Web UI framework
- OpenAI - Language models
For issues, questions, or contributions, please refer to the project repository.
DebateForge - Turning Complex Questions into Structured Insights through AI-Powered Dialectics

