"The True is the whole." — G.W.F. Hegel
Hegelion applies dialectical reasoning to LLMs: it forces models to argue with themselves before reaching conclusions. This produces better reasoning for questions and better code for implementations.
Hegelion is prompt-driven by default: it generates prompts for your editor or LLM to execute, with no API keys required. Use it via MCP in editors like Claude Desktop/Cursor/VS Code, or via the Python API in your own agents.
| Mode | Pattern | Use Case |
|---|---|---|
| Dialectical Reasoning | Thesis → Antithesis → Synthesis | Deep analysis of questions, philosophy, strategy |
| Autocoding | Player → Coach → Iterate | Verified code implementations with independent review |
Both modes use the same principle: force the model to oppose itself before concluding. This catches blind spots that single-pass approaches miss.
New in v0.4.0 — Based on Block AI's g3 agent research.
Single-agent coding tools often:
- Declare success prematurely ("I have successfully implemented all requirements!")
- Accumulate context pollution over long sessions
- Miss edge cases because they verify their own work
Two roles iterate until requirements are verified:
```
         REQUIREMENTS (Source of Truth)
                      │
                      ▼
┌───────────────┐     ┌───────────────┐     ┌───────────────┐
│    PLAYER     │────▶│     COACH     │────▶│    ADVANCE    │
│  Implements   │     │   Verifies    │     │     State     │
│ code & tests  │     │ independently │     │               │
└───────────────┘     └───────────────┘     └───────┬───────┘
        ▲                                           │
        │             ┌───────────┐                 │
        └─────────────│ APPROVED? │◀────────────────┘
                      └───────────┘
                       │         │
                      No        Yes
                       │         │
                       ▼         ▼
                   Continue    Done
```
Player: Implements requirements, writes tests, responds to feedback. Does NOT declare success.
Coach: Independently verifies each requirement, ignores player's self-assessment, outputs structured checklist.
"Discard the player's self-report of success. Have the coach perform independent evaluation."
The coach catches issues by re-reading requirements and actually running tests—not by trusting what the player says it did.
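The loop can be sketched in plain Python (a hypothetical illustration; `Requirement`, `player`, and `coach` are invented names for this sketch, not Hegelion's API):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    verified: bool = False  # set only by the coach, never by the player

def player(requirements):
    """Implements code and tests; never judges its own success."""
    return {r.text: f"implementation for: {r.text}" for r in requirements}

def coach(requirements, work):
    """Independently checks each requirement against the actual work."""
    for r in requirements:
        r.verified = r.text in work  # stand-in for running real tests
    return all(r.verified for r in requirements)

requirements = [Requirement("auth endpoint"), Requirement("password validation test")]
for turn in range(10):  # iterate until the coach approves
    work = player(requirements)
    if coach(requirements, work):
        break  # COACH APPROVED
```

The key design point is that `verified` is written only inside `coach`: the player's output carries no self-assessment that could short-circuit the check.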
In Claude Code, Cursor, or any MCP-enabled editor:
Tip: if your editor exposes MCP tools as slash commands, use `/hegelion` (tool `hegelion`) as the branded autocoding entrypoint.
```
You: Use autocoding_init with these requirements:
     - Add user authentication to src/api.py
     - Add tests in tests/test_auth.py
     - All tests must pass
     [Session initializes]

You: Generate player_prompt and implement
     [Player writes code and tests]

You: Generate coach_prompt and verify
     [Coach: ✓ auth endpoint exists, ✗ missing password validation test]

You: Call autocoding_advance and continue
     [Loop until COACH APPROVED]
```
State passing note: `player_prompt` returns a player prompt plus an updated state advanced to `phase: "coach"` for the next call. For clarity, prompt outputs include `current_phase` (the prompt's phase) and `next_phase` (the returned state's phase). All structured outputs include `schema_version` for client stability.
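As an illustration of that contract (only `schema_version`, `current_phase`, `next_phase`, and the `phase` key are named above; the other fields and values here are hypothetical):

```python
# Hypothetical shape of a prompt-tool response. Only schema_version,
# current_phase, next_phase, and state.phase are documented; the rest
# is invented for illustration.
response = {
    "schema_version": "1.0",
    "current_phase": "player",   # phase this prompt belongs to
    "next_phase": "coach",       # phase of the returned state
    "prompt": "Implement the requirements below...",
    "state": {"phase": "coach", "turn": 1},
}

# A client passes response["state"] unchanged into the next call;
# the returned state's phase always matches next_phase.
assert response["state"]["phase"] == response["next_phase"]
```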
| Tool | Purpose |
|---|---|
| `hegelion` | Brand-first autocoding entrypoint (`mode`: `init`, `workflow`, `single_shot`) |
| `autocoding_init` | Start session with requirements checklist |
| `player_prompt` | Generate implementation prompt |
| `coach_prompt` | Generate verification prompt |
| `autocoding_advance` | Update state after coach review |
| `autocoding_single_shot` | Combined prompt for simpler tasks |
| `autocoding_save` / `autocoding_load` | Persist and resume sessions |
This repo includes a Codex skill at `skills/hegelion/SKILL.md`. Install it with your skill installer (for example, `install-skill-from-github.py --repo Hmbown/Hegelion --path skills/hegelion`). It mirrors the `/hegelion` command and uses MCP tools when available.
| Problem | Single Agent | Coach-Player |
|---|---|---|
| Anchoring | Drifts from requirements | Requirements anchor every turn |
| Verification | Self-assessment (unreliable) | Independent verification |
| Context | Accumulates pollution | Fresh context each turn |
| Completion | Open-ended | Explicit approval gates |
For questions requiring deep analysis, Hegelion forces three separate LLM calls:
[Call 1] Thesis → LLM commits to a position
[Call 2] Antithesis → LLM attacks that position (separate call, no hedging)
[Call 3] Synthesis → LLM reconciles the opposition
| Method | Calls | Result |
|---|---|---|
| Raw | 1 | "It depends on definitions..." |
| Enhanced | 1 | "Hold both views in tension..." |
| Hegelion | 3 | Novel framework with testable predictions |
When the model must commit to a thesis, then genuinely attack it in a separate call, the synthesis surfaces insights that single-call approaches shortcut.
Example: "Is free will compatible with determinism?"
Hegelion synthesis (after thesis and antithesis):
The deadlock dissolves when we recognize free will exists on a spectrum of self-authorship:
- Minimal freedom: Acting on desires without external coercion
- Reflective freedom: Second-order endorsement—I want to want this
- Narrative freedom: Acting consistently with a coherent life narrative
- Constitutive freedom: Recursive self-modification through deliberate habituation
Research proposal: Use fMRI to scan participants under (1) snap judgments, (2) brief reflection, (3) extended deliberation. Hypothesis: Condition (3) shows strongest correlation with self-reported decision "ownership."
This 4-level framework emerged from the model actually arguing with itself—not from asking for "thesis/antithesis/synthesis" in one prompt.
```bash
pip install hegelion

# MCP setup (auto-detects OS)
hegelion-setup-mcp --host claude-desktop

# MCP setup for Claude Desktop (macOS)
hegelion-setup-mcp --write "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
```

Or use the prompt-driven Python API (you run the prompt with your LLM of choice):
```python
from hegelion.core.prompt_dialectic import create_single_shot_dialectic_prompt

prompt = create_single_shot_dialectic_prompt(
    "Is AI conscious?",
    use_council=True,
    response_style="sections",
)
print(prompt)
```

Health check (lists tools + generates a sample prompt):
```bash
hegelion-server --self-test
```

| Option | Description |
|---|---|
| `use_council` | Three critics: Logician, Empiricist, Ethicist |
| `use_judge` | Final quality evaluation |
| `use_search` | Grounds arguments with web search |
| `response_style` | `sections`, `json`, or `synthesis_only` |
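Combining the options is sketched below (this assumes all four are keyword arguments to `create_single_shot_dialectic_prompt`, as `use_council` and `response_style` are in the earlier example; whether they can all be passed together is an assumption, so the call itself is left commented):

```python
# Hypothetical combination of all documented options.
options = {
    "use_council": True,    # Logician, Empiricist, Ethicist critics
    "use_judge": True,      # final quality evaluation
    "use_search": False,    # skip web grounding in this sketch
    "response_style": "json",
}

# prompt = create_single_shot_dialectic_prompt("Is AI conscious?", **options)
assert options["response_style"] in {"sections", "json", "synthesis_only"}
```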
```bash
pip install hegelion
```

For MCP integration (works with any MCP-enabled editor):
```bash
# Shortcuts
hegelion-setup-mcp --host claude-desktop
hegelion-setup-mcp --host cursor
hegelion-setup-mcp --host vscode
hegelion-setup-mcp --host windsurf

# Claude Desktop (macOS)
hegelion-setup-mcp --write "$HOME/Library/Application Support/Claude/claude_desktop_config.json"

# Claude Desktop (Windows)
hegelion-setup-mcp --write "%APPDATA%\\Claude\\claude_desktop_config.json"

# Claude Desktop (Linux)
hegelion-setup-mcp --write "$HOME/.config/Claude/claude_desktop_config.json"

# Then restart your MCP host
```

Claude Code (no MCP setup needed):
```bash
# Use the /hegelion command directly in any Hegelion repo clone
# Or add the MCP server for tool access
claude mcp add hegelion python -- -m hegelion.mcp.server
```

Manual config (any MCP host):
```json
{
  "mcpServers": {
    "hegelion": {
      "command": "python",
      "args": ["-m", "hegelion.mcp.server"]
    }
  }
}
```

If you run from source (not site-packages), set `PYTHONPATH` to the repo root. The `hegelion-setup-mcp` command writes this automatically.
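For a from-source setup, the manual config might look like this (the repo path is a placeholder you replace with your clone's location; `env` is a standard MCP server-config key):

```json
{
  "mcpServers": {
    "hegelion": {
      "command": "python",
      "args": ["-m", "hegelion.mcp.server"],
      "env": { "PYTHONPATH": "/path/to/Hegelion" }
    }
  }
}
```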
If your host expects a full command path, use `python -m hegelion.mcp.server` instead of `hegelion-server`.
Supported editors: Claude Desktop, Claude Code, Cursor, VS Code + GitHub Copilot, Windsurf, Google Antigravity, Gemini CLI
See MCP Integration Guide for setup instructions.
- MCP Integration — Setup for Claude Desktop, Cursor, VS Code + Copilot, Windsurf, Antigravity, Gemini CLI
- Python API — Prompt-driven API reference
- CLI Reference — MCP server and setup commands
- Configuration — Backends and feature toggles
- Technical Specification — Output schemas, phase specs
Issues and PRs welcome. For significant changes, open a discussion first.
- MCP refactor: Split tooling, handlers, and validation to make the server easier to extend and maintain
- Codex skill: Added `skills/hegelion/SKILL.md` for the `/hegelion` workflow
- Unified `/hegelion` command: Single entry point for dialectical and autocoding workflows
- Host shortcuts: `hegelion-setup-mcp --host claude-desktop|cursor|vscode|windsurf`
- Code quality: Formatted with black, ruff checks passing
- Schema versioning: All structured outputs include `schema_version` for client stability
- Phase clarity: `player_prompt` and `coach_prompt` now include `current_phase` and `next_phase` fields
- Improved error handling: Invalid phase transitions return clear errors with `expected`, `received`, and `hint` fields
- State validation: Malformed state inputs return structured error responses instead of exceptions
- Autocoding system: Player-coach dialectical loop based on Block AI's g3 agent
- MCP tools: `autocoding_init`, `player_prompt`, `coach_prompt`, `autocoding_advance`
- Session persistence with `autocoding_save` / `autocoding_load`
- Single-shot mode for simpler use cases
- CLI streaming with `--stream` flag
- MCP progress notifications
- 470+ tests passing
License: MIT