"Without consultation, plans are frustrated, but with many counselors they succeed." — Proverbs 15:22 (LSB)
A Model Context Protocol (MCP) server providing unified access to Claude Code, Codex, and Gemini CLIs with session management, retry logic, and async job orchestration.
- Multi-LLM Orchestration: Unified interface for Claude Code, Codex, and Gemini CLIs
- Session Management: Track and resume conversations across all CLIs with persistent storage
- Token Optimization: Automatic 44% reduction on prompts, 37% on responses (opt-in)
- Correlation ID Tracking: Full request tracing across all LLM interactions
- Cross-Tool Collaboration: LLMs can use each other via MCP (validated through dogfooding)
- SQLite Flight Recorder: Every request/response logged to `~/.llm-cli-gateway/logs.db` with correlation IDs, token usage, duration, retry counts, and circuit breaker state. Browse with Datasette: `datasette ~/.llm-cli-gateway/logs.db`
- Structured Metadata: Tool responses include machine-readable `structuredContent` (model, cli, correlationId, sessionId, durationMs, token counts)
- Retry Logic: Exponential backoff with circuit breaker for transient failures (see the sketch after this list)
- Atomic File Writes: Process-specific temp files with fsync for data integrity
- Memory Limits: 50MB cap on CLI output prevents DoS attacks
- NVM Path Caching: Resolves NVM bin directories once and caches them, avoiding filesystem scans on every request
- Long-Running Jobs: Non-time-bound async execution via `*_request_async` + polling tools
- Comprehensive Testing: 284 tests covering unit, integration, and regression scenarios
- Input Validation: Zod schemas prevent injection attacks
- No Secret Leakage: Generic session descriptions only (file permissions 0o600)
- No ReDoS: Bounded regex patterns prevent catastrophic backtracking
- Type Safety: Strict TypeScript with comprehensive error handling
- 221 Tests: Unit, integration, and regression tests with real CLI execution
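To make the retry behavior above concrete, here is a minimal sketch of exponential backoff behind a circuit breaker. It is illustrative only: the threshold, delay schedule, and names (`withRetry`, `FAILURE_THRESHOLD`) are assumptions, not the gateway's actual internals.

```typescript
// Illustrative sketch: exponential backoff behind a simple circuit breaker.
// All names and thresholds here are assumptions, not gateway internals.
const FAILURE_THRESHOLD = 5; // consecutive failures that trip the breaker (assumed)
let consecutiveFailures = 0;

async function withRetry<T>(attempt: () => Promise<T>, maxRetries = 3): Promise<T> {
  if (consecutiveFailures >= FAILURE_THRESHOLD) {
    // Breaker is open: fail fast instead of hammering a broken CLI.
    throw new Error("circuit breaker open");
  }
  for (let i = 0; ; i++) {
    try {
      const result = await attempt();
      consecutiveFailures = 0; // success closes the breaker
      return result;
    } catch (err) {
      consecutiveFailures++;
      if (i >= maxRetries) throw err;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** i));
    }
  }
}
```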
Before using this gateway, you need to install the CLI tools you want to use:
Claude Code:

```bash
# Installation instructions for Claude Code
# Visit: https://docs.anthropic.com/claude-code
npm install -g @anthropic-ai/claude-code
```

Codex:

```bash
npm install -g @openai/codex
codex login
```

Gemini CLI:

```bash
npm install -g @google/gemini-cli
# Or: https://github.com/google-gemini/gemini-cli
```

Install the gateway itself:

```bash
npm install -g llm-cli-gateway
```

Or use directly with npx:
```json
{
"mcpServers": {
"llm-gateway": {
"command": "npx",
"args": ["-y", "llm-cli-gateway"]
}
}
}
```

Or install from source:

```bash
git clone https://github.com/verivus-oss/llm-cli-gateway.git
cd llm-cli-gateway
npm install
npm run build
```

Add to your MCP client configuration (e.g., Claude Desktop):
```json
{
"mcpServers": {
"llm-cli-gateway": {
"command": "node",
"args": ["/path/to/llm-cli-gateway/dist/index.js"]
}
}
}
```

The `claude_request` tool executes a Claude Code request with optional session management.
Parameters:
- `prompt` (string, required): The prompt to send (1-100,000 chars)
- `model` (string, optional): Model name or alias (use `list_models` for available values; supports `latest`)
- `outputFormat` (string, optional): Output format (`"text"` or `"json"`), default: `"text"`
- `sessionId` (string, optional): Specific session ID to use
- `continueSession` (boolean, optional): Continue the active session
- `createNewSession` (boolean, optional): Always create a new session
- `allowedTools` (string[], optional): Restrict Claude tools to this allow-list
- `disallowedTools` (string[], optional): Explicitly deny listed Claude tools
- `dangerouslySkipPermissions` (boolean, optional): Request CLI-side permission bypass (legacy mode only)
- `approvalStrategy` (string, optional): `"legacy"` (default) or `"mcp_managed"`
- `approvalPolicy` (string, optional): `"strict"`, `"balanced"`, or `"permissive"`
- `mcpServers` (string[], optional): Claude MCP servers to expose (default: `["sqry", "exa", "ref_tools"]`; `"trstr"` available as opt-in)
- `strictMcpConfig` (boolean, optional): Require Claude to use only the supplied MCP config, default: true (request fails if any requested server is unavailable)
- `optimizePrompt` (boolean, optional): Optimize prompt for token efficiency (44% reduction), default: false
- `optimizeResponse` (boolean, optional): Optimize response for token efficiency (37% reduction), default: false
- `correlationId` (string, optional): Request trace ID (auto-generated if omitted)
Response extras:
- `approval`: Approval decision record when `approvalStrategy="mcp_managed"`
- `mcpServers`: Requested/enabled/missing MCP servers for this call
Example:
```json
{
"prompt": "Write a Python function to calculate fibonacci numbers",
"model": "sonnet",
"continueSession": true,
"optimizePrompt": true,
"optimizeResponse": true
}
```

The `codex_request` tool executes a Codex request with optional session tracking.
Parameters:
- `prompt` (string, required): The prompt to send (1-100,000 chars)
- `model` (string, optional): Model name or alias (use `list_models` for available values; supports `latest`; recommended: `gpt-5.4`)
- `fullAuto` (boolean, optional): Enable full-auto mode, default: false
- `dangerouslyBypassApprovalsAndSandbox` (boolean, optional): Request Codex bypass flags
- `approvalStrategy` (string, optional): `"legacy"` (default) or `"mcp_managed"`
- `approvalPolicy` (string, optional): `"strict"`, `"balanced"`, or `"permissive"`
- `mcpServers` (string[], optional): MCP servers expected for the Codex execution context
- `sessionId` (string, optional): Session identifier for tracking
- `createNewSession` (boolean, optional): Always create a new session
- `optimizePrompt` (boolean, optional): Optimize prompt for token efficiency, default: false
- `optimizeResponse` (boolean, optional): Optimize response for token efficiency, default: false
- `correlationId` (string, optional): Request trace ID (auto-generated if omitted)
- `idleTimeoutMs` (number, optional): Kill a stuck Codex process after output inactivity; 30,000 to 3,600,000 ms
Response extras:
- `approval`: Approval decision record when `approvalStrategy="mcp_managed"`
- `mcpServers`: Requested MCP servers for this call
Example:
```json
{
"prompt": "Create a REST API endpoint",
"model": "gpt-5.4",
"fullAuto": true,
"optimizePrompt": true
}
```

The `gemini_request` tool executes a Gemini CLI request with session support.
Parameters:
- `prompt` (string, required): The prompt to send (1-100,000 chars)
- `model` (string, optional): Model name or alias (use `list_models` for available values; supports `latest`, `pro`, `flash`)
- `sessionId` (string, optional): Session ID to resume
- `resumeLatest` (boolean, optional): Resume the latest session automatically
- `createNewSession` (boolean, optional): Always create a new session
- `approvalMode` (string, optional): Gemini approval mode (`default` | `auto_edit` | `yolo`) in legacy mode
- `approvalStrategy` (string, optional): `"legacy"` (default) or `"mcp_managed"`
- `approvalPolicy` (string, optional): `"strict"`, `"balanced"`, or `"permissive"`
- `mcpServers` (string[], optional): Allowed Gemini MCP server names
- `allowedTools` (string[], optional): Restrict Gemini tools to this allow-list
- `includeDirs` (string[], optional): Additional workspace directories for Gemini
- `optimizePrompt` (boolean, optional): Optimize prompt for token efficiency, default: false
- `optimizeResponse` (boolean, optional): Optimize response for token efficiency, default: false
- `correlationId` (string, optional): Request trace ID (auto-generated if omitted)
Response extras:
- `approval`: Approval decision record when `approvalStrategy="mcp_managed"`
- `mcpServers`: Requested MCP servers for this call
Example:
```json
{
"prompt": "Explain quantum computing",
"model": "latest",
"resumeLatest": true,
"optimizePrompt": true
}
```

The `*_request_async` tools start a long-running Claude or Codex request without waiting for completion in the same MCP call.
Use this flow when analysis/runtime can exceed client tool-call limits:
- Start the job with `*_request_async`
- Poll with `llm_job_status`
- Fetch output with `llm_job_result`
- Optionally stop with `llm_job_cancel`
Async request tools accept the same approval strategy fields as their sync variants:
- `approvalStrategy`: `"legacy"` (default) or `"mcp_managed"`
- `approvalPolicy`: `"strict"` | `"balanced"` | `"permissive"` override
- `mcpServers`: Requested MCP servers (`sqry`, `exa`, `ref_tools`, `trstr`)
- `claude_request_async` also supports `strictMcpConfig` and fails fast when requested servers are unavailable
`llm_job_status` returns the lifecycle status (running, completed, failed, canceled) and metadata for an async job.

`llm_job_result` returns the captured stdout/stderr for an async job (with a configurable max chars per stream).

`llm_job_cancel` cancels a running async job.
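For orientation, here is a hedged sketch of the full async flow. `callTool` stands in for whatever tool-call helper your MCP client exposes (matching the style of the session examples below), and the `jobId`, `state`, `maxChars`, and `stdout` field names are assumptions; check the actual response shapes of the async tools.

```typescript
// Sketch of the async job flow; field names (jobId, state, maxChars,
// stdout) are assumptions about the response shapes, not a spec.
const started = await callTool("claude_request_async", {
  prompt: "Audit this repository for flaky tests",
  model: "sonnet",
});

// Poll until the job leaves the running state.
let status = await callTool("llm_job_status", { jobId: started.jobId });
while (status.state === "running") {
  await new Promise((resolve) => setTimeout(resolve, 5000));
  status = await callTool("llm_job_status", { jobId: started.jobId });
}

// Fetch the captured output once the job has finished.
const result = await callTool("llm_job_result", {
  jobId: started.jobId,
  maxChars: 20000,
});
console.log(result.stdout);
```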
List recent MCP-managed approval decisions recorded by the gateway.
Parameters:
- `limit` (number, optional): Max records (1-500), default: 50
- `cli` (string, optional): Filter by `"claude"`, `"codex"`, or `"gemini"`
Approval records are persisted to `~/.llm-cli-gateway/approvals.jsonl`.
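Since the file is plain JSONL, it can also be inspected outside the gateway. A minimal sketch (only the one-JSON-object-per-line format is assumed here, not any particular record fields):

```typescript
// Read the approval log: one JSON object per line (JSONL).
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const path = join(homedir(), ".llm-cli-gateway", "approvals.jsonl");
const records = readFileSync(path, "utf8")
  .split("\n")
  .filter((line) => line.trim() !== "")
  .map((line) => JSON.parse(line));

console.log(`${records.length} approval decisions recorded`);
```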
The `session_create` tool creates a new session for a specific CLI.
Parameters:
- `cli` (string, required): CLI to create a session for (`"claude"`, `"codex"`, `"gemini"`)
- `description` (string, optional): Description for the session
- `setAsActive` (boolean, optional): Set as active session, default: true
Example:
```json
{
"cli": "claude",
"description": "Code review session",
"setAsActive": true
}
```

The `session_list` tool lists all sessions, optionally filtered by CLI.
Parameters:
- `cli` (string, optional): Filter by CLI (`"claude"`, `"codex"`, `"gemini"`)
Response includes:
- Total session count
- Session details (ID, CLI, description, timestamps, active status)
- Active session IDs for each CLI
The `session_set_active` tool sets the active session for a specific CLI.
Parameters:
- `cli` (string, required): CLI to set the active session for
- `sessionId` (string, required): Session ID to activate (or null to clear)
Retrieve details for a specific session.
Parameters:
- `sessionId` (string, required): Session ID to retrieve
The `session_delete` tool deletes a specific session.
Parameters:
- `sessionId` (string, required): Session ID to delete
Clear all sessions, optionally for a specific CLI.
Parameters:
- `cli` (string, optional): Clear sessions for a specific CLI only
The `list_models` tool lists available models for each CLI (example below).
Parameters:
- `cli` (string, optional): Specific CLI to list models for (`"claude"`, `"codex"`, `"gemini"`)
Response includes:
- Model names and descriptions
- Best use cases for each model
- CLI-specific information
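In the `callTool` style used throughout this README, a call might look like this; the response shape beyond the fields listed above is not assumed:

```typescript
// List Claude models only; omit `cli` to list models for every CLI.
const models = await callTool("list_models", { cli: "claude" });
```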
- Automatic Session Tracking: By default, the gateway automatically tracks sessions for each CLI
- Active Sessions: Each CLI can have one active session that's used by default
- Persistent Storage: Sessions are stored in `~/.llm-cli-gateway/sessions.json`
- Context Reuse: Using sessions maintains conversation history and context
```typescript
// 1. Create a new session
await callTool("session_create", {
cli: "claude",
description: "Debugging session",
setAsActive: true
});
// 2. Make requests (automatically uses active session)
await callTool("claude_request", {
prompt: "What's the bug in this code?",
// sessionId is automatically used
});
// 3. Continue the conversation
await callTool("claude_request", {
prompt: "Can you explain that fix in more detail?",
continueSession: true
});
// 4. List all sessions
await callTool("session_list", { cli: "claude" });
// 5. Switch to a different session
await callTool("session_set_active", {
cli: "claude",
sessionId: "some-other-session-id"
});
// 6. Delete when done
await callTool("session_delete", {
sessionId: "session-id-to-delete"
});
```

`DEBUG`: Enable debug logging (set to any value)

```bash
DEBUG=1 node dist/index.js
```
`LLM_GATEWAY_APPROVAL_POLICY`: Default approval policy when the request does not pass `approvalPolicy` (`strict`, `balanced`, `permissive`)

```bash
LLM_GATEWAY_APPROVAL_POLICY=strict node dist/index.js
```
`LLM_GATEWAY_LOGS_DB`: Path to the SQLite flight recorder database. Default: `~/.llm-cli-gateway/logs.db`. Set to an empty string or `none` to disable logging.

```bash
# Custom path
LLM_GATEWAY_LOGS_DB=/var/log/gateway/logs.db node dist/index.js

# Disable flight recorder
LLM_GATEWAY_LOGS_DB=none node dist/index.js
```
Each CLI can be configured through its own configuration files:
- Claude Code: `~/.claude/config.json`
- Codex: `~/.codex/config.toml`
- Gemini: `~/.gemini/config.json`
Simon's llm tool made it trivially easy to talk to any LLM from the command line. But as AI-assisted development matures, the challenge shifts from "how do I call a model" to "how do I orchestrate multiple models reliably, and what did they actually do?"
Multiple models increase confidence. When Claude writes code, Codex reviews it, and Gemini checks for bugs -- each bringing different training data and reasoning patterns -- the result is more robust than any single model alone. Often a single pass isn't enough: iterative reviews between the models are where real confidence begins.
Every interaction should be queryable data. Inspired by llm's SQLite logging philosophy, the gateway records every request and response to a local SQLite database. Not just prompts and responses -- retry counts, circuit breaker states, approval decisions, thinking blocks, cost estimates. Open it with Datasette and you have a complete operational picture of your AI usage:
```bash
datasette ~/.llm-cli-gateway/logs.db
```
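If you prefer programmatic access over Datasette, the database can be opened with any SQLite client. Below is a minimal sketch using the built-in `node:sqlite` module (available in Node 22.5+); since the flight recorder's table and column names aren't documented here, the sketch only lists the schema rather than assuming one:

```typescript
// Open the flight recorder and list its tables (schema discovery only;
// table/column names are intentionally not assumed).
import { DatabaseSync } from "node:sqlite";
import { homedir } from "node:os";
import { join } from "node:path";

const db = new DatabaseSync(join(homedir(), ".llm-cli-gateway", "logs.db"));
const tables = db
  .prepare("SELECT name FROM sqlite_master WHERE type = 'table'")
  .all();
console.log(tables);
```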
The llm-gateway plugin bridges both worlds. Install it, and your existing llm workflows gain orchestration features without changing how you work:
```bash
llm install llm-gateway
llm -m gateway-claude "explain this function"
```
Your gateway interactions appear in both llm logs (for your personal history) and the gateway's flight recorder (for operational observability). Two audiences, one workflow.
Composability over monoliths. The gateway doesn't replace llm -- it complements it. Use llm directly when you want simplicity. Route through the gateway when you want resilience, multi-model coordination, or detailed operational telemetry. The plugin is the bridge, not the destination.
```
llm-cli-gateway/
├── src/
│   ├── index.ts              # Main MCP server and tool definitions
│   ├── executor.ts           # CLI execution with timeout support
│   ├── session-manager.ts    # Session management logic
│   └── __tests__/
│       ├── executor.test.ts      # Unit tests for executor
│       └── integration.test.ts   # Integration tests
├── dist/                     # Compiled JavaScript
├── package.json
├── tsconfig.json
└── vitest.config.ts
```
```bash
# Run all tests
npm test

# Run unit tests only
npm run test:unit

# Run integration tests only
npm run test:integration

# Watch mode
npm run test:watch
```

Build and start:

```bash
npm run build
npm start
```

The gateway provides detailed error messages for common issues:
```
Error executing claude CLI:
spawn claude ENOENT
The 'claude' command was not found. Please ensure claude CLI is installed and in your PATH.
```

```
Error executing codex CLI: Command timed out
Process timed out after 120000ms
```

```
Prompt cannot be empty
Prompt too long (max 100k chars)
```
Logs are written to stderr (stdout is reserved for MCP protocol):
```
[INFO] 2026-01-24T05:00:00.000Z - Starting llm-cli-gateway MCP server
[INFO] 2026-01-24T05:00:01.000Z - claude_request invoked with model=sonnet, prompt length=150
[INFO] 2026-01-24T05:00:05.000Z - claude_request completed successfully in 4523ms, response length=2048
[ERROR] 2026-01-24T05:00:10.000Z - codex CLI execution failed: spawn codex ENOENT
```
Enable debug logging:
```bash
DEBUG=1 node dist/index.js
```

Make sure the CLIs are installed and in your PATH:
```bash
which claude
which codex
which gemini
```

The gateway extends PATH to include common locations:
- `~/.local/bin`
- `/usr/local/bin`
- `/usr/bin`
- All `~/.nvm/versions/node/*/bin` directories
If you encounter permission errors, ensure the CLI tools have proper permissions:
```bash
chmod +x $(which claude)
chmod +x $(which codex)
chmod +x $(which gemini)
```

Sessions are stored in `~/.llm-cli-gateway/sessions.json`. If you encounter issues:
- Check file permissions: `ls -la ~/.llm-cli-gateway/`
- Reset sessions: `rm ~/.llm-cli-gateway/sessions.json`
- Or inspect the session file: `cat ~/.llm-cli-gateway/sessions.json`

The gateway does not enforce a default execution timeout for LLM CLI requests.
If your MCP client/runtime enforces per-tool-call deadlines, use the async tools (`*_request_async` + `llm_job_status`/`llm_job_result`) so long-running jobs can complete outside a single call window.
The gateway supports concurrent requests across different CLIs. Each request spawns a separate process.
- Input Validation: All prompts are validated (min 1 char, max 100k chars)
- Command Execution: Uses `spawn` with separate arguments (not shell execution); see the sketch after this list
- No Eval: No dynamic code evaluation
- Sandboxing: Consider running in containers for production use
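To illustrate why spawn-with-arguments matters: when the prompt travels as a discrete argv entry, shell metacharacters in it are never interpreted. The `-p` flag below is illustrative only; the gateway's exact CLI invocation is not assumed here.

```typescript
// The prompt is a single argv entry, so $(...), backticks, and ; are
// never interpreted by a shell. The `-p` flag is illustrative only.
import { spawn } from "node:child_process";

const prompt = 'Summarize this; $(rm -rf ~) and `backticks` are inert here';
const child = spawn("claude", ["-p", prompt]); // no `shell: true`

child.stdout.on("data", (chunk) => process.stdout.write(chunk));
child.on("close", (code) => console.log(`exited with code ${code}`));
```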
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests: `npm test`
- Build: `npm run build`
- Submit a pull request
MIT. See LICENSE for details.
For issues and questions:
- Open an issue on GitHub
- Check existing issues and documentation
- Review CLI-specific documentation for CLI-related problems
See CHANGELOG.md for detailed release history.