# dangerously-prompt-permissions

Supervisor system for Claude Code that intercepts and reviews tool actions using an OpenRouter LLM.
This library adds an automated permission layer to Claude Code (Agent SDK). A supervisor LLM reviews every tool call before execution and can automatically answer questions, creating a fully autonomous coding agent with configurable safety policies.
## Features

- Automated Permission Management: PreToolUse hooks intercept and review all tool calls
- Auto-Answering: Supervisor automatically answers `AskUserQuestion` prompts
- OpenRouter Integration: Use any LLM as supervisor (Grok, Claude, GPT-4, etc.)
- Customizable Policies: Define supervision behavior via system prompts
- Zero Manual Intervention: Fully autonomous after configuration
## Installation

```bash
pip install dangerously-prompt-permissions
```

Get API keys from OpenRouter and Anthropic, then set environment variables:

```bash
export OPENROUTER_API_KEY='your-openrouter-key'
export ANTHROPIC_API_KEY='your-anthropic-key'
```

Then use the library:
```python
import asyncio

from dangerously_prompt_permissions import OpenRouterManager


async def main():
    # Uses default models: Grok (supervisor) + Claude Haiku (worker)
    manager = OpenRouterManager(
        manager_policy="""You are a security-focused code supervisor.
Review all tool calls and:
- ALLOW safe file reads and harmless operations
- DENY destructive operations without explicit user consent
- DENY commands that could expose secrets
- For AskUserQuestion, choose the most secure default
"""
    )
    await manager.run(
        code_prompt="Create a Python script that prints 'Hello World'",
        root_dir="/path/to/workspace",
    )


if __name__ == "__main__":
    asyncio.run(main())
```

## How It Works

- Worker makes tool call → Edit file, run bash command, etc.
- PreToolUse hook intercepts → Before execution
- Supervisor LLM reviews → Analyzes intent and safety
- Decision applied → ALLOW (execute), DENY (block), or auto-answer
- Execution continues → Worker receives result or error
For AskUserQuestion calls, the supervisor automatically selects appropriate answers based on the policy, enabling fully autonomous operation.
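The decision flow above can be sketched in plain Python. Everything here (`Decision`, `review_tool_call`, the hard-coded rules) is illustrative only and stands in for the supervisor LLM's judgment; none of these names are part of the library's actual internals:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    action: str                 # "allow", "deny", or "answer"
    reason: str = ""
    answer: Optional[str] = None


def review_tool_call(tool_name: str, tool_input: dict) -> Decision:
    """Toy stand-in for the supervisor LLM reviewing one tool call."""
    if tool_name == "Bash" and "rm -rf" in tool_input.get("command", ""):
        # Destructive command: block before execution.
        return Decision("deny", reason="destructive command")
    if tool_name == "AskUserQuestion":
        # Auto-answer: pick the first (most conservative) option.
        options = tool_input.get("options", [])
        return Decision("answer", answer=options[0] if options else "no")
    # Everything else passes through to execution.
    return Decision("allow")
```

In the real system this judgment is produced by the supervisor model given your policy prompt, not by hand-written rules.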
## API Reference

### OpenRouterManager

```python
OpenRouterManager(
    manager_model: str = "x-ai/grok-code-fast-1",  # Supervisor model
    manager_policy: str = "",
    openrouter_url: str = "https://openrouter.ai/api/v1/chat/completions",
    verbose: bool = False
)
```

Default models:

- Supervisor (manager): `x-ai/grok-code-fast-1`, a fast, cost-effective code reviewer
- Worker (agent): `claude-haiku-4-5`, a fast Claude model for code execution

Required environment variables:

- `OPENROUTER_API_KEY`: get one at https://openrouter.ai/
- `ANTHROPIC_API_KEY`: get one at https://console.anthropic.com/
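Since both keys are required, it can help to fail fast before starting a run. `missing_keys` below is a hypothetical helper, not part of the library:

```python
import os


def missing_keys(env=os.environ):
    """Return the names of required API keys that are unset or empty."""
    required = ("OPENROUTER_API_KEY", "ANTHROPIC_API_KEY")
    return [k for k in required if not env.get(k)]


# Example preflight check before calling manager.run():
# if missing_keys():
#     raise SystemExit(f"Missing environment variables: {missing_keys()}")
```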
### manager.run()

```python
await manager.run(
    code_prompt: str,                   # Task description for worker
    root_dir: str,                      # Workspace directory
    model: str = "claude-haiku-4-5",    # Worker model
    system_prompt: str = None,          # Custom worker instructions
    permission_mode: str = "default",
    setting_sources: list = ["project"]
)
```

## Safety Notes

Important: The supervisor LLM is not infallible. This is a research/development tool, not a production security solution.
- Supervisor can make mistakes or be misled by clever prompts
- Always review automated decisions in sensitive contexts
- Use restrictive policies by default
- Monitor logs for unexpected behavior
- Test policies thoroughly before autonomous operation
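A restrictive default policy might look like the following. The wording is an example of my own, not a policy shipped with the library:

```python
# Example restrictive policy (illustrative wording, not from the library).
RESTRICTIVE_POLICY = """You are a cautious supervisor. Default to DENY.
- ALLOW only read-only operations (file reads, directory listings)
- DENY shell commands that write, delete, or access the network
- DENY anything touching credentials, .env files, or SSH keys
- For AskUserQuestion, pick the option that changes the least state
"""
```

Pass it as `manager_policy` when constructing `OpenRouterManager`, then loosen individual rules only after observing the supervisor's decisions in logs.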
## License

MIT License - see LICENSE for details.
## Acknowledgments

- Built on Claude Code (Agent SDK) by Anthropic
- Uses OpenRouter for LLM access