English | Español (Spanish) | 中文 (Chinese) | العربية (Arabic) | Português (Portuguese) | Bahasa Indonesia (Indonesian) | Français (French) | 日本語 (Japanese) | Русский (Russian) | Deutsch (German) | עברית (Hebrew) | יידיש (Yiddish)

Bernstein

"To achieve great things, two things are needed: a plan and not quite enough time." — Leonard Bernstein

Orchestrate any AI coding agent. Any model. One command.

Bernstein in action: parallel AI agents orchestrated in real time

CI · PyPI · Python 3.12+ · License · MseeP.ai

Website · Documentation · Install · First run · Glossary · Limitations

As featured in
Bernstein - Listed on CodeTrendy


What is this? You tell it what you want built. It splits the work across several AI coding agents (Claude Code, Codex, Gemini CLI, and 37 more), runs the tests, and merges the code that actually passes. You come back to working code.

Forward-deployed engineering, on a swarm. Drop Bernstein into a client repo and you get a multi-agent crew with file-based state, per-agent credential scoping, and an HMAC-signed audit trail — running on whichever CLI agents the client already trusts.

Install and run

One line on macOS / Linux:

curl -fsSL https://bernstein.run/install.sh | sh

Windows (PowerShell):

irm https://bernstein.run/install.ps1 | iex

Then point it at your project and set a goal:

cd your-project
bernstein init                          # creates a .sdd/ workspace
bernstein -g "Add JWT auth with refresh tokens, tests, and API docs"

What you see while it runs:

$ bernstein -g "Add JWT auth"
[manager] decomposed into 4 tasks
[agent-1] claude-sonnet: src/auth/middleware.py  (done, 2m 14s)
[agent-2] codex:         tests/test_auth.py      (done, 1m 58s)
[verify]  all gates pass. merging to main.

Why it's different

Most agent orchestrators use an LLM to decide who does what. That's non-deterministic and burns tokens on scheduling instead of code. Bernstein does one LLM call to break down your goal, then the rest — running agents in parallel, isolating their git branches, running tests, routing retries — is plain Python. Every run is reproducible. Every step is logged and replayable.

No framework to learn. No vendor lock-in. Swap any agent, any model, any provider.

Other install options: pipx install bernstein, pip install bernstein, uv tool install bernstein, brew tap chernistry/tap && brew install bernstein, dnf copr, npx bernstein-orchestrator. See install options.

Use cases

  • Forward-deployed engineering — drop the swarm onto a client repo when you arrive, take it with you when you leave.
  • Self-evolving projects — point Bernstein at its own repo and let it execute the backlog (this codebase is one).
  • CI fleets — run a swarm of agents in parallel on PRs, with per-agent credential scoping and signed audit trail.

Supported agents

Bernstein auto-discovers installed CLI agents. Mix them in the same run. Cheap local models for boilerplate, heavier cloud models for architecture.

40 CLI agent adapters: 37 third-party wrappers, 2 leaf-node delegators (Composio, Ralphex), plus a generic wrapper for anything with --prompt.

| Agent | Models | Install |
|---|---|---|
| Claude Code | Opus 4, Sonnet 4.6, Haiku 4.5 | npm install -g @anthropic-ai/claude-code |
| Codex CLI | GPT-5, GPT-5 mini | npm install -g @openai/codex |
| OpenAI Agents SDK v2 | GPT-5, GPT-5 mini, o4 | pip install 'bernstein[openai]' |
| GitHub Copilot CLI | Copilot-managed (GPT-5, Sonnet 4.6) | npm install -g @github/copilot |
| Gemini CLI | Gemini 2.5 Pro, Gemini Flash | npm install -g @google/gemini-cli |
| Cursor | Sonnet 4.6, Opus 4, GPT-5 | Cursor app |
| Aider | Any OpenAI/Anthropic-compatible | pip install aider-chat |
| Amp | Amp-managed | npm install -g @sourcegraph/amp |
| Cody | Sourcegraph-hosted | npm install -g @sourcegraph/cody |
| Continue | Any OpenAI/Anthropic-compatible | npm install -g @continuedev/cli (binary: cn) |
| Goose | Any provider Goose supports | See Goose docs |
| IaC (Terraform/Pulumi) | Any provider the base agent uses | Built-in |
| Kilo | Kilo-hosted | See Kilo docs |
| Kiro | Kiro-hosted | See Kiro docs |
| Ollama + Aider | Local models (offline) | brew install ollama |
| OpenCode | Any provider OpenCode supports | See OpenCode docs |
| Qwen | Qwen Code models | npm install -g @qwen-code/qwen-code |
| Cloudflare Agents | Workers AI models | bernstein cloud login |
| OpenHands | Any LiteLLM-supported (Anthropic, OpenAI, ...) | uv tool install openhands --python 3.12 |
| Open Interpreter | Any (LiteLLM-backed) | pip install open-interpreter |
| gptme | Anthropic, OpenAI, OpenRouter | pipx install gptme |
| Plandex | Plandex Cloud or self-hosted models | curl -sL https://plandex.ai/install.sh \| bash |
| AIChat | OpenAI, Anthropic, OpenRouter, Groq, Gemini | cargo install aichat |
| Letta Code | Letta-routed (Anthropic, OpenAI) | npm install -g @letta-ai/letta-code |
| Generic | Any CLI with --prompt | Built-in |
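
For example, mixing a heavier cloud agent with a cheap local one takes two installs and one run (commands from the table above; the goal string and agent cap are illustrative):

npm install -g @anthropic-ai/claude-code        # cloud model for architecture-level tasks
brew install ollama && pip install aider-chat   # local models for boilerplate
bernstein -g "Refactor src/payments and add tests" --max-agents 4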

Orchestrator delegation (leaf-node)

A separate, smaller class of adapters that wrap other CLI orchestrators as if they were single agents. Bernstein hands the wrapped tool a prompt or plan and only sees the final exit code — sub-agent costs and quality gates inside the wrapped orchestrator are not visible to Bernstein. Useful when you want to drop an existing workflow built on one of these tools into a step of a larger Bernstein plan.

| Orchestrator | Wrapped as | Install |
|---|---|---|
| Composio Agent Orchestrator (@aoagents/ao) | composio | npm install -g @aoagents/ao |
| umputun/ralphex | ralphex | go install github.com/umputun/ralphex/cmd/ralphex@latest |

Any adapter can also serve as the internal scheduler LLM, so the entire stack can run without depending on any single provider:

internal_llm_provider: gemini            # or qwen, ollama, codex, goose, ...
internal_llm_model: gemini-2.5-pro

Tip

Run bernstein --headless for CI pipelines. No TUI, structured JSON output, non-zero exit on failure.
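
For example, a minimal GitHub Actions job could call it like this (a hedged sketch: the workflow layout and the secret name are assumptions; only --headless and -g are Bernstein's documented flags):

# .github/workflows/bernstein.yml (illustrative sketch, not a shipped workflow)
on: workflow_dispatch
jobs:
  agents:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx install bernstein
      - run: bernstein --headless -g "Fix the flaky tests in tests/"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}  # assumed provider key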

Quick start

cd your-project
bernstein init                    # creates .sdd/ workspace + bernstein.yaml
bernstein -g "Add rate limiting"  # agents spawn, work in parallel, verify, exit
bernstein live                    # watch progress in the TUI dashboard
bernstein stop                    # graceful shutdown with drain

For multi-stage projects, define a YAML plan:

bernstein run plan.yaml           # skips LLM planning, goes straight to execution
bernstein run --dry-run plan.yaml # preview tasks and estimated cost
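
The plan schema isn't reproduced in this README; purely as a sketch, a multi-stage plan might be shaped like this (every key below is hypothetical; see the documentation for the real format):

# plan.yaml (all keys hypothetical; consult the docs for the actual schema)
goal: "Add rate limiting"
tasks:
  - name: middleware
    role: backend
    files: [src/middleware/ratelimit.py]
  - name: tests
    role: test
    files: [tests/test_ratelimit.py]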

How it works

  1. Decompose. The manager breaks your goal into tasks with roles, owned files, and completion signals.
  2. Spawn. Agents start in isolated git worktrees, one per task. Main branch stays clean.
  3. Verify. The janitor checks concrete signals: tests pass, files exist, lint clean, types correct.
  4. Merge. Verified work lands in main. Failed tasks get retried or routed to a different model.

The orchestrator is a Python scheduler, not an LLM. Scheduling decisions are deterministic, auditable, and reproducible.
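
The spawn step builds on plain git worktrees; conceptually, the isolation amounts to this (the underlying git commands with illustrative names, not Bernstein's CLI):

git worktree add ../task-auth -b agent/task-auth   # one working tree and branch per task
# the agent edits and commits inside ../task-auth; main stays clean
git worktree remove ../task-auth                   # cleaned up after the merge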

Cloud execution (Cloudflare)

Bernstein can run agents on Cloudflare Workers instead of locally. The bernstein cloud CLI handles deployment and lifecycle.

  • Workers. Agent execution on Cloudflare's edge, with Durable Workflows for multi-step tasks and automatic retry.
  • V8 sandbox isolation. Each agent runs in its own isolate, no container overhead.
  • R2 workspace sync. Local worktree state syncs to R2 object storage so cloud agents see the same files.
  • Workers AI (experimental). Use Cloudflare-hosted models as the LLM provider, no external API keys required.
  • D1 analytics. Task metrics and cost data stored in D1 for querying.
  • Browser rendering. Headless Chrome on Workers for agents that need to inspect web output.
  • MCP remote transport. Expose or consume MCP servers over Cloudflare's network.

bernstein cloud login      # authenticate with Bernstein Cloud
bernstein cloud deploy     # push agent workers
bernstein cloud run plan.yaml  # execute a plan on Cloudflare

A bernstein cloud init scaffold for wrangler.toml and bindings is planned.

Capabilities

Core orchestration. Parallel execution, git worktree isolation, janitor verification, quality gates (lint, types, PII scan), cross-model code review, circuit breaker for misbehaving agents, token growth monitoring with auto-intervention.

Intelligence. Contextual bandit router for model/effort selection. Knowledge graph for codebase impact analysis. Semantic caching saves tokens on repeated patterns. Cost anomaly detection (burn-rate alerts). Behavior anomaly detection with Z-score flagging.

Sandboxing. Pluggable SandboxBackend protocol — run agents in local git worktrees (default), Docker containers, E2B Firecracker microVMs, or Modal serverless containers (with optional GPU). Plugin authors can register custom backends through the bernstein.sandbox_backends entry-point group. Inspect installed backends with bernstein agents sandbox-backends.
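
Registration goes through standard Python entry points; a sketch of a third-party plugin's pyproject.toml (the group name comes from above; the package and class paths are hypothetical):

# pyproject.toml of a hypothetical plugin package
[project.entry-points."bernstein.sandbox_backends"]
firecracker_local = "my_plugin.backends:FirecrackerLocalBackend"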

Artifact storage. .sdd/ state can stream to pluggable ArtifactSink backends: local filesystem (default), S3, Google Cloud Storage, Azure Blob, or Cloudflare R2. BufferedSink keeps the WAL crash-safety contract by writing locally with fsync first and mirroring to the remote asynchronously.

Skill packs. Progressive-disclosure skills (OpenAI Agents SDK pattern): only a compact skill index ships in every spawn's system prompt, agents pull full bodies via the load_skill MCP tool on demand. 17 built-in role packs plus third-party bernstein.skill_sources entry-points.
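
The index is also inspectable from the CLI, using the skills commands listed under Monitoring below (<name> is a placeholder):

bernstein skills list           # the compact index agents see at spawn
bernstein skills show <name>    # full skill body plus its references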

Controls. HMAC-chained audit logs, policy engine, PII output gating, WAL-backed crash recovery (experimental multi-worker safety), OAuth 2.0 PKCE. SSO/SAML/OIDC support is in progress.

Observability. Prometheus /metrics, OTel exporter presets, Grafana dashboards. Per-model cost tracking (bernstein cost). Terminal TUI and web dashboard. Agent process visibility in ps.

Ecosystem. MCP server mode, A2A protocol support, GitHub App integration, pluggy-based plugin system, multi-repo workspaces, cluster mode for distributed execution, self-evolution via --evolve (experimental).

Full feature matrix: FEATURE_MATRIX.md · Recent features: What's New

What's new in v1.9

ACP bridge. bernstein acp serve --stdio exposes Bernstein to any editor that speaks the Agent Communication Protocol (Zed, etc.). No plugin code needed on the editor side.

Autonomous CI repair. bernstein autofix watches open Bernstein PRs and, when CI turns red, spawns a fixer agent automatically. Once green, it pushes the fix and re-requests review.

Credential vault. bernstein connect <provider> writes API keys to the OS keychain; bernstein creds lists and rotates them. Agents inherit scoped credentials without touching environment variables.

Preview tunnels. bernstein preview start boots a sandboxed dev server and prints a public URL. Useful for sharing a running branch with a reviewer without deploying to staging.

Full changelog: docs/whats-new.md

Operator commands

Commands that eliminate the glue code most teams end up writing around their runs.

| Command | What it does |
|---|---|
| bernstein pr | Auto-creates a GitHub PR from a completed session; body carries the janitor's gate results and token/USD cost breakdown. |
| bernstein from-ticket <url> | Imports a Linear / GitHub Issues / Jira ticket as a Bernstein task. Label-based role + scope inference. Supports --dry-run and --run. |
| bernstein ticket import <url> | Alias / group form of from-ticket for scripting. |
| bernstein remote | SSH sandbox backend. remote test <host>, remote run <host> <path>, remote forget <host>. ControlMaster socket reuse for fast repeat calls. |
| bernstein hooks | Lifecycle hooks for pre_task, post_task, pre_merge, post_merge, pre_spawn, post_spawn (shell scripts or pluggy @hookimpls). hooks list, hooks run <event>, hooks check. |
| bernstein chat serve --platform=telegram\|discord\|slack | Drive runs from chat with /run, /status, /approve, /reject, /switch, /stop. |
| bernstein approve-tool / bernstein reject-tool | Interactive mid-run tool-call approval. --latest, --id, --always. |
| bernstein tunnel start <port> [--provider auto\|cloudflared\|ngrok\|bore\|tailscale] | One wrapper around four tunnel providers. Also tunnel list, tunnel stop <name>\|--all. ControlMaster-style process reuse. |
| bernstein daemon install [--user\|--system] [--command="..."] [--env KEY=VAL]... | Installs a systemd (Linux) or launchd (macOS) unit for auto-start. Also daemon start/stop/restart/status/uninstall. |
| bernstein connect <provider> / bernstein creds | Stores and rotates API credentials in the OS keychain. Agents inherit scoped keys per run. |
| bernstein autofix | Daemon that monitors open Bernstein PRs; spawns a fixer agent when CI fails and pushes the repair automatically. |
| bernstein preview start | Starts a sandboxed dev server for the current branch and prints a shareable public tunnel URL. |
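
Chained together, these commands cover a ticket-to-merged-PR loop (a sketch composed from the table above; the issue URL is a placeholder):

bernstein from-ticket https://github.com/your-org/your-repo/issues/123 --run
bernstein pr        # PR body carries the gate results and cost breakdown
bernstein autofix   # respawns a fixer whenever CI turns red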

Retrieval & caching: what's actually under the hood

Bernstein deliberately uses no neural embeddings, no vector databases, and no external embedding APIs. There are two retrieval/caching layers, both keyword/lexical:

  • Codebase RAG (core/knowledge/rag.py) — SQLite FTS5 with BM25 ranking and AST-aware chunking for Python files. Built incrementally on file mtime; used to enrich agent task context within token budgets.
  • Semantic cache (core/knowledge/semantic_cache.py) — despite the name, fuzzy matching is done with TF (term-frequency) cosine similarity over word counts, not learned embeddings. It deduplicates near-identical LLM planning and agent-output requests so we don't re-spawn agents for the same goal.

If you need real semantic retrieval (vector DB, neural embeddings), wire it yourself via the retrieval role/skill in templates/; nothing in core performs vector search.
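
The FTS5-plus-BM25 layer is plain SQLite; this self-contained snippet illustrates the ranking primitive (not Bernstein's actual schema):

sqlite3 :memory: <<'SQL'
CREATE VIRTUAL TABLE chunks USING fts5(path, body);
INSERT INTO chunks VALUES ('src/auth/middleware.py', 'verify jwt token and refresh');
INSERT INTO chunks VALUES ('src/http/routes.py', 'register http routes for the app');
-- bm25() scores are lower for better matches, so ascending order ranks best first
SELECT path FROM chunks WHERE chunks MATCH 'jwt' ORDER BY bm25(chunks);
SQL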

How it compares

| Feature | Bernstein | CrewAI | AutoGen¹ | LangGraph |
|---|---|---|---|---|
| Orchestrator | Deterministic code | LLM-driven (+ code Flows) | LLM-driven | Graph + LLM |
| Works with | Any CLI agent (40 adapters) | Python SDK classes | Python agents | LangChain nodes |
| Git isolation | Worktrees per agent | No | No | No |
| Pluggable sandboxes | Worktree, Docker, E2B, Modal | No | No | No |
| Verification | Janitor + quality gates | Guardrails + Pydantic output | Termination conditions | Conditional edges |
| Cost tracking | Built-in | usage_metrics | RequestUsage | Via LangSmith |
| State model | File-based (.sdd/) | In-memory + SQLite checkpoint | In-memory | Checkpointer |
| Remote artifact sinks | S3, GCS, Azure Blob, R2 | No | No | No |
| Self-evolution | Built-in (experimental) | No | No | No |
| Declarative plans (YAML) | Yes | Yes (agents.yaml, tasks.yaml) | No | Partial (langgraph.json) |
| Model routing per task | Yes | Per-agent LLM | Per-agent model_client | Per-node (manual) |
| MCP support | Yes (client + server) | Yes | Yes (client + workbench) | Yes (client + server) |
| Agent-to-agent chat | Bulletin board | Yes (Crew process) | Yes (group chat) | Yes (supervisor, swarm) |
| Web UI | TUI + web dashboard | CrewAI AMP | AutoGen Studio | LangGraph Studio + LangSmith |
| Cloud hosted option | Yes (Cloudflare) | Yes (CrewAI AMP) | No | Yes (LangGraph Cloud) |
| Built-in RAG/retrieval | Yes (codebase FTS5 + BM25) | crewai_tools | autogen_ext retrievers | Via LangChain |

Last verified: 2026-04-19. See full comparison pages for detailed feature matrices.

The table above compares Bernstein against LLM-orchestration frameworks (they orchestrate LLM calls). The table below covers the closer category — other tools that orchestrate CLI coding agents:

| Feature | Bernstein | awslabs/cli-agent-orchestrator | ComposioHQ/agent-orchestrator | emdash | umputun/ralphex |
|---|---|---|---|---|---|
| Shape | Python CLI + library + MCP server | Python CLI + tmux sessions + web UI | TypeScript CLI + local dashboard | Electron desktop app | Go CLI |
| Primary language | Python | Python | TypeScript | TypeScript | Go |
| Install | pipx install bernstein | uv tool install cli-agent-orchestrator | npm install -g @aoagents/ao | .dmg / .msi / .AppImage | go install / single binary |
| Agent adapters | 40 | 5 (Kiro, Claude Code, Codex, Gemini, Kimi) | 3 (Claude Code, Codex, Aider) | 24 | 1 (Claude Code only) |
| Parallel multi-agent execution | Yes | Yes (tmux session per agent) | Yes | Yes | No (single sequential session) |
| Git worktree per agent | Yes | No (planned, #100) | Yes | Yes | Optional --worktree flag |
| MCP server mode (exposes self as MCP) | Yes (stdio + HTTP/SSE) | Yes (inter-agent comms) | No | No | No |
| Coordinator | Deterministic Python scheduler | Hierarchical LLM supervisor | LLM-driven | Not documented | Linear plan executor |
| HMAC-chained audit replay | Yes | No | No | No | No |
| Cross-model verifier / quality gates | Yes (multi-stage) | No | No | No | Multi-phase review (Claude only) |
| Autonomous CI-fix / PR flow | Yes (bernstein autofix) | No | Yes | No | No |
| Visual dashboard | TUI + web | Web UI + tmux | Web | Desktop app | Web (--serve) |
| Notification sinks | Telegram/Slack/Discord/Email/Webhook/Shell | No | No | Telegram / Email / Slack / Webhook | |
| Backing | Solo OSS | AWS Labs | Funded (Composio.dev) | YC W26 | Solo OSS |
| License | Apache 2.0 | Apache 2.0 | MIT | Apache 2.0 | MIT |
Bernstein's wedge in this category: Python-native, MCP-server-first, widest adapter coverage, true multi-agent parallelism, and a deterministic scheduler with no LLM in the coordination loop. If you want AWS-aligned tmux-session isolation with a hierarchical LLM supervisor, AWS Labs' cao is a closer fit; if your stack is TypeScript and you want a product with a dashboard, Composio's @aoagents/ao is a better fit; if you want a polished desktop ADE, pick emdash; if you only use Claude Code and want a single Go binary that walks a plan top to bottom, pick ralphex. If you want a primitive that imports into Python, exposes itself over MCP to any client, runs many agents in parallel, and covers the full agent breadth (including Qwen, Goose, Ollama, OpenAI Agents SDK, Cloudflare Agents, and more) — Bernstein.

What people use it for

These are real workflow patterns from Bernstein's own docs, examples, and project surface — not invented customer quotes.

  • Parallel test generation — fan out across untested modules with bernstein -g "Generate unit tests for untested modules in src/" --max-agents 5.
  • CI failure repair — watch open PRs and dispatch scoped fixers with bernstein autofix start --repo your-org/your-repo --foreground.
  • PR review follow-up — turn review comments into tracked fix tasks with bernstein review-responder start --repo your-org/your-repo --foreground.
  • Codebase modernization — run wide refactors like bernstein -g "Migrate callback-based modules in src/ to async/await and update tests" --max-agents 8.
  • Ticket-to-run workflows — import GitHub, Jira, or Linear work directly with bernstein from-ticket https://github.com/your-org/your-repo/issues/123 --run.
  • API-change safety checks — catch downstream breakage before merge with bernstein dep-impact --base main.

See Who Uses Bernstein for the longer version with command examples and notes on when each workflow fits.

Monitoring

bernstein live       # TUI dashboard
bernstein dashboard  # web dashboard
bernstein status     # task summary
bernstein ps         # running agents
bernstein cost       # spend by model/task
bernstein doctor     # pre-flight checks
bernstein recap      # post-run summary
bernstein trace <ID> # agent decision trace
bernstein run-changelog --hours 48  # changelog from agent-produced diffs
bernstein explain <cmd>  # detailed help with examples
bernstein dry-run    # preview tasks without executing
bernstein dep-impact # API breakage + downstream caller impact
bernstein aliases    # show command shortcuts
bernstein config-path    # show config file locations
bernstein init-wizard    # interactive project setup
bernstein debug-bundle   # collect logs, config, and state for bug reports
bernstein skills list    # discoverable skill packs (progressive disclosure)
bernstein skills show <name>  # print a skill body with its references
bernstein fingerprint build --corpus-dir ~/oss-corpus  # build local similarity index
bernstein fingerprint check src/foo.py                 # check generated code against the index

Install

| Method | Command |
|---|---|
| One-liner (macOS / Linux) | curl -fsSL https://bernstein.run/install.sh \| sh |
| One-liner (Windows) | irm https://bernstein.run/install.ps1 \| iex |
| pip | pip install bernstein |
| pipx | pipx install bernstein |
| uv | uv tool install bernstein |
| Homebrew | brew tap chernistry/tap && brew install bernstein |
| Fedora / RHEL | sudo dnf copr enable alexchernysh/bernstein && sudo dnf install bernstein |
| npm (wrapper) | npx bernstein-orchestrator |

The one-liner scripts check for Python 3.12+, bootstrap pipx when it's missing, fix PATH for the current session, and install (or upgrade) bernstein. They handle brew-managed macOS environments and the Windows py -3 launcher fallback. Script sources: install.sh · install.ps1.

Optional extras

Provider SDKs are optional so the base install stays lean. Pick what you need:

| Extra | Enables |
|---|---|
| bernstein[openai] | OpenAI Agents SDK v2 adapter (openai_agents) |
| bernstein[docker] | Docker sandbox backend |
| bernstein[e2b] | E2B microVM sandbox backend (needs E2B_API_KEY) |
| bernstein[modal] | Modal sandbox backend, optional GPU (needs MODAL_TOKEN_ID / MODAL_TOKEN_SECRET) |
| bernstein[s3] | S3 artifact sink (via boto3) |
| bernstein[gcs] | Google Cloud Storage artifact sink |
| bernstein[azure] | Azure Blob artifact sink |
| bernstein[r2] | Cloudflare R2 artifact sink (S3-compatible boto3) |
| bernstein[grpc] | gRPC bridge |
| bernstein[k8s] | Kubernetes integrations |

Combine extras with brackets, e.g. pip install 'bernstein[openai,docker,s3]'.

Editor extensions: VS Marketplace · Open VSX

Contributing

PRs welcome. See CONTRIBUTING.md for setup and code style.

Support

If Bernstein saves you time: GitHub Sponsors

Contact: forte@bernstein.run

Featured in

Curated lists, newsletters, and peer projects that picked up Bernstein:

More awesome lists & community curation
Cited as prior art by peer projects

Star History

Star History Chart

License

Apache License 2.0


Made with love by Alex Chernysh · GitHub · bernstein.run

Footnotes

  1. AutoGen is in maintenance mode; its successor is Microsoft Agent Framework 1.0.