Run Claude Code or Codex with full autonomous permissions inside a Docker container.
Includes `cx` for lower token usage.
| Action | Link |
|---|---|
| What is this? | The What |
| Install Guide | Install |
| Run & Config Guide | How to Run |
| Security Overview | Security |
| Miscellaneous | Miscellaneous |
“Ever have a 1-minute task take you 10+ minutes because it's stuck waiting on you approving multiple chained commands?”
— I have, many times.
```sh
$ cd $WORK_FOLDER
$ dclaude
$ dcodex
# Claude Code or Codex with full permissions;
# no yes/no time wasting.
# Limited: can read only the current folder and optional whitelisted ones.
```
Modern coding-harness workflows involve many tool calls, and each one usually requires explicit approval from the user. That takes time, breaks your flow, and demands more multitasking brainpower than simply letting the agent YOLO.

But letting it YOLO is dangerous: you're handing an autonomous system full access to your computer. It's a security nightmare waiting to happen. Ultimately there's a spectrum with max productivity on one side and max security on the other. Here are a few of the ways you can go about this problem:
- I tried this, but the harness still finds something stupid to block on. It also isn't fully secure.
- Better and more secure, but too much work to set up, and costly.
That's what dclaude is here for! :)
This repo ships two simple, thin Docker wrappers:

- `dclaude`: a wrapper on top of Claude Code
- `dcodex`: a wrapper on top of OpenAI's Codex

When run from a folder, dclaude runs the coding CLI inside a Docker container with that folder bind-mounted. Any changes in the container get reflected in the same folder on your computer.
Some features:

- you can configure read-only folders, like `~/Desktop` or `~/Downloads`. The container can only ever read them, never write to them.
- paths are seamless. The container sees the same absolute repo path you see on your computer, so sending the CLI agent an image path like `~/Desktop/image.png` just works.
- auth and caches persist across runs.
- the Docker container comes pre-installed with Python and Node. For more languages, either fork this repo or open a PR.
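Conceptually, the mount layout described above boils down to a handful of `docker run` flags. A minimal sketch, not the launcher's actual invocation (variable names here are illustrative):

```sh
#!/bin/sh
# Illustrative only: the repo is mounted read/write at its real absolute
# path, an extra folder is mounted read-only at the same path, and the
# working directory is set so paths match the host exactly.
REPO="$PWD"
RO_DIR="$HOME/Desktop"

set -- \
  -v "$REPO:$REPO" \
  -v "$RO_DIR:$RO_DIR:ro" \
  -w "$REPO"

# These args would be passed to `docker run`; here we just print them.
printf '%s\n' "$@"
```

Mounting everything at the same absolute path on both sides is what makes host paths pasted into the agent "just work".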
- Docker Desktop or Docker Engine with the `docker` CLI
- a trusted repo
- on macOS, Docker Desktop file sharing enabled for the repo path and each configured home mount
```sh
brew install stanislavkozlovski/tap/dclaude
```

Optionally, configure read-only host directories the container can see:
```sh
DCLAUDE_HOME="$(dirname "$(readlink -f "$(which dclaude)")")"
cp "$DCLAUDE_HOME/scripts/dclaude.yaml.example" "$DCLAUDE_HOME/scripts/dclaude.yaml"
# edit dclaude.yaml to add your read-only mounts
```

Or install manually by cloning the repo:

```sh
git clone https://github.com/stanislavkozlovski/dclaude.git ~/dclaude
```

Add to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.):

```sh
alias dclaude='~/dclaude/dclaude'
alias dcodex='~/dclaude/dcodex'
```

Optionally, configure read-only host directories the container can see:

```sh
cp ~/dclaude/scripts/dclaude.yaml.example ~/dclaude/scripts/dclaude.yaml
# edit dclaude.yaml to add your read-only mounts
```

The container gets read/write access to your current repo and read-only access to any directories you configure in `scripts/dclaude.yaml`. Nothing else from your host is exposed.
From the repo you want to edit, run:

```sh
cd /path/to/repo
dclaude   # or dcodex
```

> **Note:** The very first dcodex/dclaude run can take around 60 seconds while Docker builds the image. Subsequent runs are instant.
If you want GitHub SSH access from inside the container, for example to `git push` from there, launch with `--ssh`:

```sh
cd /path/to/repo
dclaude --ssh   # or dcodex --ssh
```

If you commit in-container but push from the host yourself, you do not need `--ssh`.
If you prefer, calling the launcher by its full path also works, as long as your current shell is already inside the target repo:

```sh
/path/to/dclaude-repo/dclaude
/path/to/dclaude-repo/dcodex
```

The first run builds the shared image automatically from the launcher repo and starts a warm container for the current repo. Later runs `docker exec` into that warm container unless you pass `--reset` or `--rebuild`.
Wrapper options:

- `--rebuild` forces a fresh `docker build` and recreates the warm container
- `--reset` recreates the warm container before launching
- `--stop` removes the warm container for the current repo and exits
- `--update-tool` checks the latest upstream version for this wrapper's CLI, rewrites the pin in the launcher repo, and rebuilds the image after confirmation
- `--yes` skips the confirmation prompt for `--update-tool`
- `--ssh` enables SSH agent forwarding by mounting `/run/host-services/ssh-auth.sock` and `~/.ssh/known_hosts` when available
- `--profile NAME` (Codex only) uses a named profile with a separate `~/.codex-NAME` config directory, giving you isolated auth, state, and skills per profile
- `--list-profiles` (Codex only) lists available Codex profiles
- `--version` prints the installed launcher version without requiring Docker or a git repo
- `--` passes the remaining arguments to the underlying CLI
Examples:

```sh
./dclaude --version
./dcodex -- --help
./dclaude --rebuild
./dcodex --reset
./dcodex --stop
./dcodex --update-tool
./dcodex --update-tool --yes
./dclaude --ssh
./dcodex --ssh
./dcodex --profile magi
./dcodex --profile magi --ssh
./dcodex --list-profiles
```

The launcher reads a single config file at `scripts/dclaude.yaml` in the dclaude repo. If no config is present, no extra host directories are mounted beyond the repo itself and shared caches.
To get started, copy the example and edit it:

```sh
cp scripts/dclaude.yaml.example scripts/dclaude.yaml
```

Config shape:

```yaml
read_only_mounts:
  - ~/Desktop
  - ~/Downloads
```

Each entry is either an absolute path or starts with `~/` (resolved against `$HOME`). Entries are bind-mounted read-only at the same absolute path inside the container.

Use `read_only_mounts: []` to explicitly mount no extra directories.
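As an illustration of that resolution rule, a `~/`-prefixed entry can be expanded against `$HOME` before mounting. A hypothetical helper; the launcher's actual parsing may differ:

```sh
#!/bin/sh
# Hypothetical helper: expand a leading "~/" to $HOME, and pass
# absolute paths through unchanged.
expand_entry() {
  case "$1" in
    "~/"*) printf '%s\n' "$HOME/${1#\~/}" ;;
    *)     printf '%s\n' "$1" ;;
  esac
}

expand_entry "~/Downloads"   # e.g. /home/you/Downloads
expand_entry "/opt/data"     # absolute paths pass through unchanged
```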
Configured mounts that overlap `~/.ssh` or `/run/host-services` are rejected. SSH access is only exposed through `--ssh`.
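That rejection rule amounts to a path-prefix check. A minimal sketch under that assumption (the function name is hypothetical and the launcher's real validation may differ):

```sh
#!/bin/sh
# Returns 0 (forbidden) when a configured mount path sits at or under
# ~/.ssh or /run/host-services, 1 (allowed) otherwise.
is_forbidden_mount() {
  case "$1" in
    "$HOME/.ssh"|"$HOME/.ssh/"*)             return 0 ;;
    /run/host-services|/run/host-services/*) return 0 ;;
    *)                                       return 1 ;;
  esac
}

is_forbidden_mount "$HOME/.ssh/id_ed25519" && echo "rejected"
is_forbidden_mount "$HOME/Desktop"         || echo "allowed"
```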
SSH is disabled by default.
Pass --ssh to enable GitHub SSH access inside the container.
--ssh is the only supported SSH integration path. It enables forwarded SSH agent access without copying raw private keys into the image.
When enabled, the wrappers mount:

- `/run/host-services/ssh-auth.sock`
- `~/.ssh/known_hosts`, read-only, when present

They do not mount the full `~/.ssh` directory, and they never copy private key files into the container. Configured read-only mounts that overlap `~/.ssh` or `/run/host-services` are rejected.
Recommended host checks before using `--ssh`:

```sh
ssh-add -L
ssh -T git@github.com
```

If those host checks fail, `--ssh` inside the container will fail too.
Recommended container checks after launching with `--ssh`:

```sh
echo "$SSH_AUTH_SOCK"
ssh-add -L
ssh -T git@github.com
```

If you want a narrower blast radius, use a dedicated GitHub key loaded into a dedicated `ssh-agent`.
This setup is for trusted repos. Anything reachable inside the container is reachable by the agent.
Deliberately omitted:

- `docker.sock`
- `--privileged`
- full home-directory mounts
- configured mounts that overlap `~/.ssh` or `/run/host-services`
- copied SSH key files
- API-key auth shortcuts
The Docker build context also ignores any repo-local .ssh directory.
Verify configured read-only mounts by trying to write to them inside the container:

```sh
touch ~/Desktop/test   # should fail if ~/Desktop is in your dclaude.yaml
```

Writing in the repo should still succeed.
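If you want to script that verification, a small helper can probe a path and report its writability. A hypothetical sketch; adjust the probed paths to your own mounts:

```sh
#!/bin/sh
# Try to create and remove a probe file in the given directory,
# printing "writable" or "read-only" accordingly.
check_writable() {
  probe="$1/.dclaude-write-probe.$$"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "writable"
  else
    echo "read-only"
  fi
}

check_writable "$PWD"            # the repo mount should be writable
check_writable "$HOME/Desktop"   # read-only if configured in dclaude.yaml
```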
Auth state is mounted from the host, not copied.
Every bind mount is listed below. If the access column says read/write, edits made inside the container modify the real host path and persist after the container stops.
| Host path | Container path | Scope | Access in container | Host persistence | Purpose |
|---|---|---|---|---|---|
| target repo root | same absolute path | all launches | read/write | yes | live working tree |
| `scripts/container-launch.sh` in this repo | `/usr/local/bin/dclaude-container-launch` | all launches | read-only | no | bootstrap entrypoint |
| `~/.cache/dclaude/cx` | `~/.cache/cx` | all launches | read/write | yes | cx cache |
| `~/.cache/pip` | `~/.cache/pip` | all launches | read/write | yes | pip cache |
| `~/.cache/uv` | `~/.cache/uv` | all launches | read/write | yes | uv cache |
| configured read-only mounts from `scripts/dclaude.yaml` | same absolute path | all launches | read-only | no | extra host files exposed to the container |
| `~/.claude` | `~/.claude` | Claude only | read/write | yes | Claude auth and state |
| `~/.claude.json` | `~/.claude.json` | Claude only | read/write | yes | Claude auth file |
| `~/.config/claude-code` | `~/.config/claude-code` | Claude only | read/write | yes | Claude CLI config |
| `~/.codex` (or `~/.codex-NAME` with `--profile`) | same path | Codex only | read/write | yes | Codex auth, state, and skills |
| `/run/host-services/ssh-auth.sock` | same absolute path | only with `--ssh` | SSH agent socket passthrough | host agent is used directly | lets container processes authenticate through the host agent without copying keys |
| `~/.ssh/known_hosts` | `~/.ssh/known_hosts` | only with `--ssh`, when present | read-only | no | host key verification |
Anything not listed in the table above is not bind-mounted from the host. Writes to those paths stay container-local and disappear when the warm container is removed or reset.
The launcher may also seed missing guidance into the mounted home directories:

- `~/.claude/CX.md`
- `~/.claude/CLAUDE.md`
- `~/.codex/AGENTS.md`
- `~/.codex/skills/dclaude-cx-navigation`

When using `--profile NAME`, the profile directory `~/.codex-NAME` is used instead of `~/.codex`. On first use, `AGENTS.md` and skills are copied from the default `~/.codex` if they exist there.
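That first-use copy could look roughly like the following. This is a hypothetical sketch of the behavior just described, not the launcher's actual code:

```sh
#!/bin/sh
# Seed a named Codex profile directory from the default ~/.codex,
# copying AGENTS.md and skills only when the profile lacks them.
seed_profile() {
  src="$HOME/.codex"
  dst="$HOME/.codex-$1"
  mkdir -p "$dst"
  if [ -f "$src/AGENTS.md" ] && [ ! -f "$dst/AGENTS.md" ]; then
    cp "$src/AGENTS.md" "$dst/AGENTS.md"
  fi
  if [ -d "$src/skills" ] && [ ! -d "$dst/skills" ]; then
    cp -R "$src/skills" "$dst/skills"
  fi
}

# Demo in a throwaway HOME so the sketch doesn't touch real config:
HOME=$(mktemp -d)
mkdir -p "$HOME/.codex" && echo "guidance" > "$HOME/.codex/AGENTS.md"
seed_profile magi
ls "$HOME/.codex-magi"   # → AGENTS.md
```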
Interactive login happens through the official CLIs inside the container. If you are not logged in yet, run the wrapper and complete the normal login flow there. The mounted state keeps you logged in across container restarts.
API-key auth is intentionally NOT supported for either tool.
Primary docs live under `docs/`.
Every launch does the following:

- resolves the launcher repo from the wrapper script path
- resolves the target repo from your current shell location
- ensures a warm per-tool container exists for that target repo
- bind-mounts the launcher bootstrap script into the container, so launcher script changes take effect on warm-container recreation
- bind-mounts the repo read/write at its real host path
- bind-mounts configured read-only directories at the same path (none by default; see `scripts/dclaude.yaml.example`)
- bind-mounts a persistent container-specific `cx` cache
- runs as the current host UID/GID
- sets `HOME` to the host home path
- `docker exec`s into the current host working directory inside the warm container
- creates `/workspace` as a compatibility alias that resolves back to the repo root

This means:

- `pwd` inside the container matches the target repo path on the host
- the Docker build context comes from the launcher repo, not the target repo
- the default fast path is warm-container reuse, not a fresh `docker run --rm`
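One way to implement that warm-container fast path is a deterministic per-repo container name, so every launch from the same repo finds the same container to `docker exec` into. A toy sketch; the real launcher's naming scheme is likely different:

```sh
#!/bin/sh
# Derive a stable container name from the repo path. Launches from the
# same repo compute the same name, so an existing warm container can be
# found (e.g. via `docker container inspect`) and reused instead of
# starting a fresh one.
warm_name() {
  printf 'dclaude-warm-%s\n' "$(printf '%s' "$1" | cksum | cut -d' ' -f1)"
}

warm_name /path/to/repo
warm_name /path/to/repo   # same path, same name
```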
The shared image includes ind-igo/cx and installs it into your claude/codex skills, so your agent automatically uses `cx`.

Why it exists: agents burn a ton of tokens on reads; `cx` reduces that by ~60%.
Persistent `cx` cache:

- host path: `~/.cache/dclaude/cx`
- container path: `~/.cache/cx`
Agent `SKILL.md` integration:

- if `~/.claude/CX.md` is missing, the launcher writes it from `cx skill`
- if `~/.claude/CLAUDE.md` is missing, the launcher creates it with `@CX.md`
- if `~/.claude/CLAUDE.md` exists but neither references `@CX.md` nor already contains `cx` guidance, the launcher appends `@CX.md`
- if `~/.codex/AGENTS.md` is missing `cx` guidance, the launcher writes or appends it from `cx skill` (same for `~/.codex-NAME` with profiles)
- if `~/.codex/skills/dclaude-cx-navigation` is missing, the launcher seeds it from `cx skill` (same for profiles)
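The `CLAUDE.md` rules above can be sketched as a small idempotent function. This is hypothetical; the launcher's real logic may differ in details such as how it detects existing `cx` guidance:

```sh
#!/bin/sh
# Create CLAUDE.md with an @CX.md reference if missing, or append the
# reference if the file exists without any cx guidance. Running it
# twice changes nothing.
seed_claude_md() {
  dir="$1"
  f="$dir/CLAUDE.md"
  mkdir -p "$dir"
  if [ ! -f "$f" ]; then
    printf '@CX.md\n' > "$f"
  elif ! grep -q '@CX.md' "$f" && ! grep -q 'cx' "$f"; then
    printf '\n@CX.md\n' >> "$f"
  fi
}

# Demo in a throwaway directory:
d=$(mktemp -d)
seed_claude_md "$d"
seed_claude_md "$d"              # second run is a no-op
grep -c '@CX.md' "$d/CLAUDE.md"  # → 1
```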
By default the wrappers build and reuse `dclaude:<version>`, where `<version>` comes from `docs/VERSION`. `DCLAUDE_IMAGE_NAME` still overrides that default when you need a custom tag. `--rebuild` also recreates the warm container so the new image is actually used.
Rebuild explicitly when you want updated pinned tool versions or image changes for the current release tag:

```sh
docker build -t "dclaude:$(cat docs/VERSION)" .
./dclaude --rebuild
```

The image currently pins the installed CLI versions:

- `@anthropic-ai/claude-code@2.1.119`
- `@openai/codex@0.125.0`
- `cx 0.6.5`
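Inside the Dockerfile, the npm pins boil down to ordinary global installs. An illustrative fragment only; the actual Dockerfile's layout, and how `cx` itself is installed, may differ:

```dockerfile
# Illustrative pin lines matching the versions listed above.
RUN npm install -g \
    @anthropic-ai/claude-code@2.1.119 \
    @openai/codex@0.125.0
```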
The Codex full-access launcher was validated against codex-cli 0.125.0, which supports `--dangerously-bypass-approvals-and-sandbox`.
Pinned upstream tool versions live in `Dockerfile`, which controls the globally installed CLIs inside the image. Check or refresh them manually with:

```sh
python3 scripts/sync_tool_versions.py
python3 scripts/sync_tool_versions.py --update
```

Wrapper-driven refresh:

```sh
dcodex --update-tool
dclaude --update-tool
```

`--update-tool` checks the latest upstream version for the wrapper's own CLI, shows a confirmation prompt, rewrites the launcher pin and docs, rebuilds the image, and lets warm containers recreate automatically on their next launch. Use `--yes` to skip the confirmation prompt for scripted or non-interactive runs.
A scheduled GitHub Actions workflow runs the same updater, validates that the image still builds, and opens a PR when upstream @anthropic-ai/claude-code, @openai/codex, or cx releases move.
Release shape:

- `docs/VERSION` is the source of truth for the launcher version
- CI on pull requests and `main` runs `shellcheck`, `bash -n`, `docker build`, `--help`, and `--version`
- a scheduled tool-update workflow refreshes pinned upstream tool versions and opens a PR when updates are available
- a successful non-bot push to `main` bumps the patch version, commits `chore: release vX.Y.Z`, and pushes the matching `vX.Y.Z` tag
- the tag workflow creates `dclaude-vX.Y.Z.tar.gz`, publishes the GitHub Release, and updates `stanislavkozlovski/homebrew-tap` when `HOMEBREW_TAP_TOKEN` is configured
Homebrew installs both wrappers from the same release tarball and exposes both dclaude and dcodex.


