OpenClaude Advanced Setup

This guide is for users who want source builds, Bun workflows, provider profiles, diagnostics, or more control over runtime behavior.

Install Options

Option A: npm

npm install -g @gitlawb/openclaude

Option B: From source with Bun

Use Bun 1.3.13 or newer for source builds on Windows. Older Bun versions can fail during bun run build.
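
You can confirm the installed Bun version before building:

bun --version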

git clone https://github.com/Gitlawb/openclaude.git
cd openclaude

bun install
bun run build
npm link

Option C: Run directly with Bun

git clone https://github.com/Gitlawb/openclaude.git
cd openclaude

bun install
bun run dev

Provider Examples

OpenAI

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o

Codex via ChatGPT auth

codexplan maps to GPT-5.5 on the Codex backend with high reasoning. codexspark maps to GPT-5.3 Codex Spark for faster loops.

If you use the in-app provider wizard, choose Codex OAuth to open ChatGPT sign-in in your browser and let OpenClaude store Codex credentials securely.

If you already use the Codex CLI, OpenClaude reads ~/.codex/auth.json automatically. You can also point it elsewhere with CODEX_AUTH_JSON_PATH or override the token directly with CODEX_API_KEY.

If you set CODEX_API_KEY manually and are not relying on auth.json or stored Codex OAuth credentials, also set CHATGPT_ACCOUNT_ID (or CODEX_ACCOUNT_ID).

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan

# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...
export CHATGPT_ACCOUNT_ID=...

openclaude
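
To switch the same setup to the faster Codex Spark loop, only the model alias changes:

export OPENAI_MODEL=codexspark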

DeepSeek

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-v4-flash

Use deepseek-v4-pro when you want the stronger model. deepseek-chat and deepseek-reasoner remain available as DeepSeek's legacy API aliases.
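
Switching to the stronger model is a one-line change:

export OPENAI_MODEL=deepseek-v4-pro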

Google Gemini

export CLAUDE_CODE_USE_GEMINI=1
export GEMINI_API_KEY=...
export GEMINI_MODEL=gemini-3-flash-preview

Gemini via OpenRouter

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.5-pro

OpenRouter model availability changes over time. If a model stops working, try another current OpenRouter model before assuming the integration is broken.
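
One quick way to see what is currently served is OpenRouter's model listing, which follows the standard OpenAI-compatible models endpoint (the jq filter here is just an illustration):

curl -s https://openrouter.ai/api/v1/models | jq -r '.data[].id' | grep -i gemini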

Ollama

ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
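
If the model cannot be reached, first confirm the Ollama daemon is running and the tag is actually pulled:

ollama list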

Atomic Chat (local, Apple Silicon)

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://127.0.0.1:1337/v1
export OPENAI_MODEL=your-model-name

No API key is needed for Atomic Chat local models.

Or use the profile launcher:

bun run dev:atomic-chat

Download Atomic Chat from atomic.chat. The app must be running with a model loaded before launching.
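
Before launching, you can confirm the server is reachable with the same GET /models probe the runtime doctor uses for non-Codex providers (this assumes Atomic Chat exposes the standard OpenAI-compatible listing):

curl -s http://127.0.0.1:1337/v1/models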

LM Studio

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name

Together AI

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo

Groq

export CLAUDE_CODE_USE_OPENAI=1
export GROQ_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile

GROQ_API_KEY matches the built-in Groq gateway preset. OPENAI_API_KEY also works as a fallback on the generic OpenAI-compatible path, but GROQ_API_KEY is the preferred variable for Groq-specific setup.

Xiaomi MiMo

export CLAUDE_CODE_USE_OPENAI=1
export MIMO_API_KEY=...
export OPENAI_BASE_URL=https://api.xiaomimimo.com/v1
export OPENAI_MODEL=mimo-v2.5-pro

The /provider Xiaomi MiMo preset uses the same endpoint and stores the key as MIMO_API_KEY. OPENAI_API_KEY also works as a compatibility fallback, but MIMO_API_KEY keeps the profile tied to the MiMo route.

Cloudflare Workers AI

export CLAUDE_CODE_USE_OPENAI=1
export CLOUDFLARE_API_TOKEN=...
export OPENAI_BASE_URL=https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/ai/v1
export OPENAI_MODEL=@cf/meta/llama-3.3-70b-instruct-fp8-fast

Replace <ACCOUNT_ID> with your Cloudflare account ID (visible in the Cloudflare dashboard URL). The /provider Cloudflare Workers AI preset stores the token under CLOUDFLARE_API_TOKEN. OPENAI_API_KEY also works as a compatibility fallback, but CLOUDFLARE_API_TOKEN keeps the profile tied to the Cloudflare preset.
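
If you happen to have the Wrangler CLI installed, its whoami command also prints the account ID; this is purely a convenience, and copying it from the dashboard URL works the same:

wrangler whoami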

Mistral

export CLAUDE_CODE_USE_MISTRAL=1
export MISTRAL_API_KEY=...
export MISTRAL_MODEL=devstral-latest

Azure OpenAI

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o

Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| CLAUDE_CODE_USE_OPENAI | OpenAI-compatible only | Set to 1 to enable the OpenAI-compatible provider path |
| OPENAI_API_KEY | OpenAI-compatible cloud routes* | Your API key (* not needed for local models such as Ollama, LM Studio, Atomic Chat, or other local OpenAI-compatible proxies) |
| OPENAI_MODEL | OpenAI-compatible only | Model name such as gpt-4o, deepseek-v4-flash, or llama3.3:70b |
| OPENAI_BASE_URL | No | API endpoint; defaults to https://api.openai.com/v1 |
| OPENAI_API_BASE | No | Compatibility alias for OPENAI_BASE_URL |
| MIMO_API_KEY | Xiaomi MiMo route | Xiaomi MiMo API key for https://api.xiaomimimo.com/v1; mirrored into the OpenAI-compatible auth env when the MiMo route is active |
| CLAUDE_CODE_USE_GEMINI | Gemini only | Set to 1 to enable the direct Gemini provider path |
| GEMINI_API_KEY / GOOGLE_API_KEY | Gemini API-key auth | Gemini API key for direct Gemini setup |
| GEMINI_MODEL | Gemini only | Model name such as gemini-3-flash-preview or gemini-2.5-pro |
| GEMINI_BASE_URL | No | Override the Gemini base URL |
| CLAUDE_CODE_USE_MISTRAL | Mistral only | Set to 1 to enable the dedicated Mistral provider path |
| MISTRAL_API_KEY | Mistral only | Mistral API key |
| MISTRAL_MODEL | Mistral only | Model name such as devstral-latest |
| MISTRAL_BASE_URL | No | Override the Mistral base URL |
| CODEX_API_KEY | Codex only | Codex or ChatGPT access token override |
| CHATGPT_ACCOUNT_ID / CODEX_ACCOUNT_ID | Codex only | Required for manual Codex env setup when the account ID does not come from auth.json or stored OAuth credentials |
| CODEX_AUTH_JSON_PATH | Codex only | Path to a Codex CLI auth.json file |
| CODEX_HOME | Codex only | Alternative Codex home directory |
| OPENCLAUDE_DISABLE_CO_AUTHORED_BY | No | Suppress the default Co-Authored-By trailer in generated git commits |
| OPENCLAUDE_LOG_TOKEN_USAGE | No | When truthy (e.g. verbose), emits one JSON line on stderr per API request with input/output/cache tokens and the resolved provider. This user-facing debug output complements the REPL display controlled by /config showCacheStats, and is distinct from CLAUDE_CODE_ENABLE_TOKEN_USAGE_ATTACHMENT, which is model-facing (it injects context usage info into the prompt itself). Both can run together. |

Model env vars are provider-scoped: Anthropic-native sessions read ANTHROPIC_MODEL, OpenAI-compatible sessions read OPENAI_MODEL, Gemini reads GEMINI_MODEL, and Mistral reads MISTRAL_MODEL.
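
Because each provider path reads its own variable, you can pin a model per path in the same shell without conflict. A sketch (the Anthropic value is a placeholder, not a documented default):

export ANTHROPIC_MODEL=your-anthropic-model    # Anthropic-native sessions only
export OPENAI_MODEL=gpt-4o                     # OpenAI-compatible sessions only
export GEMINI_MODEL=gemini-2.5-pro             # direct Gemini sessions only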

Runtime Hardening

Use these commands to validate your setup and catch mistakes early:

# quick startup sanity check
bun run smoke

# validate provider env + reachability
bun run doctor:runtime

# print machine-readable runtime diagnostics
bun run doctor:runtime:json

# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# full local hardening check (smoke + runtime doctor)
bun run hardening:check

# strict hardening (includes project-wide typecheck)
bun run hardening:strict

Notes:

  • doctor:runtime fails fast if CLAUDE_CODE_USE_OPENAI=1 with a placeholder key or a missing key for non-local providers.
  • doctor:runtime also validates the dedicated Gemini and Mistral env paths when CLAUDE_CODE_USE_GEMINI=1 or CLAUDE_CODE_USE_MISTRAL=1.
  • Local providers such as http://localhost:11434/v1, http://10.0.0.1:11434/v1, and http://127.0.0.1:1337/v1 can run without OPENAI_API_KEY.
  • Codex profiles validate CODEX_API_KEY or the Codex CLI auth file and probe POST /responses instead of GET /models.
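
Because doctor:runtime fails fast on bad config, it can gate your own launch scripts directly. A minimal sketch, assuming it signals failure through a non-zero exit code (the same pattern the dev:* launchers described below apply):

# gate a launch (or a CI step) on the runtime doctor's exit code
if bun run doctor:runtime; then
  bun run dev
else
  echo "doctor:runtime failed; fix provider env before launching" >&2
  exit 1
fi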

Provider Launch Profiles

Use profile launchers to avoid repeated environment setup:

# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init

# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark

# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency

# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex

# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...

# gemini bootstrap with explicit key
bun run profile:init -- --provider gemini --api-key ...

# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding

# atomic-chat bootstrap (auto-detects running model)
bun run profile:init -- --provider atomic-chat

# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark

# launch using persisted user-level provider profile
bun run dev:profile

# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex

# OpenAI profile (uses the saved OpenAI profile, or OPENAI_API_KEY from your shell)
bun run dev:openai

# Gemini profile (uses the saved Gemini profile, or GEMINI_API_KEY / GOOGLE_API_KEY from your shell)
bun run dev:gemini

# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama

# Atomic Chat profile (Apple Silicon local LLMs at 127.0.0.1:1337)
bun run dev:atomic-chat

profile:recommend ranks installed Ollama models against a latency, balanced, or coding goal, and profile:auto can persist the recommendation directly.

If no profile exists yet, dev:profile uses the same goal-aware defaults when picking the initial model.

Use --provider ollama when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.

Use --provider atomic-chat when you want Atomic Chat as the local Apple Silicon provider.

Use profile:codex or --provider codex when you want the ChatGPT Codex backend.

dev:openai, dev:gemini, dev:ollama, dev:atomic-chat, and dev:codex run doctor:runtime first and only launch the app if checks pass.

For dev:ollama, make sure Ollama is running locally before launch.

For dev:atomic-chat, make sure Atomic Chat is running with a model loaded before launch.
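
Putting it together, a typical local-only first run combines the commands above:

# pull a local model, persist an Ollama profile, then launch from it
ollama pull llama3.1:8b
bun run profile:init -- --provider ollama --model llama3.1:8b
bun run dev:profile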