Oleks-Y/cc-codex-oauth-proxy

Claude Code → Codex OAuth Proxy

A high-performance proxy server that lets Claude Code use the ChatGPT backend API via Codex OAuth credentials.

Experimental Notice

This project is an experiment and is provided as-is with no guarantees. Upstream APIs may change at any time and break compatibility.

Features

  • 🚀 FastAPI runtime - Async Python server with clear request routing
  • 🔐 OAuth token management - Automatic token refresh from ~/.codex/auth.json
  • 🔄 API translation - Anthropic Messages API → OpenAI Responses API (via LiteLLM adapter)
  • 📊 Structured logging - Structlog with JSON output
  • 📈 Distributed tracing - OpenTelemetry integration
  • 🛡️ Type-safe - Pydantic validation for all API calls
  • ⚙️ CLI-configurable - Ports, hosts, model mappings via CLI args

Quick Start

1. Install dependencies

uv sync

2. Ensure Codex authentication

Make sure you're authenticated with Codex CLI:

codex login

This creates ~/.codex/auth.json with OAuth tokens.

3. Start the proxy

# Basic usage
uv run cc-codex-oauth-proxy

# Custom configuration
uv run cc-codex-oauth-proxy \
  --port 8082 \
  --host 127.0.0.1 \
  --model-mapping claude-3-5-sonnet-20241022=gpt-5.2-codex

4. Configure Claude Code

export ANTHROPIC_BASE_URL=http://localhost:8082
export ANTHROPIC_API_KEY=dummy

5. Test it

claude "Say hello"

Run Instructions

Local Run (recommended)

# Start the proxy
LOG_LEVEL=info uv run cc-codex-oauth-proxy

In another terminal:

# Point Claude Code at the proxy
export ANTHROPIC_BASE_URL=http://localhost:8082
export ANTHROPIC_API_KEY=dummy

# Verify tool usage via a file read
claude -p "Read pyproject.toml"

Custom Run

uv run cc-codex-oauth-proxy \
  --host 127.0.0.1 \
  --port 8082 \
  --model-mapping claude-sonnet-4-5-20250929=gpt-5.2-codex

Configuration

CLI Arguments

--port <number>              Server port (default: 8082)
--host <string>              Bind address (default: 127.0.0.1)
--codex-config <path>        Path to auth.json (default: ~/.codex/auth.json)
--backend-url <url>          Backend API URL
--model-mapping <key=value>  Model mapping (repeatable)
--log-level <level>          Log level: trace|debug|info|warn|error
--otel-endpoint <url>        OpenTelemetry endpoint

Environment Variables

See .env.example for all available environment variables.

Architecture

Claude Code → Proxy (localhost:8082) → ChatGPT Backend
                ↓
         [Anthropic format → OpenAI Responses format]
         [Uses Codex OAuth token]
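
The translation step in the middle can be illustrated with a stripped-down request mapping. The Responses-side field names below are assumptions for illustration; the proxy delegates the real translation (tool use, system prompts, content blocks) to LiteLLM's adapter:

```python
def anthropic_to_responses(body: dict) -> dict:
    """Map the core fields of an Anthropic Messages request onto an
    OpenAI Responses-style payload. A minimal sketch, not the full
    translation."""
    return {
        "model": body["model"],
        "input": [
            {"role": m["role"], "content": m["content"]}
            for m in body["messages"]
        ],
        "max_output_tokens": body.get("max_tokens"),
        "stream": body.get("stream", False),
    }
```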

Observability

Wide Events Pattern

This proxy implements the wide events pattern for superior observability:

  • One comprehensive event per request - not scattered log lines
  • High cardinality, high dimensionality - 50+ attributes capturing complete context
  • Business + technical context - models, tokens, function calls, performance metrics
  • Queryable structure - enables sophisticated filtering and analytics

Example wide event attributes:

request_id, request_timestamp, anthropic_model, anthropic_message_count,
translation_source_model, translation_target_model, codex_model,
codex_tools (JSON), codex_function_calls (JSON), codex_api_duration_ms,
anthropic_response_id, anthropic_input_tokens, anthropic_output_tokens,
total_duration_ms, success, error_type
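
The accumulation side of the pattern can be sketched as a small helper that collects attributes over the life of a request and emits one JSON event at the end. Attribute names follow the list above; the proxy's internals may differ:

```python
import json
import time


class WideEvent:
    """Accumulate attributes across a request; emit a single event."""

    def __init__(self, request_id: str):
        self._start = time.monotonic()
        self.attrs = {"request_id": request_id, "success": False}

    def set(self, **kv) -> None:
        """Attach more context as the request progresses."""
        self.attrs.update(kv)

    def emit(self) -> dict:
        """Stamp the total duration and emit one wide event."""
        elapsed = (time.monotonic() - self._start) * 1000
        self.attrs["total_duration_ms"] = round(elapsed, 1)
        # In the proxy this would go through structlog's JSON renderer.
        print(json.dumps(self.attrs))
        return self.attrs
```

One event per request keeps every dimension (model, tokens, timings, errors) joinable in a single row, which is what makes the Jaeger queries below possible.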

Query examples in Jaeger:

  • "All requests with function calls": codex_function_calls != "[]"
  • "Slow requests (>1s)": total_duration_ms > 1000
  • "Model mapping failures": translation_model_mapped = false

Distributed Tracing (OpenTelemetry)

# Start Jaeger
docker run -d --name jaeger \
  -p 4318:4318 \
  -p 16686:16686 \
  jaegertracing/all-in-one:latest

# Run proxy with tracing
uv run cc-codex-oauth-proxy --otel-endpoint http://localhost:4318

# View traces
open http://localhost:16686

Trace Structure:

  • One span per request: handle_messages_request
  • All metadata accumulated as span attributes
  • Function calls and tools stored as JSON arrays
  • Complete end-to-end visibility

Model Mapping

Default mappings:

Claude Model                  GPT Model
----------------------------  ------------------
claude-sonnet-4-5-20250929    gpt-5.2-codex
claude-haiku-4-5-20250929     gpt-5.1-codex-mini
claude-haiku-4-5-20251001     gpt-5.1-codex-mini
claude-3-5-sonnet-20241022    gpt-5.2-codex
claude-3-5-haiku-20241022     gpt-5.1-codex-mini
claude-3-opus-20240229        gpt-5.2

Override via CLI:

uv run cc-codex-oauth-proxy \
  --model-mapping claude-sonnet-4-5-20250929=gpt-5.2-codex \
  --model-mapping claude-haiku-4-5-20250929=gpt-5.1-codex-mini

Security

  • Binds to 127.0.0.1 by default (localhost only)
  • Tokens redacted in logs (only last 8 chars shown)
  • No data persisted or logged to disk
  • Uses existing Codex OAuth credentials
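
The redaction rule above ("only last 8 chars shown") can be sketched as a small helper (a sketch of the idea, not the proxy's actual function):

```python
def redact_token(token: str, visible: int = 8) -> str:
    """Mask a secret, keeping only the trailing `visible` characters.
    Tokens shorter than the visible window are fully masked."""
    if len(token) <= visible:
        return "*" * len(token)
    return "..." + token[-visible:]
```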

Troubleshooting

"Failed to read Codex auth"

Run codex login to authenticate.

Token expired

Proxy auto-refreshes tokens. If refresh fails, re-run codex login.

Streaming doesn't work

Check that nothing between Claude Code and the proxy (reverse proxies, buffering middleware) buffers SSE responses.
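
When diagnosing buffering, it can help to parse the raw SSE payload yourself and confirm events are well-formed and arriving incrementally. A minimal parser sketch (a debugging aid, not the proxy's own parser):

```python
def parse_sse(raw: str) -> list[dict[str, str]]:
    """Split a raw Server-Sent Events payload into {event, data} dicts.
    Blank lines terminate an event; multiple data lines concatenate."""
    events: list[dict[str, str]] = []
    current: dict[str, str] = {}
    for line in raw.splitlines():
        if not line:  # blank line ends the current event
            if current:
                events.append(current)
                current = {}
            continue
        field, _, value = line.partition(":")
        value = value.lstrip(" ")
        if field == "data":
            current["data"] = current.get("data", "") + value
        elif field == "event":
            current["event"] = value
    if current:
        events.append(current)
    return events
```

If all events land in one burst at the end of the request rather than one at a time, something on the path is buffering.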

Development

# Basic run while developing
uv run cc-codex-oauth-proxy

License

MIT
