This repository contains an automated setup script for installing and configuring Codex CLI to work with AMD's LLM Gateway.
Codex CLI is OpenAI's command-line tool for AI-assisted coding. This setup script configures it to use AMD's LLM Gateway, supporting multiple model profiles including o3, GPT-5, Claude Sonnet 4, and Gemini 2.5 Pro.
- 🚀 Automated Installation: Interactive setup with confirmation prompts
- 🔧 Multi-Profile Configuration: Support for o3, GPT-5, Claude, and Gemini models
- 📁 Portable Installation: Installs locally in current directory
- 🛠 Shell Integration: Automatic shell alias setup for bash, csh, and tcsh
- ✅ Dependency Checking: Validates Node.js and npm availability
- Node.js 18+: Required for Codex CLI
- Install from https://nodejs.org/en/download
- npm: Node package manager (usually included with Node.js)
- AMD LLM API Key: Valid API key for AMD's LLM Gateway
- Linux/Unix Environment: Script designed for bash shell environments
1. Clone or download this repository.

2. Run the setup script:

   ```shell
   ./setup-codex-cli.sh
   ```

3. Set your API key in your shell config file (automatically added):

   - For bash: edit `~/.bashrc`
   - For csh: edit `~/.cshrc`
   - For tcsh: edit `~/.tcshrc`

   Replace `your-api-key-here` with your actual API key.

4. Start using Codex:

   ```shell
   codex
   ```
The setup script provides:
- Interactive Mode (default): Prompts for confirmation before installation
- Auto-confirm Mode: Use the `--yes` or `-y` flag to skip prompts
Both modes install Codex CLI in the current directory and create shell aliases for easy access.
The setup creates multiple model profiles in `~/.codex/config.toml`:
- o3 (default): Uses AMD Gateway with responses API
- gpt5: GPT-5 via AMD Gateway with responses API
- claude: Claude Sonnet 4 via Babel local gateway
- gemini: Gemini 2.5 Pro via Babel local gateway
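For reference, the generated file might look roughly like the sketch below. This is an assumption based on Codex CLI's TOML profile format; field names, model identifiers, and URLs should be checked against the file the script actually writes:

```toml
# Hypothetical sketch of ~/.codex/config.toml; verify against the generated file.
profile = "o3"  # default profile

[model_providers.amd]
name = "AMD LLM Gateway"
base_url = "https://llm-api.amd.com/openai/o3"
env_key = "AMD_LLM_API_KEY"
wire_api = "responses"

[profiles.o3]
model = "o3"
model_provider = "amd"

[profiles.gpt5]
model = "gpt-5"
model_provider = "amd"
```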
Switch profiles using:

```shell
codex --profile gpt5
codex --profile claude
codex --profile gemini
```

Set your AMD LLM API key:

For bash users (add to `~/.bashrc`):

```shell
export AMD_LLM_API_KEY='your-api-key-here'
```

For csh/tcsh users (add to `~/.cshrc` or `~/.tcshrc`):

```shell
setenv AMD_LLM_API_KEY 'your-api-key-here'
```

Gateway endpoints:

- AMD Gateway: `https://llm-api.amd.com/openai/{model}`
- Babel Local Gateway: `http://localhost:5000/v1` (optional, for Claude/Gemini)
```shell
# Start an interactive Codex session with the default (o3) model
codex

# Use a specific profile
codex --profile gpt5
codex --profile claude

# Get help
codex --help

# Check version
codex --version
```

Project structure:

```
codex/
├── setup-codex-cli.sh   # Main setup script
├── .npmrc               # npm configuration (created during setup)
├── package.json         # NPM dependencies (created during setup)
└── README.md            # This file
```
- Directory Confirmation: Verifies installation location
- Environment Setup: Loads module system (if available)
- Node.js Validation: Checks for Node.js 18+ and npm
- npm Configuration: Sets up local npm directories
- Package Installation: Installs `@openai/codex` via npm
- Configuration: Creates `~/.codex/config.toml` with multi-profile setup
- Shell Integration: Adds aliases and API key placeholders to shell configs
If you get Node.js version errors, install Node.js 18+ from nodejs.org or use a version manager like nvm.
Verify your `AMD_LLM_API_KEY` is set correctly:

```shell
echo $AMD_LLM_API_KEY
```

To use the Claude or Gemini profiles, ensure the Babel gateway is running on localhost:5000.
The script attempts to use the Pandora module system if available. If not found, it falls back to checking system PATH for Node.js.
- o3 (default):
  - Uses the AMD Gateway responses API
  - Full tool/function calling support
  - No temperature parameter support (uses the default of 1.0)
- gpt5:
  - Uses `max_tokens` instead of `max_completion_tokens`
  - Full tool/function calling support
- claude:
  - Requires the Babel local gateway
  - Uses the chat API format
- gemini:
  - Requires the Babel local gateway
  - Uses the chat API format