---
title: "OpenAI Codex"
sidebarTitle: "Codex"
description: "Use OpenAI Codex CLI and SDK with Helicone AI Gateway to log your coding agent interactions."
"twitter:title": "OpenAI Codex Integration - Helicone OSS LLM Observability"
iconType: "solid"
---

import { strings } from "/snippets/strings.mdx";

<Info>
This integration uses the [AI Gateway](/gateway/overview), which provides a unified API for multiple LLM providers. The AI Gateway is currently in beta.
</Info>

## CLI Integration

<Steps>
  <Step title={strings.generateKey}>
    <div dangerouslySetInnerHTML={{ __html: strings.generateKeyInstructions }} />
  </Step>

  <Step title="Configure Codex config file">
    Update your `$CODEX_HOME/config.toml` file to include the Helicone provider configuration:

    <Note>
    `$CODEX_HOME` is typically `~/.codex` on macOS or Linux.
    </Note>

    ```toml config.toml
    model_provider = "helicone"

    [model_providers.helicone]
    name = "Helicone"
    base_url = "https://ai-gateway.helicone.ai/v1"
    env_key = "HELICONE_API_KEY"
    wire_api = "chat"
    ```
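
    Optionally, you can also pin a default model for every Codex session in the same file via the top-level `model` key. The model name below is only an example; substitute one that is available through your gateway:

    ```toml config.toml
    # Example only: pick any model available through your gateway
    model = "gpt-5"
    model_provider = "helicone"
    ```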
  </Step>

  <Step title="Set your Helicone API key">
    Set the `HELICONE_API_KEY` environment variable:

    ```bash
    export HELICONE_API_KEY=<your-helicone-api-key>
    ```
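
    To confirm the variable is actually exported (Codex reads it from the environment at startup), you can check it with `printenv`. The value below is a placeholder used only for illustration:

    ```shell
    # Placeholder value for illustration only; use your real Helicone API key
    export HELICONE_API_KEY="sk-helicone-example"

    # printenv only shows exported variables, so this confirms Codex will see it
    printenv HELICONE_API_KEY
    ```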
  </Step>

  <Step title="Run Codex with Helicone">
    Use Codex as normal. Your requests will automatically be logged to Helicone:

    ```bash
    # If you set model_provider in config.toml
    codex "What files are in the current directory?"

    # Or specify the provider explicitly
    codex -c model_provider="helicone" "What files are in the current directory?"
    ```
  </Step>

  <Step title={strings.verifyInHelicone}>
    <div dangerouslySetInnerHTML={{ __html: strings.verifyInHeliconeDesciption("Codex CLI") }} />
  </Step>
</Steps>

## SDK Integration

<Steps>
  <Step title={strings.generateKey}>
    <div dangerouslySetInnerHTML={{ __html: strings.generateKeyInstructions }} />
  </Step>

  <Step title="Install the Codex SDK">
    ```bash
    npm install @openai/codex-sdk
    ```
  </Step>

  <Step title="Configure the SDK with Helicone">
    Initialize the Codex SDK with the AI Gateway base URL:

    ```typescript
    import { Codex } from "@openai/codex-sdk";

    const codex = new Codex({
      baseUrl: "https://ai-gateway.helicone.ai/v1",
      apiKey: process.env.HELICONE_API_KEY,
    });

    const thread = codex.startThread();
    const turn = await thread.run("What files are in the current directory?");

    console.log(turn.finalResponse);
    console.log(turn.items);
    ```

    <Note>
    The Codex SDK doesn't currently support specifying the wire API, so it defaults to the Responses API. The AI Gateway supports this format for a limited set of models and providers. See the [Responses API documentation](/gateway/concepts/responses-api) for more details.
    </Note>
  </Step>

  <Step title={strings.verifyInHelicone}>
    <div dangerouslySetInnerHTML={{ __html: strings.verifyInHeliconeDesciption("Codex SDK") }} />
  </Step>
</Steps>

## Additional Features

Once integrated with Helicone AI Gateway, you can take advantage of:

- **Unified Observability**: Monitor all your Codex usage alongside other LLM providers
- **Cost Tracking**: Track costs across different models and providers
- **Custom Properties**: Add metadata to your requests for better organization
- **Rate Limiting**: Control usage and prevent abuse

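As a concrete sketch of the custom-properties bullet above: Helicone attaches any `Helicone-Property-<Name>` request header to the logged request as metadata. The property names and values below are illustrative, not required:

```typescript
// Sketch: tagging gateway requests with Helicone custom properties.
// Any "Helicone-Property-<Name>" header becomes metadata on the logged
// request; the names and values here are examples.
const heliconeHeaders: Record<string, string> = {
  Authorization: `Bearer ${process.env.HELICONE_API_KEY ?? "<your-helicone-api-key>"}`,
  "Helicone-Property-Environment": "development",
  "Helicone-Property-Session": "codex-demo",
};

console.log(heliconeHeaders["Helicone-Property-Environment"]); // "development"
```

If your client allows attaching extra request headers, include these on calls to the gateway; check the Codex documentation for whether your version of the CLI or SDK exposes a way to set custom headers.
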
## {strings.relatedGuides}

<CardGroup cols={2}>
  <Card
    title="AI Gateway Overview"
    icon="book-open"
    href="/gateway/overview"
    iconType="light"
    vertical
  >
    Learn more about Helicone's AI Gateway and its features
  </Card>
  <Card
    title="Responses API Support"
    icon="code"
    href="/gateway/concepts/responses-api"
    iconType="light"
    vertical
  >
    Use the OpenAI Responses API format through Helicone AI Gateway
  </Card>
  <Card
    title="Provider Routing"
    icon="route"
    href="/gateway/provider-routing"
    iconType="light"
    vertical
  >
    Configure automatic routing and fallbacks for reliability
  </Card>
  <Card
    title="Custom Properties"
    icon="tag"
    href="/features/advanced-usage/custom-properties"
    iconType="light"
    vertical
  >
    Add metadata to your requests for better tracking and organization
  </Card>
</CardGroup>