chore: avoid local AiToEarn port conflicts #508
Conversation
Code Review
This pull request introduces a local development environment for AiToEarn that integrates with a local Codex API. Key additions include a Node.js proxy script to manage OpenAI-compatible requests, a bash script for automated startup, and comprehensive documentation. The Docker Compose configuration has also been updated to support dynamic port mapping and the new proxy. Review feedback identifies several technical improvements for the proxy script, such as fixing potential URL duplication, correctly handling response headers to prevent truncated responses, and ensuring the proxy is accessible from within Docker containers by listening on all interfaces.
```js
  return
}

const targetUrl = `${codexBaseUrl}${url.pathname}${url.search}`
```
The current string concatenation will cause the /v1 prefix to be duplicated if codexBaseUrl already includes it (e.g., http://127.0.0.1:52031/v1 + /v1/chat/completions becomes .../v1/v1/chat/completions). Using the URL constructor handles this correctly by resolving the absolute path against the base URL's origin.
Suggested change:

```diff
- const targetUrl = `${codexBaseUrl}${url.pathname}${url.search}`
+ const targetUrl = new URL(url.pathname + url.search, codexBaseUrl).toString()
```
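To make the difference concrete, a minimal sketch — the base URL and request path below are illustrative values matching the review's example:

```javascript
// Illustrative values: codexBaseUrl already ends in /v1, and the incoming
// request path also starts with /v1, as in the review example.
const codexBaseUrl = 'http://127.0.0.1:52031/v1'
const pathname = '/v1/chat/completions'
const search = '?stream=true'

// Naive concatenation duplicates the prefix:
const naive = `${codexBaseUrl}${pathname}${search}`
// 'http://127.0.0.1:52031/v1/v1/chat/completions?stream=true'

// The URL constructor resolves the absolute path against the base's origin,
// so the request path's own /v1 is the only one that survives:
const resolved = new URL(pathname + search, codexBaseUrl).toString()
// 'http://127.0.0.1:52031/v1/chat/completions?stream=true'
```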
```js
}

const upstream = await fetch(targetUrl, { method: req.method, headers, body })
res.writeHead(upstream.status, Object.fromEntries(upstream.headers.entries()))
```
Forwarding all upstream headers blindly can cause issues. Specifically, if the proxy modified the request body or if fetch transparently decompressed the response, the content-length header from the upstream will be incorrect for the response sent to the client. This can lead to truncated responses or hanging connections. It's safer to exclude hop-by-hop headers and those related to body encoding/length.
Suggested change:

```js
const responseHeaders = {}
for (const [name, value] of upstream.headers.entries()) {
  if (!['content-length', 'content-encoding', 'transfer-encoding', 'connection'].includes(name.toLowerCase())) {
    responseHeaders[name] = value
  }
}
res.writeHead(upstream.status, responseHeaders)
```

Surrounding context:

```js
} catch (error) {
  responseJson(res, 500, { error: error instanceof Error ? error.message : String(error) })
}
}).listen(port, '127.0.0.1', () => {
```
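The filtering loop from the suggestion can be exercised in isolation. A sketch — the `filterUpstreamHeaders` helper and the sample header values are hypothetical, not part of the PR:

```javascript
// Hypothetical helper wrapping the suggested filtering loop so it can be
// tested against a plain Headers object (available in Node 18+).
const HOP_BY_HOP = ['content-length', 'content-encoding', 'transfer-encoding', 'connection']

function filterUpstreamHeaders(upstreamHeaders) {
  const responseHeaders = {}
  for (const [name, value] of upstreamHeaders.entries()) {
    if (!HOP_BY_HOP.includes(name.toLowerCase())) {
      responseHeaders[name] = value
    }
  }
  return responseHeaders
}

const filtered = filterUpstreamHeaders(new Headers({
  'Content-Type': 'application/json',
  'Content-Length': '123',      // stale once fetch has transparently decompressed the body
  'Content-Encoding': 'gzip',   // no longer true for the re-sent body
  'X-Request-Id': 'abc',        // harmless metadata, passed through
}))
// filtered keeps content-type and x-request-id only
```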
Listening on 127.0.0.1 makes the proxy inaccessible from inside Docker containers, even when using host.docker.internal. On most systems (especially macOS/Windows with Docker Desktop), host.docker.internal resolves to an IP that can only reach services listening on all interfaces (0.0.0.0) or the specific bridge interface.
Suggested change:

```diff
- }).listen(port, '127.0.0.1', () => {
+ }).listen(port, '0.0.0.0', () => {
```
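On the container side, reaching a host-bound proxy on Linux also needs the `host-gateway` alias, since `host.docker.internal` is not defined there by default. A hedged sketch of the compose wiring — the service name `app` and the `OPENAI_BASE_URL` variable are assumptions; only the proxy port 52032 comes from this PR's health check:

```yaml
# Sketch, not the PR's actual docker-compose.codex.yml: the "app" service
# name and the OPENAI_BASE_URL variable are assumed for illustration.
services:
  app:
    extra_hosts:
      # On Linux, host.docker.internal is not defined by default;
      # host-gateway maps it to the host's bridge address.
      - "host.docker.internal:host-gateway"
    environment:
      # Reaches the proxy only if it listens on 0.0.0.0, per the review comment.
      - OPENAI_BASE_URL=http://host.docker.internal:52032/v1
```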
```js
function readCodexBaseUrl() {
  const configPath = join(homedir(), '.codex', 'config.toml')
  const config = readFileSync(configPath, 'utf8')
  const section = config.match(new RegExp(`\\[model_providers\\.${providerName}\\]([\\s\\S]*?)(?:\\n\\[|$)`))
```
Parsing TOML with a regular expression is fragile and can break if the configuration file format changes slightly (e.g., adding spaces around brackets, comments on the same line, or using different quoting styles). While acceptable for a simple local script, consider using a proper TOML parser if this configuration becomes more complex.
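To make the fragility concrete, a sketch exercising the same regex against a minimal config — the file contents and provider name are illustrative, only the regex itself is from the script:

```javascript
// Illustrative config.toml contents; the regex is the one from the script.
const providerName = 'codex'
const config = [
  '[model_providers.codex]',
  'base_url = "http://127.0.0.1:52031/v1"',
  '',
  '[model_providers.other]',
  'base_url = "http://example.invalid"',
].join('\n')

const section = config.match(
  new RegExp(`\\[model_providers\\.${providerName}\\]([\\s\\S]*?)(?:\\n\\[|$)`)
)
const baseUrl = section[1].match(/base_url\s*=\s*"([^"]+)"/)[1]
// 'http://127.0.0.1:52031/v1'

// But equally valid TOML such as '[ model_providers.codex ]' (whitespace
// inside the brackets) would not match, which is the fragility noted above.
const spaced = '[ model_providers.codex ]\nbase_url = "http://127.0.0.1:52031/v1"'
const miss = spaced.match(new RegExp(`\\[model_providers\\.${providerName}\\]`))
// miss === null
```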
Summary
- Remapped host ports: 18080, RustFS proxy 19000, RustFS console 19001, MongoDB 27018, Redis 6380.
- Available at http://localhost:18080.

Verification
- `bash -n scripts/start-local-codex.sh`
- `node --check scripts/codex-openai-proxy.mjs`
- `/Applications/Docker.app/Contents/Resources/bin/docker compose -f docker-compose.yml -f docker-compose.codex.yml config --quiet`
- `git diff --check HEAD~1..HEAD`
- Confirmed remapped ports 18080, 19000, 19001, 27018, 6380, and updated `ASSETS_CONFIG`/`RELAY_CALLBACK_URL` values.
- `curl -fsS http://127.0.0.1:52032/health` returned `ok=true`, `model=gpt-5.5`, `reasoningEffort=xhigh`.

Runtime note
Full container startup could not be completed on this machine because the installed Docker Desktop is the Intel build running on Apple Silicon. Docker Desktop reports that it is the Intel version; the docs already call out the need for the arm64 build.