
chore: avoid local AiToEarn port conflicts #508

Closed
Dimon94 wants to merge 1 commit into yikart:main from Dimon94:fix/local-codex-nonconflicting-ports

Conversation

@Dimon94 Dimon94 commented May 11, 2026

Summary

  • Adds an isolated local Codex deployment entrypoint and OpenAI-compatible proxy.
  • Parameterizes Docker Compose host ports while preserving existing upstream defaults.
  • Sets non-conflicting local defaults for the Codex startup script: Web 18080, RustFS proxy 19000, RustFS console 19001, MongoDB 27018, Redis 6380.
  • Documents the local port map and updates local API/MCP/SSE examples to http://localhost:18080.

Verification

  • bash -n scripts/start-local-codex.sh
  • node --check scripts/codex-openai-proxy.mjs
  • /Applications/Docker.app/Contents/Resources/bin/docker compose -f docker-compose.yml -f docker-compose.codex.yml config --quiet
  • git diff --check HEAD~1..HEAD
  • Effective compose config confirmed published host ports 18080, 19000, 19001, 27018, 6380, and updated ASSETS_CONFIG / RELAY_CALLBACK_URL values.
  • Codex proxy smoke test: curl -fsS http://127.0.0.1:52032/health returned ok=true, model=gpt-5.5, reasoningEffort=xhigh.

Runtime note

Full container startup could not be completed on the current machine because the installed Docker Desktop is the Intel build running on Apple Silicon (Docker Desktop reports "This is the Intel version of Docker Desktop"); the docs already call out the need for the arm64 build.

@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request introduces a local development environment for AiToEarn that integrates with a local Codex API. Key additions include a Node.js proxy script to manage OpenAI-compatible requests, a bash script for automated startup, and comprehensive documentation. The Docker Compose configuration has also been updated to support dynamic port mapping and the new proxy. Review feedback identifies several technical improvements for the proxy script, such as fixing potential URL duplication, correctly handling response headers to prevent truncated responses, and ensuring the proxy is accessible from within Docker containers by listening on all interfaces.

return
}

const targetUrl = `${codexBaseUrl}${url.pathname}${url.search}`

high

The current string concatenation will cause the /v1 prefix to be duplicated if codexBaseUrl already includes it (e.g., http://127.0.0.1:52031/v1 + /v1/chat/completions becomes .../v1/v1/chat/completions). Using the URL constructor handles this correctly by resolving the absolute path against the base URL's origin.

Suggested change
const targetUrl = `${codexBaseUrl}${url.pathname}${url.search}`
const targetUrl = new URL(url.pathname + url.search, codexBaseUrl).toString()
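As a quick sanity check of the suggested fix, the two approaches can be compared directly (Node 18+; the base URL and path below are the example values from the comment, not actual runtime values):

```javascript
// Naive concatenation duplicates the /v1 prefix when the base URL
// already ends with it:
const codexBaseUrl = 'http://127.0.0.1:52031/v1'
const pathname = '/v1/chat/completions'
const search = ''

const concatenated = `${codexBaseUrl}${pathname}${search}`
console.log(concatenated) // http://127.0.0.1:52031/v1/v1/chat/completions

// The URL constructor resolves an absolute path against the base URL's
// origin, replacing the base's own path and avoiding the duplication:
const resolved = new URL(pathname + search, codexBaseUrl).toString()
console.log(resolved) // http://127.0.0.1:52031/v1/chat/completions
```

Note that because an absolute path replaces the base URL's path entirely, this relies on the incoming request path already carrying the /v1 prefix.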

}

const upstream = await fetch(targetUrl, { method: req.method, headers, body })
res.writeHead(upstream.status, Object.fromEntries(upstream.headers.entries()))

high

Forwarding all upstream headers blindly can cause issues. Specifically, if the proxy modified the request body or if fetch transparently decompressed the response, the content-length header from the upstream will be incorrect for the response sent to the client. This can lead to truncated responses or hanging connections. It's safer to exclude hop-by-hop headers and those related to body encoding/length.

    const responseHeaders = {}
    for (const [name, value] of upstream.headers.entries()) {
      if (!['content-length', 'content-encoding', 'transfer-encoding', 'connection'].includes(name.toLowerCase())) {
        responseHeaders[name] = value
      }
    }
    res.writeHead(upstream.status, responseHeaders)
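A standalone sketch of that filtering, with illustrative header values rather than real upstream output: framing headers must not be copied verbatim because fetch may have already decompressed the body, and the proxy re-frames the response itself.

```javascript
// Hop-by-hop and body-framing headers to drop before re-emitting:
const EXCLUDED = new Set(['content-length', 'content-encoding', 'transfer-encoding', 'connection'])

// Illustrative upstream headers; a real response would come from fetch().
const upstreamHeaders = new Map([
  ['content-type', 'application/json'],
  ['content-length', '128'],
  ['content-encoding', 'gzip'],
  ['connection', 'keep-alive'],
])

const responseHeaders = {}
for (const [name, value] of upstreamHeaders.entries()) {
  if (!EXCLUDED.has(name.toLowerCase())) responseHeaders[name] = value
}
console.log(responseHeaders) // { 'content-type': 'application/json' }
```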

catch (error) {
responseJson(res, 500, { error: error instanceof Error ? error.message : String(error) })
}
}).listen(port, '127.0.0.1', () => {

high

Listening on 127.0.0.1 makes the proxy inaccessible from inside Docker containers, even when using host.docker.internal. On most systems (especially macOS/Windows with Docker Desktop), host.docker.internal resolves to an IP that can only reach services listening on all interfaces (0.0.0.0) or the specific bridge interface.

Suggested change
}).listen(port, '127.0.0.1', () => {
}).listen(port, '0.0.0.0', () => {
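A minimal illustration of the difference (port 0 asks the OS for any free port here; the actual script listens on a fixed port):

```javascript
import { createServer } from 'node:http'

// Binding to 0.0.0.0 accepts connections on every interface, which is
// what lets a container reach the host through host.docker.internal.
// Binding to 127.0.0.1 would accept loopback connections only.
const server = createServer((req, res) => res.end('ok'))

let boundAddress = null
server.listen(0, '0.0.0.0', () => {
  boundAddress = server.address().address
  console.log(`listening on ${boundAddress}:${server.address().port}`)
  server.close()
})
```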

function readCodexBaseUrl() {
const configPath = join(homedir(), '.codex', 'config.toml')
const config = readFileSync(configPath, 'utf8')
const section = config.match(new RegExp(`\\[model_providers\\.${providerName}\\]([\\s\\S]*?)(?:\\n\\[|$)`))

medium

Parsing TOML with a regular expression is fragile and can break if the configuration file format changes slightly (e.g., adding spaces around brackets, comments on the same line, or using different quoting styles). While acceptable for a simple local script, consider using a proper TOML parser if this configuration becomes more complex.
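Short of pulling in a real TOML parser, a line-by-line lookup is somewhat less fragile than one multi-line regex over the whole file, since it tolerates stray whitespace around section brackets and inline comments. A hedged sketch, assuming the script only needs a `base_url` string from a `[model_providers.<name>]` section (the sample config below is illustrative, not taken from the PR):

```javascript
// Line-based lookup: track whether we are inside the target section,
// then match the base_url key on its own line.
function readBaseUrl(configText, providerName) {
  let inSection = false
  for (const rawLine of configText.split('\n')) {
    const line = rawLine.trim()
    if (line.startsWith('[')) {
      // Tolerate spaces inside the brackets; any new section header
      // ends the previous section.
      inSection = line.replace(/\s+/g, '') === `[model_providers.${providerName}]`
      continue
    }
    if (inSection) {
      const m = line.match(/^base_url\s*=\s*"([^"]*)"/)
      if (m) return m[1]
    }
  }
  return null
}

const sample = `
[model_providers.codex]
base_url = "http://127.0.0.1:52031/v1"  # local proxy
`
console.log(readBaseUrl(sample, 'codex')) // http://127.0.0.1:52031/v1
```

This still breaks on multi-line strings or exotic quoting, so a proper TOML library remains the robust choice if the config grows.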

@Dimon94 Dimon94 closed this May 11, 2026
@Dimon94 Dimon94 deleted the fix/local-codex-nonconflicting-ports branch May 11, 2026 08:33