1 change: 1 addition & 0 deletions .gitignore
@@ -12,6 +12,7 @@ dist-ssr
dist-electron
release
*.local
.local/
.vscode
# Editor directories and files
.vscode/.debug.env
215 changes: 215 additions & 0 deletions LOCAL_CODEX_DEPLOYMENT.md
@@ -0,0 +1,215 @@
# AiToEarn Local Codex Integration and Usage Guide

This document is for whoever takes over this directory. The goal is to run AiToEarn on the local machine and route AiToEarn's AI chat capability through the Codex API service configured for the current user, with the model pinned to `gpt-5.5` and the reasoning effort pinned to `xhigh`.

## When to use this

- You want the full AiToEarn web console running locally.
- You want AiToEarn to call the local Codex OpenAI-compatible API instead of storing a real OpenAI key in the repository.
- You want to expose the local AiToEarn instance to browsers, MCP clients, or other internal services.

## Component overview

```text
Browser / MCP client / local API caller
|
| http://localhost:18080
v
Nginx in Docker Compose
|
| /api/ai/* -> aitoearn-ai
| /api/* -> aitoearn-server
v
AiToEarn services
|
| OPENAI_BASE_URL=http://host.docker.internal:52032/v1
v
scripts/codex-openai-proxy.mjs
|
| reads ~/.codex/config.toml and ~/.codex/auth.json
v
Local Codex API service
|
| model=gpt-5.5, reasoning_effort=xhigh
v
OpenAI-compatible response
```

The repository never stores a real Codex API key. On startup, the proxy reads the key from the current user's `~/.codex/auth.json` and reads the `base_url` of `codex_local_access` from `~/.codex/config.toml`.
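The proxy expects both files to already exist. A hypothetical shape of the relevant `~/.codex/config.toml` section is shown below; the table and key names come from this document, but the `base_url` value is purely illustrative:

```toml
[model_providers.codex_local_access]
base_url = "http://127.0.0.1:52031/v1"
```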

## Prerequisites

- Node.js is available. Node `v22.22.0` has been verified locally to run the proxy script.
- The current user has Codex configured in API-key mode, and `~/.codex/config.toml` contains `model_providers.codex_local_access`.
- A container runtime is available. Docker Compose is recommended, since the upstream project already ships a complete compose setup.
- Apple Silicon machines must install the arm64 build of Docker Desktop; the Intel build fails to start.

## Default local ports

`scripts/start-local-codex.sh` exports a set of non-conflicting host ports. Container-internal ports keep the upstream project's defaults; only the host mappings change.

| Service | Container port | Default host port | Override variable |
| --- | --- | --- | --- |
| Web / Nginx | `80` | `18080` | `AITOEARN_HTTP_PORT` |
| RustFS S3 proxy | `9000` | `19000` | `AITOEARN_RUSTFS_PORT` |
| RustFS console | `9001` | `19001` | `AITOEARN_RUSTFS_CONSOLE_PORT` |
| MongoDB | `27017` | `27018` | `AITOEARN_MONGODB_PORT` |
| Redis | `6379` | `6380` | `AITOEARN_REDIS_PORT` |
| Codex OpenAI proxy | `52032` | `52032` | `CODEX_OPENAI_PROXY_PORT` |

To override a port, set the corresponding environment variable on the start command, for example:

```bash
AITOEARN_HTTP_PORT=28080 ./scripts/start-local-codex.sh
```

## One-command start

From the repository root, run:

```bash
./scripts/start-local-codex.sh
```

The script does the following:

- Starts Docker Desktop, or prefers Colima if Colima is installed.
- Starts the local Codex OpenAI-compatible proxy at `http://127.0.0.1:52032/v1`.
- Starts AiToEarn using `docker-compose.yml` plus `docker-compose.codex.yml`.
- Points `aitoearn-ai` and `aitoearn-server` inside the containers at `OPENAI_BASE_URL=http://host.docker.internal:52032/v1`.
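The compose invocation the script performs can be sketched as follows. This snippet only prints the command instead of running it, since actually starting the stack requires a working Docker daemon:

```shell
# The Codex override file layers the proxy settings on top of the base compose file.
cmd="docker compose -f docker-compose.yml -f docker-compose.codex.yml up -d"
echo "$cmd"
```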

Once startup completes, open:

```text
http://localhost:18080
```

## How others use it

Regular users open the web UI directly:

```text
http://localhost:18080
```

AI assistants or MCP clients connect to the local AiToEarn instance at:

```text
http://localhost:18080/api/unified/mcp
```

SSE endpoint:

```text
http://localhost:18080/api/unified/sse
```

Authentication reuses AiToEarn's API-key mechanism. After startup, open Settings in the web UI to create an API key, then have clients include it in every request:

```text
x-api-key: <AiToEarn API Key>
```

AI chat model list endpoint:

```bash
curl http://localhost:18080/api/ai/models/chat
```

This endpoint is public and can be used to confirm that `gpt-5.5` appears among AiToEarn's selectable models. Actually calling `/api/ai/chat` or `/api/ai/chat/stream` requires a login session or a user token; for everyday use, start chats from the web UI.
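A quick scripted check is to grep the JSON response for `gpt-5.5`. The snippet below uses a hard-coded blob in place of the live `curl` output, and the field layout is an assumption for illustration, not the verified response schema:

```shell
# Sample stand-in for: curl http://localhost:18080/api/ai/models/chat
resp='{"chat":[{"name":"gpt-5.5","description":"GPT-5.5 Codex xhigh"}]}'
if printf '%s' "$resp" | grep -q '"name":"gpt-5.5"'; then
  echo "gpt-5.5 is listed"
fi
```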

## Verifying the Codex proxy on its own

To verify only the Codex integration, without starting Docker:

```bash
node scripts/codex-openai-proxy.mjs
```

In another terminal, run:

```bash
curl -s http://127.0.0.1:52032/health
```

You should see the proxy configuration, for example:

```json
{"ok":true,"upstream":"http://127.0.0.1:52031/v1","model":"gpt-5.5","reasoningEffort":"xhigh"}
```
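A scripted health check can branch on the `ok` field. The blob below is the example response from above; in practice you would substitute `$(curl -s http://127.0.0.1:52032/health)`:

```shell
health='{"ok":true,"upstream":"http://127.0.0.1:52031/v1","model":"gpt-5.5","reasoningEffort":"xhigh"}'
case "$health" in
  *'"ok":true'*) echo "proxy healthy" ;;
  *)             echo "proxy unhealthy" ;;
esac
```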

Chat smoke test:

```bash
curl -s http://127.0.0.1:52032/v1/chat/completions \
-H 'content-type: application/json' \
-H 'authorization: Bearer dummy' \
-d '{"model":"anything","messages":[{"role":"user","content":"Reply with exactly OK"}],"max_tokens":20}'
```

The `model` sent here is forcibly rewritten to `gpt-5.5` by the proxy, which also adds `reasoning_effort: xhigh`.
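The rewrite the proxy applies can be sketched like this. It is an illustration of the behavior described above, not the proxy's actual code (which lives in `scripts/codex-openai-proxy.mjs`):

```shell
body='{"model":"anything","messages":[{"role":"user","content":"hi"}]}'
# Force the model and inject the reasoning effort, mirroring the proxy's behavior.
rewritten=$(printf '%s' "$body" | python3 -c '
import json, sys
req = json.load(sys.stdin)
req["model"] = "gpt-5.5"
req["reasoning_effort"] = "xhigh"
print(json.dumps(req, sort_keys=True))
')
echo "$rewritten"
```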

## Tunable environment variables

```bash
CODEX_OPENAI_PROXY_PORT=52032
CODEX_OPENAI_MODEL=gpt-5.5
CODEX_OPENAI_REASONING_EFFORT=xhigh
CODEX_MODEL_PROVIDER=codex_local_access
CODEX_OPENAI_FORCE_MODEL=1
```
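The values above are defaults, so standard shell fallback semantics apply: an unset variable takes the documented default. A minimal sketch of that behavior:

```shell
# Clear any inherited values so the fallbacks are visible.
unset CODEX_OPENAI_PROXY_PORT CODEX_OPENAI_MODEL
: "${CODEX_OPENAI_PROXY_PORT:=52032}"
: "${CODEX_OPENAI_MODEL:=gpt-5.5}"
echo "port=$CODEX_OPENAI_PROXY_PORT model=$CODEX_OPENAI_MODEL"
```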

Common variants:

- `CODEX_OPENAI_FORCE_MODEL=0`: do not override the model sent by the caller; fall back to `gpt-5.5` only when no model is given.
- `CODEX_OPENAI_HOST=host.lima.internal`: lets containers reach the host proxy when using Colima.
- `DOCKER_BIN=/path/to/docker`: sets the Docker CLI path.

Colima example:

```bash
CODEX_OPENAI_HOST=host.lima.internal ./scripts/start-local-codex.sh
```

## Troubleshooting

`Docker Desktop is not running or not ready.`

Check whether the Docker daemon is working:

```bash
docker info
```

On Apple Silicon, if the log says `This is the Intel version of Docker Desktop`, install the arm64 build of Docker Desktop before rerunning the start script.

`Codex OpenAI proxy did not become healthy.`

Check the log:

```bash
tail -80 .local/codex-openai-proxy.log
```

The usual causes are `~/.codex/auth.json` missing `OPENAI_API_KEY`, or `~/.codex/config.toml` missing `model_providers.codex_local_access.base_url`.
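A quick presence check for the key can be scripted. The blob below is a sample with a fake key; to check the real file, replace it with `$(cat ~/.codex/auth.json)`:

```shell
auth='{"OPENAI_API_KEY":"sk-example"}'  # sample only; never commit a real key
if printf '%s' "$auth" | grep -q '"OPENAI_API_KEY"'; then
  echo "key present"
else
  echo "key missing"
fi
```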

The AI feature reports the model as unavailable.

First confirm the model list:

```bash
curl http://localhost:18080/api/ai/models/chat
```

You should see `gpt-5.5` with the display name `GPT-5.5 Codex xhigh`. If it is missing, restart the AI service:

```bash
docker compose -f docker-compose.yml -f docker-compose.codex.yml restart aitoearn-ai
```

## Related files

- `scripts/start-local-codex.sh`: the one-command local start entry point.
- `scripts/codex-openai-proxy.mjs`: the local proxy from Codex to an OpenAI-compatible interface.
- `docker-compose.codex.yml`: Docker Compose override that points AiToEarn's AI requests at the local proxy.
- `project/aitoearn-backend/apps/aitoearn-ai/config/config.js`: AiToEarn AI model configuration, with `gpt-5.5` added.
11 changes: 11 additions & 0 deletions docker-compose.codex.yml
@@ -0,0 +1,11 @@
services:
aitoearn-ai:
environment:
OPENAI_BASE_URL: http://${CODEX_OPENAI_HOST:-host.docker.internal}:${CODEX_OPENAI_PROXY_PORT:-52032}/v1
OPENAI_API_KEY: codex-local-proxy
OPENAI_CHAT_MODEL: gpt-5.5

aitoearn-server:
environment:
OPENAI_BASE_URL: http://${CODEX_OPENAI_HOST:-host.docker.internal}:${CODEX_OPENAI_PROXY_PORT:-52032}/v1
OPENAI_API_KEY: codex-local-proxy
16 changes: 8 additions & 8 deletions docker-compose.yml
@@ -8,7 +8,7 @@ services:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: password
ports:
- "27017:27017"
- "${AITOEARN_MONGODB_PORT:-27017}:27017"
volumes:
- mongodb-data:/data/db
- mongodb-config:/data/configdb
@@ -38,7 +38,7 @@ services:
restart: unless-stopped
command: redis-server --requirepass password
ports:
- "6379:6379"
- "${AITOEARN_REDIS_PORT:-6379}:6379"
volumes:
- redis-data:/data
networks:
@@ -59,7 +59,7 @@ services:
RUSTFS_ACCESS_KEY: rustfsadmin
RUSTFS_SECRET_KEY: rustfsadmin
ports:
- "9001:9001"
- "${AITOEARN_RUSTFS_CONSOLE_PORT:-9001}:9001"
volumes:
- rustfs-data:/data
networks:
@@ -142,7 +142,7 @@ services:

SERVER_URL: http://aitoearn-server:3002

ASSETS_CONFIG: '{"provider":"s3","region":"us-east-1","bucketName":"aitoearn","endpoint":"http://rustfs.local:9000","publicEndpoint":"http://127.0.0.1:9000","cdnEndpoint":"http://127.0.0.1:8080/oss","accessKeyId":"rustfsadmin","secretAccessKey":"rustfsadmin","forcePathStyle":true}'
ASSETS_CONFIG: "{\"provider\":\"s3\",\"region\":\"us-east-1\",\"bucketName\":\"aitoearn\",\"endpoint\":\"http://rustfs.local:9000\",\"publicEndpoint\":\"http://127.0.0.1:${AITOEARN_RUSTFS_PORT:-9000}\",\"cdnEndpoint\":\"http://127.0.0.1:${AITOEARN_HTTP_PORT:-8080}/oss\",\"accessKeyId\":\"rustfsadmin\",\"secretAccessKey\":\"rustfsadmin\",\"forcePathStyle\":true}"

# AI Services
OPENAI_API_KEY: sk-placeholder
@@ -201,7 +201,7 @@ services:
JWT_SECRET: change-this-jwt-secret
INTERNAL_TOKEN: change-this-secret-token

ASSETS_CONFIG: '{"provider":"s3","region":"us-east-1","bucketName":"aitoearn","endpoint":"http://rustfs.local:9000","publicEndpoint":"http://127.0.0.1:9000","cdnEndpoint":"http://127.0.0.1:8080/oss","accessKeyId":"rustfsadmin","secretAccessKey":"rustfsadmin","forcePathStyle":true}'
ASSETS_CONFIG: "{\"provider\":\"s3\",\"region\":\"us-east-1\",\"bucketName\":\"aitoearn\",\"endpoint\":\"http://rustfs.local:9000\",\"publicEndpoint\":\"http://127.0.0.1:${AITOEARN_RUSTFS_PORT:-9000}\",\"cdnEndpoint\":\"http://127.0.0.1:${AITOEARN_HTTP_PORT:-8080}/oss\",\"accessKeyId\":\"rustfsadmin\",\"secretAccessKey\":\"rustfsadmin\",\"forcePathStyle\":true}"

MAIL_USER: ""
MAIL_PASS: ""
@@ -245,7 +245,7 @@ services:
# Relay (optional)
RELAY_SERVER_URL: https://aitoearn.ai/api
RELAY_API_KEY: ""
RELAY_CALLBACK_URL: http://127.0.0.1:8080/api/plat/relay-callback
RELAY_CALLBACK_URL: http://127.0.0.1:${AITOEARN_HTTP_PORT:-8080}/api/plat/relay-callback
healthcheck:
test: ["CMD", "node", "-e", "require('http').get('http://localhost:3002/health', (r) => { process.exit(r.statusCode === 200 ? 0 : 1) })"]
interval: 30s
@@ -282,8 +282,8 @@ services:
aitoearn-ai:
condition: service_healthy
ports:
- "8080:80"
- "9000:9000"
- "${AITOEARN_HTTP_PORT:-8080}:80"
- "${AITOEARN_RUSTFS_PORT:-9000}:9000"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
networks:
15 changes: 15 additions & 0 deletions project/aitoearn-backend/apps/aitoearn-ai/config/config.js
@@ -23,6 +23,7 @@ const {
VOLCENGINE_VOD_SPACE_NAME,
OPENAI_API_KEY,
OPENAI_BASE_URL,
OPENAI_CHAT_MODEL = 'gpt-5.5',
ANTHROPIC_BASE_URL,
ANTHROPIC_API_KEY,
GROK_API_KEY,
@@ -150,6 +151,20 @@ module.exports = {
},
models: {
chat: [
{
name: OPENAI_CHAT_MODEL,
description: 'GPT-5.5 Codex xhigh',
inputModalities: ['text', 'image'],
outputModalities: ['text'],
pricing: {
tiers: [
{
input: { text: '0', image: '0' },
output: { text: '0' },
},
],
},
},
{
name: 'gemini-3.1-pro-preview',
description: 'Gemini 3.1 Pro Preview',