Merged
21 commits
6aaa890
fix(security): default webhook server to loopback when tunnel is conf…
ilblackdragon Mar 15, 2026
df8bb07
fix conflict (#1190)
reidliu41 Mar 15, 2026
3f874e7
fix(feishu): resolve compilation errors in Feishu/Lark WASM channel …
reidliu41 Mar 15, 2026
bde0b77
fix(security): prevent metadata spoofing of internal job monitor flag…
ilblackdragon Mar 15, 2026
57c397b
docs: mention MiniMax as built-in provider in all READMEs (#1209)
octo-patch Mar 15, 2026
e81fb7e
refactor(setup): extract init logic from wizard into owning modules (…
ilblackdragon Mar 16, 2026
81724ca
fix: Telegram bot token validation fails intermittently (HTTP 404) (#…
nickpismenkov Mar 16, 2026
1b59eb6
feat: Reuse Codex CLI OAuth tokens for ChatGPT backend LLM calls (#693)
ZeroTrust01 Mar 16, 2026
3e0e35d
docs(extensions): document relay manager init order (#928)
G7CNF Mar 16, 2026
f618166
feat(heartbeat): fire_at time-of-day scheduling with IANA timezone (#…
nick-stebbings Mar 16, 2026
58a3eb1
fix(worker): prevent orphaned tool_results and fix parallel merging (…
zmanian Mar 16, 2026
9e41b8a
fix(llm): persist refreshed Anthropic OAuth token after Keychain re-r…
zmanian Mar 16, 2026
596d17f
fix(jobs): make completed->completed transition idempotent to prevent…
zmanian Mar 16, 2026
0c31da4
feat(sandbox): add retry logic for transient container failures (#1232)
zmanian Mar 16, 2026
a357972
feat(config): unify config resolution with Settings fallback (Phase 2…
reidliu41 Mar 16, 2026
946c040
feat(telegram): add forum topic support with thread routing (#1199)
arein Mar 16, 2026
fe53f69
chore: promote staging to staging-promote/57c397bd-23120362128 (2026-…
ironclaw-ci[bot] Mar 16, 2026
63a2355
feat: verify telegram owner during hot activation (#1157)
henrypark133 Mar 16, 2026
e212c00
Merge pull request #1246 from nearai/staging-promote/63a23550-2315134…
henrypark133 Mar 16, 2026
8ba8def
Merge pull request #1239 from nearai/staging-promote/946c040f-2313422…
henrypark133 Mar 16, 2026
409a2ab
Merge pull request #1231 from nearai/staging-promote/57c397bd-2312036…
henrypark133 Mar 16, 2026
5 changes: 5 additions & 0 deletions .env.example
@@ -18,6 +5,11 @@ DATABASE_POOL_SIZE=10

 # === OpenAI Direct ===
 # OPENAI_API_KEY=sk-...
+# Reuse Codex CLI auth.json instead of setting OPENAI_API_KEY manually.
+# Works with both OpenAI API-key mode and Codex ChatGPT OAuth mode.
+# In ChatGPT mode this uses the private `chatgpt.com/backend-api/codex` endpoint.
+# LLM_USE_CODEX_AUTH=true
+# CODEX_AUTH_PATH=~/.codex/auth.json

 # === NEAR AI (Chat Completions API) ===
 # Two auth modes:
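The `LLM_USE_CODEX_AUTH` fallback described in the `.env.example` comments above can be sketched roughly as follows. This is a minimal illustration, not IronClaw's actual implementation; the function name and the assumed `auth.json` layout (an `OPENAI_API_KEY` field in API-key mode, a `tokens` object in ChatGPT OAuth mode) are assumptions:

```python
import json
from pathlib import Path


def resolve_openai_credentials(env: dict) -> dict:
    """Hypothetical credential resolver mirroring the .env.example comments."""
    if env.get("LLM_USE_CODEX_AUTH", "").lower() == "true":
        # Fall back to the documented default path when CODEX_AUTH_PATH is unset.
        path = Path(env.get("CODEX_AUTH_PATH", "~/.codex/auth.json")).expanduser()
        auth = json.loads(path.read_text())
        if auth.get("OPENAI_API_KEY"):  # Codex CLI stored an API key
            return {"mode": "api_key", "key": auth["OPENAI_API_KEY"]}
        # Otherwise assume ChatGPT OAuth mode: reuse the stored token bundle.
        return {"mode": "chatgpt_oauth", "tokens": auth.get("tokens", {})}
    # Codex reuse disabled: require the key to be set directly.
    return {"mode": "api_key", "key": env["OPENAI_API_KEY"]}
```

Either way the caller gets one credential shape, so the rest of the LLM client does not care which source was used.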
2 changes: 1 addition & 1 deletion .github/workflows/e2e.yml
@@ -52,7 +52,7 @@ jobs:
 - group: features
   files: "tests/e2e/scenarios/test_skills.py tests/e2e/scenarios/test_tool_approval.py"
 - group: extensions
-  files: "tests/e2e/scenarios/test_extensions.py tests/e2e/scenarios/test_extension_oauth.py tests/e2e/scenarios/test_wasm_lifecycle.py tests/e2e/scenarios/test_tool_execution.py tests/e2e/scenarios/test_pairing.py tests/e2e/scenarios/test_oauth_credential_fallback.py tests/e2e/scenarios/test_routine_oauth_credential_injection.py"
+  files: "tests/e2e/scenarios/test_extensions.py tests/e2e/scenarios/test_extension_oauth.py tests/e2e/scenarios/test_telegram_token_validation.py tests/e2e/scenarios/test_wasm_lifecycle.py tests/e2e/scenarios/test_tool_execution.py tests/e2e/scenarios/test_pairing.py tests/e2e/scenarios/test_oauth_credential_fallback.py tests/e2e/scenarios/test_routine_oauth_credential_injection.py"
 steps:
 - uses: actions/checkout@v6
6 changes: 6 additions & 0 deletions .gitignore
@@ -33,3 +33,9 @@ trace_*.json
 # Local Claude Code settings (machine-specific, should not be committed)
 .claude/settings.local.json
 .worktrees/
+
+# Python cache
+__pycache__/
+*.pyc
+*.pyo
+*.pyd
9 changes: 5 additions & 4 deletions Cargo.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions Cargo.toml
@@ -40,6 +40,7 @@ eula = false
 tokio = { version = "1", features = ["full"] }
 tokio-stream = { version = "0.1", features = ["sync"] }
 futures = "0.3"
+eventsource-stream = "0.2"

 # HTTP client
 reqwest = { version = "0.12", default-features = false, features = ["json", "multipart", "rustls-tls-native-roots", "stream"] }
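The new `eventsource-stream` dependency points at Server-Sent Events parsing, the wire format OpenAI-style APIs use for streamed completions. As a rough sketch of what such a parser does (the Rust crate handles `event:`/`id:` fields, retry hints, and incremental chunks; this simplified version only collects `data:` payloads from a complete buffer):

```python
def parse_sse_data(raw: str) -> list[str]:
    """Collect the data payloads from an SSE stream (simplified sketch)."""
    events = []
    for block in raw.split("\n\n"):  # a blank line terminates each event
        data = [line[5:].lstrip(" ") for line in block.splitlines()
                if line.startswith("data:")]
        if data:
            # Multiple data: lines within one event are joined with newlines.
            events.append("\n".join(data))
    return events
```

For a streaming chat response, each collected payload would typically be one JSON delta chunk, with a final `[DONE]` sentinel.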
2 changes: 1 addition & 1 deletion FEATURE_PARITY.md
@@ -68,7 +68,7 @@ This document tracks feature parity between IronClaw (Rust implementation) and O
 | REPL (simple) | ✅ | ✅ | - | For testing |
 | WASM channels | ❌ | ✅ | - | IronClaw innovation |
 | WhatsApp | ✅ | ❌ | P1 | Baileys (Web), same-phone mode with echo detection |
-| Telegram | ✅ | ✅ | - | WASM channel(MTProto), DM pairing, caption, /start, bot_username, DM topics |
+| Telegram | ✅ | ✅ | - | WASM channel(MTProto), DM pairing, caption, /start, bot_username, DM topics, setup-time owner verification |
 | Discord | ✅ | ❌ | P2 | discord.js, thread parent binding inheritance |
 | Signal | ✅ | ✅ | P2 | signal-cli daemonPC, SSE listener HTTP/JSON-R, user/group allowlists, DM pairing |
 | Slack | ✅ | ✅ | - | WASM tool |
15 changes: 11 additions & 4 deletions README.md
@@ -166,13 +166,20 @@ written to `~/.ironclaw/.env` so they are available before the database connects

 ### Alternative LLM Providers

-IronClaw defaults to NEAR AI but works with any OpenAI-compatible endpoint.
-Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**,
-**Ollama** (local), and self-hosted servers like **vLLM** or **LiteLLM**.
+IronClaw defaults to NEAR AI but supports many LLM providers out of the box.
+Built-in providers include **Anthropic**, **OpenAI**, **Google Gemini**, **MiniMax**,
+**Mistral**, and **Ollama** (local). OpenAI-compatible services like **OpenRouter**
+(300+ models), **Together AI**, **Fireworks AI**, and self-hosted servers (**vLLM**,
+**LiteLLM**) are also supported.

-Select *"OpenAI-compatible"* in the wizard, or set environment variables directly:
+Select your provider in the wizard, or set environment variables directly:

 ```env
+# Example: MiniMax (built-in, 204K context)
+LLM_BACKEND=minimax
+MINIMAX_API_KEY=...
+
+# Example: OpenAI-compatible endpoint
 LLM_BACKEND=openai_compatible
 LLM_BASE_URL=https://openrouter.ai/api/v1
 LLM_API_KEY=sk-or-...
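The variables in the README example above map directly onto a standard OpenAI-compatible request. As a hedged sketch (it builds the request rather than sending it, and the exact headers IronClaw itself sends are not shown in this diff):

```python
def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat completion request (illustrative only)."""
    return {
        # LLM_BASE_URL is the API root; the chat endpoint is appended to it.
        "url": base_url.rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # LLM_API_KEY
            "Content-Type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

With `LLM_BASE_URL=https://openrouter.ai/api/v1`, for instance, requests would go to `https://openrouter.ai/api/v1/chat/completions`, which is why any server speaking this protocol (vLLM, LiteLLM, etc.) slots in unchanged.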
14 changes: 11 additions & 3 deletions README.ru.md
@@ -163,12 +163,20 @@ ironclaw onboard

 ### Alternative LLM Providers

-IronClaw defaults to NEAR AI but works with any OpenAI-compatible endpoint.
-Popular options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **Ollama** (local), and self-hosted servers such as **vLLM** or **LiteLLM**.
+IronClaw defaults to NEAR AI but supports many LLM providers out of the box.
+Built-in providers include **Anthropic**, **OpenAI**, **Google Gemini**, **MiniMax**,
+**Mistral**, and **Ollama** (local). OpenAI-compatible services are also supported:
+**OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, and self-hosted servers
+(**vLLM**, **LiteLLM**).

-Select *"OpenAI-compatible"* in the setup wizard, or set environment variables directly:
+Select your provider in the setup wizard, or set environment variables directly:

 ```env
+# Example: MiniMax (built-in, 204K context)
+LLM_BACKEND=minimax
+MINIMAX_API_KEY=...
+
+# Example: OpenAI-compatible endpoint
 LLM_BACKEND=openai_compatible
 LLM_BASE_URL=https://openrouter.ai/api/v1
 LLM_API_KEY=sk-or-...
11 changes: 8 additions & 3 deletions README.zh-CN.md
@@ -163,12 +163,17 @@ ironclaw onboard

 ### Alternative LLM Providers

-IronClaw defaults to NEAR AI but is compatible with any OpenAI-compatible endpoint.
-Common options include **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, **Ollama** (local deployment), and self-hosted servers such as **vLLM** and **LiteLLM**.
+IronClaw defaults to NEAR AI but supports many LLM providers out of the box.
+Built-in providers include **Anthropic**, **OpenAI**, **Google Gemini**, **MiniMax**, **Mistral**, and **Ollama** (local deployment). OpenAI-compatible services are also supported, such as **OpenRouter** (300+ models), **Together AI**, **Fireworks AI**, and self-hosted servers (**vLLM**, **LiteLLM**).

-Select *"OpenAI-compatible"* in the wizard, or set environment variables directly:
+Select your provider in the wizard, or set environment variables directly:

 ```env
+# Example: MiniMax (built-in, 204K context)
+LLM_BACKEND=minimax
+MINIMAX_API_KEY=...
+
+# Example: OpenAI-compatible endpoint
 LLM_BACKEND=openai_compatible
 LLM_BASE_URL=https://openrouter.ai/api/v1
 LLM_API_KEY=sk-or-...