IronClaw defaults to NEAR AI for model access, but also supports several providers natively (Anthropic, OpenAI, Ollama, AWS Bedrock, and others) as well as any OpenAI-compatible endpoint. This guide covers the most common configurations.
| Provider | Backend value | Requires API key | Notes |
|---|---|---|---|
| NEAR AI | nearai | OAuth (browser) | Default; multi-model |
| Anthropic | anthropic | ANTHROPIC_API_KEY | Claude models |
| OpenAI | openai | OPENAI_API_KEY | GPT models |
| Google Gemini | gemini | GEMINI_API_KEY | Gemini models |
| io.net | ionet | IONET_API_KEY | Intelligence API |
| Mistral | mistral | MISTRAL_API_KEY | Mistral models |
| Yandex AI Studio | yandex | YANDEX_API_KEY | YandexGPT models |
| MiniMax | minimax | MINIMAX_API_KEY | MiniMax-M2.5 models |
| Cloudflare Workers AI | cloudflare | CLOUDFLARE_API_KEY | Access to Workers AI |
| Ollama | ollama | No | Local inference |
| AWS Bedrock | bedrock | AWS credentials | Native Converse API |
| OpenRouter | openai_compatible | LLM_API_KEY | 300+ models |
| Together AI | openai_compatible | LLM_API_KEY | Fast inference |
| Fireworks AI | openai_compatible | LLM_API_KEY | Fast inference |
| vLLM / LiteLLM | openai_compatible | Optional | Self-hosted |
| LM Studio | openai_compatible | No | Local GUI |
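Most keyed providers in the table are configured the same way in .env: set LLM_BACKEND to the backend value and set the provider's key variable. A minimal sketch, using Mistral as an example (the key value is a placeholder):

```shell
# Same two-variable pattern for any keyed backend from the table above
LLM_BACKEND=mistral
MISTRAL_API_KEY=...
```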
NEAR AI, the default, requires no additional configuration. On first run, ironclaw onboard opens a browser
for OAuth authentication. Credentials are saved to ~/.ironclaw/session.json.
Optional environment overrides:

```
NEARAI_MODEL=claude-3-5-sonnet-20241022
NEARAI_BASE_URL=https://private.near.ai
```

To use Anthropic directly:

```
LLM_BACKEND=anthropic
ANTHROPIC_API_KEY=sk-ant-...
```

Popular models: claude-sonnet-4-20250514, claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
To use OpenAI directly:

```
LLM_BACKEND=openai
OPENAI_API_KEY=sk-...
```

Popular models: gpt-4o, gpt-4o-mini, o3-mini
Install Ollama from ollama.com, pull a model (ollama pull llama3.2), then:

```
LLM_BACKEND=ollama
OLLAMA_MODEL=llama3.2
# OLLAMA_BASE_URL=http://localhost:11434  # default
```
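If Ollama runs on another machine, only the base URL needs to change. A sketch (the address below is hypothetical):

```shell
# Point IronClaw at a remote Ollama host instead of localhost
LLM_BACKEND=ollama
OLLAMA_MODEL=llama3.2
OLLAMA_BASE_URL=http://192.168.1.50:11434
```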
MiniMax provides high-performance language models with 204,800-token context windows.

```
LLM_BACKEND=minimax
MINIMAX_API_KEY=...
```

Available models: MiniMax-M2.5 (default), MiniMax-M2.5-highspeed

To use the China mainland endpoint, set:

```
MINIMAX_BASE_URL=https://api.minimaxi.com/v1
```

The Bedrock backend uses the native AWS Converse API via aws-sdk-bedrockruntime. It supports standard AWS
authentication methods: IAM credentials, SSO profiles, and instance roles.
Build prerequisite: the aws-lc-sys crate (a transitive dependency via the AWS SDK) requires CMake to compile. Install it before building with --features bedrock:

- macOS: brew install cmake
- Ubuntu/Debian: sudo apt install cmake
- Fedora: sudo dnf install cmake
```
LLM_BACKEND=bedrock
BEDROCK_MODEL=anthropic.claude-opus-4-6-v1
BEDROCK_REGION=us-east-1
BEDROCK_CROSS_REGION=us
# AWS_PROFILE=my-sso-profile  # optional, for named profiles
```

The AWS SDK credential chain automatically resolves credentials from environment
variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY), the shared credentials file
(~/.aws/credentials), SSO profiles, and EC2/ECS instance roles.
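The resolution order above can be sketched as a simple first-match search. This is an illustration of the order described in the prose, not the SDK's actual code, and the demo values are hypothetical:

```shell
# First-match sketch of the AWS credential chain: env vars, then the
# shared credentials file, then SSO/instance roles (resolved by the SDK).
resolve_source() {
  # $1 = access key env value, $2 = path to a credentials file
  if [ -n "$1" ]; then echo "environment variables"
  elif [ -f "$2" ]; then echo "shared credentials file"
  else echo "SSO profile or instance role"
  fi
}
resolve_source "AKIA_DEMO" "/nonexistent"   # env var wins when set
resolve_source "" "/nonexistent"            # falls through to SSO/role
```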
Set BEDROCK_CROSS_REGION to route requests across AWS regions for capacity:
| Prefix | Routing |
|---|---|
| us | US regions (us-east-1, us-east-2, us-west-2) |
| eu | European regions |
| apac | Asia-Pacific regions |
| global | All commercial AWS regions |
| (unset) | Single-region only |
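As a concrete illustration: in Bedrock, cross-region routing works through inference profiles whose IDs prepend the region prefix to the model ID (this describes Bedrock's naming scheme; how IronClaw assembles the ID internally is an assumption):

```shell
# With BEDROCK_CROSS_REGION=us, a model ID is routed via a
# "us."-prefixed inference profile ID
prefix="us"
model="anthropic.claude-sonnet-4-5-20250929-v1:0"
echo "${prefix}.${model}"   # us.anthropic.claude-sonnet-4-5-20250929-v1:0
```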
| Model | ID |
|---|---|
| Claude Opus 4.6 | anthropic.claude-opus-4-6-v1 |
| Claude Sonnet 4.5 | anthropic.claude-sonnet-4-5-20250929-v1:0 |
| Claude Haiku 4.5 | anthropic.claude-haiku-4-5-20251001-v1:0 |
| Amazon Nova Pro | amazon.nova-pro-v1:0 |
| Llama 4 Maverick | meta.llama4-maverick-17b-instruct-v1:0 |
All providers below use LLM_BACKEND=openai_compatible. Set LLM_BASE_URL to the
provider's OpenAI-compatible endpoint and LLM_API_KEY to your API key.
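One backend covers all of these providers because they serve the same OpenAI routes beneath their base URL; the client appends the standard paths. A quick sketch using OpenRouter's base URL:

```shell
# The client appends standard OpenAI paths to LLM_BASE_URL
LLM_BASE_URL="https://openrouter.ai/api/v1"
echo "${LLM_BASE_URL}/chat/completions"
echo "${LLM_BASE_URL}/models"
```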
OpenRouter routes to 300+ models from a single API key.
```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_API_KEY=sk-or-...
LLM_MODEL=anthropic/claude-sonnet-4
```

Popular OpenRouter model IDs:
| Model | ID |
|---|---|
| Claude Sonnet 4 | anthropic/claude-sonnet-4 |
| GPT-4o | openai/gpt-4o |
| Llama 4 Maverick | meta-llama/llama-4-maverick |
| Gemini 2.0 Flash | google/gemini-2.0-flash-001 |
| Mistral Small | mistralai/mistral-small-3.1-24b-instruct |
Browse all models at openrouter.ai/models.
Together AI provides fast inference for open-source models.
```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=https://api.together.xyz/v1
LLM_API_KEY=...
LLM_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

Popular Together AI model IDs:
| Model | ID |
|---|---|
| Llama 3.3 70B | meta-llama/Llama-3.3-70B-Instruct-Turbo |
| DeepSeek R1 | deepseek-ai/DeepSeek-R1 |
| Qwen 2.5 72B | Qwen/Qwen2.5-72B-Instruct-Turbo |
Fireworks AI offers fast inference with compound AI system support.
```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=https://api.fireworks.ai/inference/v1
LLM_API_KEY=fw_...
LLM_MODEL=accounts/fireworks/models/llama4-maverick-instruct-basic
```

For self-hosted inference servers such as vLLM:
```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=http://localhost:8000/v1
LLM_API_KEY=token-abc123  # set to any string if auth is not configured
LLM_MODEL=meta-llama/Llama-3.1-8B-Instruct
```

LiteLLM proxy (forwards to any backend, including Bedrock, Vertex, and Azure):
```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=http://localhost:4000/v1
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o  # as configured in litellm config.yaml
```

Start LM Studio's local server, then:
```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=http://localhost:1234/v1
LLM_MODEL=llama-3.2-3b-instruct-q4_K_M
# LLM_API_KEY is not required for LM Studio
```

Instead of editing .env manually, run the onboarding wizard:

```
ironclaw onboard
```

Select "OpenAI-compatible" for OpenRouter, Together AI, Fireworks, vLLM, LiteLLM, or LM Studio. You will be prompted for the base URL and (optionally) an API key. The model name is configured in the following step.