---
title: Custom model providers
sidebarTitle: Model providers
description: Configure any LangChain-compatible model provider for the Deep Agents CLI
---
The Deep Agents CLI supports any chat model provider compatible with LangChain, which unlocks virtually any LLM that supports tool calling. Any service that exposes an OpenAI-compatible or Anthropic-compatible API also works out of the box; see Compatible APIs.
The CLI integrates automatically with the following model providers — no extra configuration needed beyond installing the relevant provider package.
- **Install provider packages**

  Each model provider requires installing its corresponding LangChain integration package. These are available as optional extras when installing the CLI:
  ```shell
  # Quick install with chosen providers (OpenAI included automatically)
  DEEPAGENTS_EXTRAS="anthropic,groq" curl -LsSf https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/libs/cli/scripts/install.sh | bash

  # Or install directly with uv
  uv tool install 'deepagents-cli[anthropic,groq]'

  # Add additional packages at a later date
  uv tool upgrade deepagents-cli --with langchain-ollama

  # All providers
  uv tool install 'deepagents-cli[anthropic,bedrock,cohere,deepseek,fireworks,google-genai,groq,huggingface,ibm,mistralai,nvidia,ollama,openai,openrouter,perplexity,vertexai,xai]'
  ```
- **Set credentials**

  Most providers require an API key. Set the appropriate environment variable listed in the table below. Some providers use other credentials (for example, Vertex AI uses `GOOGLE_CLOUD_PROJECT` plus Application Default Credentials). Refer to each integration package's docs for details.
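For API-key providers, setting credentials is just exporting the variable before launching the CLI. A minimal sketch (the key values below are placeholders, not real keys; the variable names come from the table below):

```shell
# Export the credential env vars for the providers you plan to use.
# Values are placeholders; substitute your own keys.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export GROQ_API_KEY="gsk-placeholder"

# Confirm the variables are set in the current shell
echo "${ANTHROPIC_API_KEY:+ANTHROPIC_API_KEY is set}"
echo "${GROQ_API_KEY:+GROQ_API_KEY is set}"
```

Add the exports to your shell profile to make them persist across sessions.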
Using a provider not listed here? See Arbitrary providers — any LangChain-compatible provider can be used in the CLI with some additional setup.
:::python
| Provider | Package | Credential env var | Model profiles |
|---|---|---|---|
| OpenAI | `langchain-openai` | `OPENAI_API_KEY` | ✅ |
| Azure OpenAI | `langchain-openai` | `AZURE_OPENAI_API_KEY` | ✅ |
| Anthropic | `langchain-anthropic` | `ANTHROPIC_API_KEY` | ✅ |
| Google Gemini API | `langchain-google-genai` | `GOOGLE_API_KEY` | ✅ |
| Google Vertex AI | `langchain-google-vertexai` | `GOOGLE_CLOUD_PROJECT` | ✅ |
| AWS Bedrock | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | ✅ |
| AWS Bedrock Converse | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | ✅ |
| Hugging Face | `langchain-huggingface` | `HUGGINGFACEHUB_API_TOKEN` | ✅ |
| Ollama | `langchain-ollama` | Optional | ❌ |
| Groq | `langchain-groq` | `GROQ_API_KEY` | ✅ |
| Cohere | `langchain-cohere` | `COHERE_API_KEY` | ❌ |
| Fireworks | `langchain-fireworks` | `FIREWORKS_API_KEY` | ✅ |
| Together | `langchain-together` | `TOGETHER_API_KEY` | ❌ |
| Mistral AI | `langchain-mistralai` | `MISTRAL_API_KEY` | ✅ |
| DeepSeek | `langchain-deepseek` | `DEEPSEEK_API_KEY` | ✅ |
| IBM (watsonx.ai) | `langchain-ibm` | `WATSONX_APIKEY` | ❌ |
| Nvidia | `langchain-nvidia-ai-endpoints` | `NVIDIA_API_KEY` | ❌ |
| xAI | `langchain-xai` | `XAI_API_KEY` | ✅ |
| Perplexity | `langchain-perplexity` | `PPLX_API_KEY` | ✅ |
| OpenRouter | `langchain-openrouter` | `OPENROUTER_API_KEY` | ✅ |
:::
:::js
| Provider | Package | Credential env var | Model profiles |
|---|---|---|---|
| OpenAI | `langchain-openai` | `OPENAI_API_KEY` | ✅ |
| Anthropic | `langchain-anthropic` | `ANTHROPIC_API_KEY` | ✅ |
| Google Gemini API | `langchain-google-genai` | `GOOGLE_API_KEY` | ✅ |
| Google Vertex AI | `langchain-google-vertexai` | `GOOGLE_CLOUD_PROJECT` | ✅ |
| AWS Bedrock | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | ✅ |
| AWS Bedrock Converse | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | ✅ |
| Ollama | `langchain-ollama` | Optional | ❌ |
| Groq | `langchain-groq` | `GROQ_API_KEY` | ✅ |
| Cohere | `langchain-cohere` | `COHERE_API_KEY` | ❌ |
| Fireworks | `langchain-fireworks` | `FIREWORKS_API_KEY` | ✅ |
| DeepSeek | `langchain-deepseek` | `DEEPSEEK_API_KEY` | ✅ |
| xAI | `langchain-xai` | `XAI_API_KEY` | ✅ |
| Perplexity | `langchain-perplexity` | `PPLX_API_KEY` | ✅ |
:::
To switch models in the CLI, either:

1. Use the interactive model switcher with the `/model` command. This displays available models sourced from each installed LangChain provider package's model profiles.

   These profiles are not an exhaustive list of available models. If the model you want isn't shown, use option 2 instead (useful for newly released models that haven't been added to the profiles yet). See [Which models appear in the switcher](#which-models-appear-in-the-switcher) for the full set of criteria.

2. Specify a model name directly as an argument, e.g. `/model openai:gpt-4o`. You can use any model supported by the chosen provider, regardless of whether it appears in the list from option 1. The model name is passed to the API request.

3. Specify the model at launch via `--model`, e.g. `deepagents --model openai:gpt-4o`.
The interactive `/model` selector builds its list dynamically; it is not a hardcoded list baked into the CLI. A model appears in the switcher when all of the following are true:

- **The provider package is installed.** Each provider package (e.g. `langchain-anthropic`, `langchain-openai`) must be installed alongside `deepagents-cli`, either as an install extra (e.g. `uv tool install 'deepagents-cli[anthropic]'`) or added later with `uv tool upgrade deepagents-cli --with <package>`. If a package is missing, its entire provider section is absent from the switcher.
- **The model has a profile with `tool_calling` enabled.** The CLI is a tool-calling agent, so it filters out models whose profile data indicates they don't support tool calling. This is the most common reason a model is missing from the list.
- **The model accepts and produces text.** Models whose profile explicitly sets `text_inputs` or `text_outputs` to `false` (e.g. embedding or image-generation models) are excluded.
Models defined in `config.toml` under `[models.providers.<name>].models` bypass the profile filter: they always appear in the switcher regardless of profile metadata. This is the recommended way to add models that are missing from the list.
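As a sketch, such an entry might look like the following in `~/.deepagents/config.toml` (the provider name and model IDs here are illustrative):

```toml
# Models listed here always appear in the switcher,
# even if they have no profile metadata.
[models.providers.openai]
models = ["gpt-4o", "my-brand-new-model"]
```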
| Symptom | Likely cause | Fix |
|---|---|---|
| Entire provider missing from switcher | Provider package not installed | Install the package (e.g. `uv tool upgrade deepagents-cli --with langchain-groq`) |
| Provider shown but specific model missing | Model profile has `tool_calling: false`, or no profile exists | Add the model to `[models.providers.<name>].models` in `config.toml`, or use `/model <provider>:<model>` directly |
| Provider shows ⚠ "missing credentials" | API key env var not set | Set the credential env var from the Provider reference table |
| Provider shows ? "credentials unknown" | Provider uses non-standard auth that the CLI can't verify | Credentials may still work; try switching to the model. If auth fails, check the provider's docs |
You can set a persistent default model that will be used for all future CLI launches:

- **Via model selector:** Open `/model`, navigate to the desired model, and press `Ctrl+S` to pin it as the default. Pressing `Ctrl+S` again on the current default clears it.
- **Via command:** `/model --default provider:model` (e.g. `/model --default anthropic:claude-opus-4-6`)
- **Via config file:** Set `[models].default` in `~/.deepagents/config.toml` (see Configuration).
- **From the shell:** `deepagents --default-model anthropic:claude-opus-4-6`
To view the current default: `deepagents --default-model`

To clear the default:

- **From the shell:** `deepagents --clear-default-model`
- **Via command:** `/model --default --clear`
- **Via model selector:** Press `Ctrl+S` on the currently pinned default model.
Without a pinned default, the CLI falls back to the most recently used model.
When the CLI launches, it resolves which model to use in the following order:
1. `--model` flag: always wins when provided.
2. `[models].default` in `~/.deepagents/config.toml`: the user's intentional long-term preference.
3. `[models].recent` in `~/.deepagents/config.toml`: the last model switched to via `/model`. Written automatically; never overwrites `[models].default`.
4. Environment auto-detection: falls back to the first available startup credential, checked in order: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `GOOGLE_CLOUD_PROJECT` (Vertex AI).
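For reference, the two config keys in that precedence chain live in `~/.deepagents/config.toml`. A minimal sketch (the model names are illustrative):

```toml
[models]
# Pinned long-term preference (checked before the recent model)
default = "anthropic:claude-opus-4-6"
# Written automatically each time you switch via /model
recent = "openai:gpt-4o"
```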
This startup fallback intentionally checks only those four credentials. Other supported providers (for example, Groq) are still available via `--model`, `/model`, and saved defaults (`[models].default` / `[models].recent`).
Model routers like OpenRouter and LiteLLM provide access to models from multiple providers through a single endpoint.
Use the dedicated integration packages for these services:
:::python
| Router | Package | Config |
|---|---|---|
| OpenRouter | `langchain-openrouter` | `openrouter:<model>` (built-in, see Provider reference) |
:::
:::js
| Router | Package |
|---|---|
| OpenRouter | `langchain-openrouter` |
:::
OpenRouter is a built-in provider; install the package and use it directly:

```shell
uv tool install 'deepagents-cli[openrouter]'
```

For detailed configuration of provider params, profile overrides, custom base URLs, compatible APIs, arbitrary providers, and lifecycle hooks, see Configuration.