---
title: Custom model providers
sidebarTitle: Model providers
description: Configure any LangChain-compatible model provider for the Deep Agents CLI
---

The Deep Agents CLI supports any chat model provider compatible with LangChain, which unlocks virtually any LLM that supports tool calling. Any service that exposes an OpenAI-compatible or Anthropic-compatible API also works out of the box — see Compatible APIs.

## Quick start

The CLI integrates automatically with the following model providers — no extra configuration needed beyond installing the relevant provider package.

1. **Install provider packages**

   Each model provider requires its corresponding LangChain integration package. These are available as optional extras when installing the CLI:

   ```shell
   # Quick install with chosen providers (OpenAI included automatically)
   DEEPAGENTS_EXTRAS="anthropic,groq" curl -LsSf https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/libs/cli/scripts/install.sh | bash

   # Or install directly with uv
   uv tool install 'deepagents-cli[anthropic,groq]'

   # Add more provider packages later
   uv tool upgrade deepagents-cli --with langchain-ollama

   # All providers
   uv tool install 'deepagents-cli[anthropic,bedrock,cohere,deepseek,fireworks,google-genai,groq,huggingface,ibm,mistralai,nvidia,ollama,openai,openrouter,perplexity,vertexai,xai]'
   ```

2. **Set credentials**

   Most providers require an API key. Set the appropriate environment variable listed in the table below. Some providers use other credentials (for example, Vertex AI uses `GOOGLE_CLOUD_PROJECT` plus ADC, Application Default Credentials). Refer to each integration package's docs for details.
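As a sketch, credentials are typically exported as environment variables before launching the CLI. The key values below are placeholders — substitute your own:

```shell
# Placeholder values — replace with your real keys
export ANTHROPIC_API_KEY="sk-ant-..."
export GROQ_API_KEY="gsk-..."

# Vertex AI uses a project ID plus Application Default Credentials
# rather than a single API key
export GOOGLE_CLOUD_PROJECT="my-gcp-project"
```

Adding these lines to your shell profile (e.g. `~/.bashrc`) makes them persist across sessions.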

## Provider reference

Using a provider not listed here? See Arbitrary providers — any LangChain-compatible provider can be used in the CLI with some additional setup.

:::python

| Provider | Package | Credential env var | Model profiles |
| --- | --- | --- | --- |
| OpenAI | `langchain-openai` | `OPENAI_API_KEY` | |
| Azure OpenAI | `langchain-openai` | `AZURE_OPENAI_API_KEY` | |
| Anthropic | `langchain-anthropic` | `ANTHROPIC_API_KEY` | |
| Google Gemini API | `langchain-google-genai` | `GOOGLE_API_KEY` | |
| Google Vertex AI | `langchain-google-vertexai` | `GOOGLE_CLOUD_PROJECT` | |
| AWS Bedrock | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | |
| AWS Bedrock Converse | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | |
| Hugging Face | `langchain-huggingface` | `HUGGINGFACEHUB_API_TOKEN` | |
| Ollama | `langchain-ollama` | Optional | |
| Groq | `langchain-groq` | `GROQ_API_KEY` | |
| Cohere | `langchain-cohere` | `COHERE_API_KEY` | |
| Fireworks | `langchain-fireworks` | `FIREWORKS_API_KEY` | |
| Together | `langchain-together` | `TOGETHER_API_KEY` | |
| Mistral AI | `langchain-mistralai` | `MISTRAL_API_KEY` | |
| DeepSeek | `langchain-deepseek` | `DEEPSEEK_API_KEY` | |
| IBM (watsonx.ai) | `langchain-ibm` | `WATSONX_APIKEY` | |
| Nvidia | `langchain-nvidia-ai-endpoints` | `NVIDIA_API_KEY` | |
| xAI | `langchain-xai` | `XAI_API_KEY` | |
| Perplexity | `langchain-perplexity` | `PPLX_API_KEY` | |
| OpenRouter | `langchain-openrouter` | `OPENROUTER_API_KEY` | |

:::

:::js

| Provider | Package | Credential env var | Model profiles |
| --- | --- | --- | --- |
| OpenAI | `langchain-openai` | `OPENAI_API_KEY` | |
| Anthropic | `langchain-anthropic` | `ANTHROPIC_API_KEY` | |
| Google Gemini API | `langchain-google-genai` | `GOOGLE_API_KEY` | |
| Google Vertex AI | `langchain-google-vertexai` | `GOOGLE_CLOUD_PROJECT` | |
| AWS Bedrock | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | |
| AWS Bedrock Converse | `langchain-aws` | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | |
| Ollama | `langchain-ollama` | Optional | |
| Groq | `langchain-groq` | `GROQ_API_KEY` | |
| Cohere | `langchain-cohere` | `COHERE_API_KEY` | |
| Fireworks | `langchain-fireworks` | `FIREWORKS_API_KEY` | |
| DeepSeek | `langchain-deepseek` | `DEEPSEEK_API_KEY` | |
| xAI | `langchain-xai` | `XAI_API_KEY` | |
| Perplexity | `langchain-perplexity` | `PPLX_API_KEY` | |
:::
A **[model profile](/oss/langchain/models#model-profiles)** is a bundle of metadata (model name, default parameters, capabilities, etc.) that ships with a provider package, largely powered by the [models.dev](https://models.dev/) project. Providers that include model profiles have their models listed automatically in the interactive `/model` switcher, subject to the [filtering criteria](#which-models-appear-in-the-switcher) (notably, `tool_calling` must be enabled). Providers without model profiles require you to specify the model name directly or add models via `config.toml`.

## Switching models

To switch models in the CLI, use one of the following:

1. Use the interactive model switcher with the `/model` command. This displays available models sourced from each installed LangChain provider package's model profiles.

   These profiles are not an exhaustive list of available models. If the model you want isn't shown, use option 2 instead (useful for newly released models that haven't been added to the profiles yet). See [Which models appear in the switcher](#which-models-appear-in-the-switcher) for the full set of criteria.

2. Specify a model name directly as an argument, e.g. `/model openai:gpt-4o`. You can use any model supported by the chosen provider, regardless of whether it appears in the list from option 1. The model name is passed through to the API request.

3. Specify the model at launch via `--model`:

   ```shell
   deepagents --model openai:gpt-4o
   ```

### Which models appear in the switcher

The interactive `/model` selector builds its list dynamically — it is not a hardcoded list baked into the CLI. A model appears in the switcher when all of the following are true:

1. **The provider package is installed.** Each provider (e.g. `langchain-anthropic`, `langchain-openai`) must be installed alongside `deepagents-cli` — either as an install extra (e.g. `uv tool install 'deepagents-cli[anthropic]'`) or added later with `uv tool upgrade deepagents-cli --with <package>`. If a package is missing, its entire provider section is absent from the switcher.
2. **The model has a profile with `tool_calling` enabled.** The CLI is a tool-calling agent, so it filters out models whose profile data indicates they don't support tool calling. This is the most common reason a model is missing from the list.
3. **The model accepts and produces text.** Models whose profile explicitly sets `text_inputs` or `text_outputs` to `false` (e.g. embedding or image-generation models) are excluded.

Models defined in `config.toml` under `[models.providers.<name>].models` bypass the profile filter — they always appear in the switcher regardless of profile metadata. This is the recommended way to add models that are missing from the list.
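For example, a sketch of adding a model this way — the provider name and model ID below are illustrative placeholders, so check your provider's model list for real IDs:

```toml
# ~/.deepagents/config.toml
# Models listed here always appear in the /model switcher,
# even when no profile metadata exists for them.
[models.providers.groq]
models = ["my-new-model-id"]
```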

Credential status does **not** affect whether a model is listed. The switcher shows all qualifying models and displays a credential indicator next to each provider header: a checkmark for confirmed credentials, a warning for missing credentials, or a question mark when credential status is unknown. You can still select a model with missing credentials — the provider will report an authentication error at request time.

### Troubleshooting missing models

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Entire provider missing from switcher | Provider package not installed | Install the package (e.g. `uv tool upgrade deepagents-cli --with langchain-groq`) |
| Provider shown but specific model missing | Model profile has `tool_calling: false`, or no profile exists | Add the model to `[models.providers.<name>].models` in `config.toml`, or use `/model <provider>:<model>` directly |
| Provider shows ⚠ "missing credentials" | API key env var not set | Set the credential env var from the Provider reference table |
| Provider shows ? "credentials unknown" | Provider uses non-standard auth that the CLI can't verify | Credentials may still work — try switching to the model. If auth fails, check the provider's docs |

## Setting a default model

You can set a persistent default model that will be used for all future CLI launches:

- Via model selector: Open `/model`, navigate to the desired model, and press `Ctrl+S` to pin it as the default. Pressing `Ctrl+S` again on the current default clears it.

- Via command: `/model --default provider:model` (e.g., `/model --default anthropic:claude-opus-4-6`)

- Via config file: Set `[models].default` in `~/.deepagents/config.toml` (see Configuration).

- From the shell:

  ```shell
  deepagents --default-model anthropic:claude-opus-4-6
  ```

To view the current default:

```shell
deepagents --default-model
```

To clear the default:

- From the shell:

  ```shell
  deepagents --clear-default-model
  ```

- Via command: `/model --default --clear`

- Via model selector: Press `Ctrl+S` on the currently pinned default model.

Without a pinned default, the CLI falls back to the most recently used model.
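A default can also be sketched directly in the config file — this assumes the `[models].default` key described above, with the model name taken from the earlier example:

```toml
# ~/.deepagents/config.toml
[models]
default = "anthropic:claude-opus-4-6"
```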

## Model resolution order

When the CLI launches, it resolves which model to use in the following order:

1. `--model` flag — always wins when provided.
2. `[models].default` in `~/.deepagents/config.toml` — the user's intentional long-term preference.
3. `[models].recent` in `~/.deepagents/config.toml` — the last model switched to via `/model`. Written automatically; never overwrites `[models].default`.
4. Environment auto-detection — falls back to the first available startup credential, checked in order: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `GOOGLE_CLOUD_PROJECT` (Vertex AI).

This startup fallback intentionally checks only those four credentials. Other supported providers (for example, Groq) are still available via `--model`, `/model`, and saved defaults (`[models].default` / `[models].recent`).

## Model routers and proxies

Model routers like OpenRouter and LiteLLM provide access to models from multiple providers through a single endpoint.

Use the dedicated integration packages for these services:

:::python

| Router | Package | Config |
| --- | --- | --- |
| OpenRouter | `langchain-openrouter` | `openrouter:<model>` (built-in, see Provider reference) |
:::

:::js

| Router | Package |
| --- | --- |
| OpenRouter | `langchain-openrouter` |
:::

OpenRouter is a built-in provider — install the package and use it directly:

```shell
uv tool install 'deepagents-cli[openrouter]'
```

## Advanced configuration

For detailed configuration of provider params, profile overrides, custom base URLs, compatible APIs, arbitrary providers, and lifecycle hooks, see Configuration.