This guide explains how to configure DeerFlow for your environment.
`config.example.yaml` contains a `config_version` field that tracks schema changes. When the example version is higher than the version in your local `config.yaml`, the application emits a startup warning:

```
WARNING - Your config.yaml (version 0) is outdated — the latest version is 1.
Run `make config-upgrade` to merge new fields into your config.
```
- A missing `config_version` in your config is treated as version 0.
- Run `make config-upgrade` to auto-merge missing fields (your existing values are preserved, a `.bak` backup is created).
- When changing the config schema, bump `config_version` in `config.example.yaml`.
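The startup check behaves roughly like this sketch (the function and logger names are illustrative, not DeerFlow's actual code):

```python
import logging

logger = logging.getLogger("deerflow.config")

def config_is_outdated(local_cfg: dict, example_cfg: dict) -> bool:
    """Return True (and log a warning) when the local config lags behind
    the example schema. A missing config_version is treated as version 0."""
    local_v = local_cfg.get("config_version", 0)
    example_v = example_cfg.get("config_version", 0)
    if local_v < example_v:
        logger.warning(
            "Your config.yaml (version %s) is outdated — the latest version is %s. "
            "Run `make config-upgrade` to merge new fields into your config.",
            local_v, example_v,
        )
        return True
    return False
```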
Configure the LLM models available to the agent:
```yaml
models:
  - name: gpt-4                        # Internal identifier
    display_name: GPT-4                # Human-readable name
    use: langchain_openai:ChatOpenAI   # LangChain class path
    model: gpt-4                       # Model identifier for API
    api_key: $OPENAI_API_KEY           # API key (use env var)
    max_tokens: 4096                   # Max tokens per request
    temperature: 0.7                   # Sampling temperature
```

Supported Providers:
- OpenAI (`langchain_openai:ChatOpenAI`)
- Anthropic (`langchain_anthropic:ChatAnthropic`)
- DeepSeek (`langchain_deepseek:ChatDeepSeek`)
- Claude Code OAuth (`deerflow.models.claude_provider:ClaudeChatModel`)
- Codex CLI (`deerflow.models.openai_codex_provider:CodexChatModel`)
- Any LangChain-compatible provider
CLI-backed provider examples:
```yaml
models:
  - name: gpt-5.4
    display_name: GPT-5.4 (Codex CLI)
    use: deerflow.models.openai_codex_provider:CodexChatModel
    model: gpt-5.4
    supports_thinking: true
    supports_reasoning_effort: true
  - name: claude-sonnet-4.6
    display_name: Claude Sonnet 4.6 (Claude Code OAuth)
    use: deerflow.models.claude_provider:ClaudeChatModel
    model: claude-sonnet-4-6
    max_tokens: 4096
    supports_thinking: true
```

Auth behavior for CLI-backed providers:
- `CodexChatModel` loads Codex CLI auth from `~/.codex/auth.json`
- The Codex Responses endpoint currently rejects `max_tokens` and `max_output_tokens`, so `CodexChatModel` does not expose a request-level token cap
- `ClaudeChatModel` accepts `CLAUDE_CODE_OAUTH_TOKEN`, `ANTHROPIC_AUTH_TOKEN`, `CLAUDE_CODE_OAUTH_TOKEN_FILE_DESCRIPTOR`, `CLAUDE_CODE_CREDENTIALS_PATH`, or a plaintext `~/.claude/.credentials.json`
- On macOS, DeerFlow does not probe the Keychain automatically. Use `scripts/export_claude_code_oauth.py` to export Claude Code auth explicitly when needed
To use OpenAI's `/v1/responses` endpoint with LangChain, keep using `langchain_openai:ChatOpenAI` and set:
```yaml
models:
  - name: gpt-5-responses
    display_name: GPT-5 (Responses API)
    use: langchain_openai:ChatOpenAI
    model: gpt-5
    api_key: $OPENAI_API_KEY
    use_responses_api: true
    output_version: responses/v1
```

For OpenAI-compatible gateways (for example Novita or OpenRouter), keep using `langchain_openai:ChatOpenAI` and set `base_url`:
```yaml
models:
  - name: novita-deepseek-v3.2
    display_name: Novita DeepSeek V3.2
    use: langchain_openai:ChatOpenAI
    model: deepseek/deepseek-v3.2
    api_key: $NOVITA_API_KEY
    base_url: https://api.novita.ai/openai
    supports_thinking: true
    when_thinking_enabled:
      extra_body:
        thinking:
          type: enabled
  - name: minimax-m2.5
    display_name: MiniMax M2.5
    use: langchain_openai:ChatOpenAI
    model: MiniMax-M2.5
    api_key: $MINIMAX_API_KEY
    base_url: https://api.minimax.io/v1
    max_tokens: 4096
    temperature: 1.0   # MiniMax requires temperature in (0.0, 1.0]
    supports_vision: true
  - name: minimax-m2.5-highspeed
    display_name: MiniMax M2.5 Highspeed
    use: langchain_openai:ChatOpenAI
    model: MiniMax-M2.5-highspeed
    api_key: $MINIMAX_API_KEY
    base_url: https://api.minimax.io/v1
    max_tokens: 4096
    temperature: 1.0   # MiniMax requires temperature in (0.0, 1.0]
    supports_vision: true
  - name: openrouter-gemini-2.5-flash
    display_name: Gemini 2.5 Flash (OpenRouter)
    use: langchain_openai:ChatOpenAI
    model: google/gemini-2.5-flash-preview
    api_key: $OPENAI_API_KEY
    base_url: https://openrouter.ai/api/v1
```

If your OpenRouter key lives in a different environment variable, point `api_key` at that variable explicitly (for example `api_key: $OPENROUTER_API_KEY`).
Thinking Models: Some models support "thinking" mode for complex reasoning:
```yaml
models:
  - name: deepseek-v3
    supports_thinking: true
    when_thinking_enabled:
      extra_body:
        thinking:
          type: enabled
```

Gemini with thinking via OpenAI-compatible gateway:
When routing Gemini through an OpenAI-compatible proxy (Vertex AI OpenAI compat endpoint, AI Studio, or third-party gateways) with thinking enabled, the API attaches a `thought_signature` to each tool-call object returned in the response. Every subsequent request that replays those assistant messages must echo those signatures back on the tool-call entries, or the API returns:

```
HTTP 400 INVALID_ARGUMENT: function call `<tool>` in the N. content block is
missing a `thought_signature`.
```
Standard `langchain_openai:ChatOpenAI` silently drops `thought_signature` when serializing messages. Use `deerflow.models.patched_openai:PatchedChatOpenAI` instead; it re-injects the tool-call signatures (sourced from `AIMessage.additional_kwargs["tool_calls"]`) into every outgoing payload:
```yaml
models:
  - name: gemini-2.5-pro-thinking
    display_name: Gemini 2.5 Pro (Thinking)
    use: deerflow.models.patched_openai:PatchedChatOpenAI
    model: google/gemini-2.5-pro-preview   # model name as expected by your gateway
    api_key: $GEMINI_API_KEY
    base_url: https://<your-openai-compat-gateway>/v1
    max_tokens: 16384
    supports_thinking: true
    supports_vision: true
    when_thinking_enabled:
      extra_body:
        thinking:
          type: enabled
```

For Gemini accessed without thinking (e.g. via OpenRouter where thinking is not activated), the plain `langchain_openai:ChatOpenAI` with `supports_thinking: false` is sufficient and no patch is needed.
Organize tools into logical groups:
```yaml
tool_groups:
  - name: web          # Web browsing and search
  - name: file:read    # Read-only file operations
  - name: file:write   # Write file operations
  - name: bash         # Shell command execution
```

Configure specific tools available to the agent:
```yaml
tools:
  - name: web_search
    group: web
    use: deerflow.community.tavily.tools:web_search_tool
    max_results: 5
    # api_key: $TAVILY_API_KEY  # Optional
```

Built-in Tools:

- `web_search` - Search the web (Tavily)
- `web_fetch` - Fetch web pages (Jina AI)
- `ls` - List directory contents
- `read_file` - Read file contents
- `write_file` - Write file contents
- `str_replace` - String replacement in files
- `bash` - Execute bash commands
DeerFlow supports multiple sandbox execution modes. Configure your preferred mode in config.yaml:
Local Execution (runs sandbox code directly on the host machine):
```yaml
sandbox:
  use: deerflow.sandbox.local:LocalSandboxProvider   # Local execution
  allow_host_bash: false   # default; host bash is disabled unless explicitly re-enabled
```

Docker Execution (runs sandbox code in isolated Docker containers):
```yaml
sandbox:
  use: deerflow.community.aio_sandbox:AioSandboxProvider   # Docker-based sandbox
```

Docker Execution with Kubernetes (runs sandbox code in Kubernetes pods via a provisioner service):
This mode runs each sandbox in an isolated Kubernetes Pod on your host machine's cluster. Requires Docker Desktop K8s, OrbStack, or similar local K8s setup.
```yaml
sandbox:
  use: deerflow.community.aio_sandbox:AioSandboxProvider
  provisioner_url: http://provisioner:8002
```

When using Docker development (`make docker-start`), DeerFlow starts the provisioner service only if this provisioner mode is configured. In local or plain Docker sandbox modes, the provisioner is skipped.
See Provisioner Setup Guide for detailed configuration, prerequisites, and troubleshooting.
Choose between local execution or Docker-based isolation:
Option 1: Local Sandbox (default, simpler setup):
```yaml
sandbox:
  use: deerflow.sandbox.local:LocalSandboxProvider
  allow_host_bash: false
```

`allow_host_bash` is intentionally `false` by default. DeerFlow's local sandbox is a host-side convenience mode, not a secure shell isolation boundary. If you need bash, prefer `AioSandboxProvider`. Only set `allow_host_bash: true` for fully trusted single-user local workflows.
Option 2: Docker Sandbox (isolated, more secure):
```yaml
sandbox:
  use: deerflow.community.aio_sandbox:AioSandboxProvider
  port: 8080
  auto_start: true
  container_prefix: deer-flow-sandbox
  # Optional: additional mounts
  mounts:
    - host_path: /path/on/host
      container_path: /path/in/container
      read_only: false
```

When you configure `sandbox.mounts`, DeerFlow exposes those `container_path` values in the agent prompt so the agent can discover and operate on mounted directories directly instead of assuming everything must live under `/mnt/user-data`.
Configure the skills directory for specialized workflows:
```yaml
skills:
  # Host path (optional, default: ../skills)
  path: /custom/path/to/skills
  # Container mount path (default: /mnt/skills)
  container_path: /mnt/skills
```

How Skills Work:
- Skills are stored in `deer-flow/skills/{public,custom}/`
- Each skill has a `SKILL.md` file with metadata
- Skills are automatically discovered and loaded
- Available in both local and Docker sandbox via path mapping
Per-Agent Skill Filtering:
Custom agents can restrict which skills they load by defining a `skills` field in their `config.yaml` (located at `workspace/agents/<agent_name>/config.yaml`):
- Omitted or `null`: loads all globally enabled skills (default fallback).
- `[]` (empty list): disables all skills for this specific agent.
- `["skill-name"]`: loads only the explicitly specified skills.
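A hypothetical per-agent `config.yaml` illustrating all three forms (the agent and skill names below are placeholders, not shipped skills):

```yaml
# workspace/agents/researcher/config.yaml

# Load only these skills:
skills:
  - web-research
  - data-analysis

# Disable all skills for this agent:
# skills: []

# Load all globally enabled skills (default, same as omitting the field):
# skills: null
```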
Automatic conversation title generation:
```yaml
title:
  enabled: true
  max_words: 6
  max_chars: 60
  model_name: null   # Use first model in list
```

The default GitHub API rate limits are quite restrictive. For frequent project research, we recommend configuring a personal access token (PAT) with read-only permissions.
Configuration Steps:
- Uncomment the `GITHUB_TOKEN` line in the `.env` file and add your personal access token
- Restart the DeerFlow service to apply changes
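The resulting `.env` entry looks like this (the token value is a placeholder):

```
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
```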
DeerFlow supports environment variable substitution using the `$` prefix:
```yaml
models:
  - api_key: $OPENAI_API_KEY   # Reads from environment
```

Common Environment Variables:

- `OPENAI_API_KEY` - OpenAI API key
- `ANTHROPIC_API_KEY` - Anthropic API key
- `DEEPSEEK_API_KEY` - DeepSeek API key
- `NOVITA_API_KEY` - Novita API key (OpenAI-compatible endpoint)
- `TAVILY_API_KEY` - Tavily search API key
- `DEER_FLOW_CONFIG_PATH` - Custom config file path
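A minimal sketch of how the `$`-prefix substitution could be implemented (illustrative only, not DeerFlow's actual code):

```python
import os

def resolve_env_refs(value):
    """Recursively replace string values that start with '$' with the
    matching environment variable, leaving them unchanged if unset."""
    if isinstance(value, dict):
        return {k: resolve_env_refs(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_env_refs(v) for v in value]
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:], value)
    return value
```

For example, with `OPENAI_API_KEY` set in the environment, `resolve_env_refs({"api_key": "$OPENAI_API_KEY"})` yields the actual key value.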
The configuration file should be placed in the project root directory (deer-flow/config.yaml), not in the backend directory.
DeerFlow searches for configuration in this order:
1. Path specified in code via the `config_path` argument
2. Path from the `DEER_FLOW_CONFIG_PATH` environment variable
3. `config.yaml` in the current working directory (typically `backend/` when running)
4. `config.yaml` in the parent directory (project root: `deer-flow/`)
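The lookup order above can be sketched as follows (the function name and exact return behavior are illustrative, not DeerFlow's actual code):

```python
import os
from pathlib import Path
from typing import Optional

def find_config(config_path: Optional[str] = None) -> Optional[Path]:
    """Return the first existing config.yaml, following the search order."""
    candidates = []
    if config_path:                                   # 1. explicit argument
        candidates.append(Path(config_path))
    env_path = os.environ.get("DEER_FLOW_CONFIG_PATH")
    if env_path:                                      # 2. environment variable
        candidates.append(Path(env_path))
    cwd = Path.cwd()
    candidates.append(cwd / "config.yaml")            # 3. current working directory
    candidates.append(cwd.parent / "config.yaml")     # 4. parent dir (project root)
    for path in candidates:
        if path.is_file():
            return path
    return None
```

Running DeerFlow from `backend/` therefore still finds `deer-flow/config.yaml` via the parent-directory fallback.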
- Place `config.yaml` in the project root, not in the `backend/` directory
- Never commit `config.yaml` (it's already in `.gitignore`)
- Use environment variables for secrets; don't hardcode API keys
- Keep `config.example.yaml` updated and document all new options
- Test configuration changes locally before deploying
- Use the Docker sandbox in production for better isolation and security
- Ensure `config.yaml` exists in the project root directory (`deer-flow/config.yaml`); the backend searches the parent directory by default, so the root location is preferred
- Alternatively, set the `DEER_FLOW_CONFIG_PATH` environment variable to a custom location
- Verify environment variables are set correctly
- Check that the `$` prefix is used for env var references
- Check that the `deer-flow/skills/` directory exists
- Verify skills have valid `SKILL.md` files
- Check the `skills.path` configuration if using a custom path
- Ensure Docker is running
- Check port 8080 (or configured port) is available
- Verify Docker image is accessible
See `config.example.yaml` for complete examples of all configuration options.