Welcome to Auto-Deep-Research! Auto-Deep-Research is an open-source, cost-efficient alternative to OpenAI's Deep Research, built on the AutoAgent framework.
- 🏆 High Performance: Ranks #1 among open-source methods, delivering performance comparable to OpenAI's Deep Research.
- 🌐 Universal LLM Support: Seamlessly integrates with a wide range of LLMs (e.g., OpenAI, Anthropic, DeepSeek, vLLM, Grok, Hugging Face ...)
- 🔀 Flexible Interaction: Supports both function-calling and non-function-calling LLMs.
- 💰 Cost-Efficient: An open-source alternative to Deep Research's $200/month subscription; bring your own pay-as-you-go LLM API keys.
- 📁 File Support: Handles file uploads for enhanced data interaction.
- 🚀 One-Click Launch: Get started instantly with a simple `auto deep-research` command: zero configuration needed, a truly out-of-the-box experience.
🚀 Own your personal assistant at a much lower cost. Try 🔥Auto-Deep-Research🔥 now!
- [2025, Feb 16]: 🎉🎉 We've cleaned up the AutoAgent codebase, removed the parts not relevant to Auto-Deep-Research, and released the first version of Auto-Deep-Research.
- ✨ Features
- 🔥 News
- 🧐 Why release Auto-Deep-Research?
- ⚡ Quick Start
- ☑️ Todo List
- 📖 Documentation
- 🤝 Join the Community
- 🙏 Acknowledgements
- 🌟 Cite
After releasing AutoAgent (previously known as MetaChain) for a week, we've observed three compelling reasons to introduce Auto-Deep-Research:
- Community Interest: We noticed significant community interest in our Deep Research alternative functionality. In response, we've streamlined the codebase by removing non-Deep-Research related components to create a more focused tool.
- Framework Extensibility: Auto-Deep-Research serves as the first ready-to-use product built on AutoAgent, demonstrating how quickly and easily you can create powerful Agent Apps using our framework.
- Community-Driven Improvements: We've incorporated valuable community feedback from the first week, introducing features like one-click launch and enhanced LLM compatibility to make the tool more accessible and versatile.
Auto-Deep-Research represents our commitment to both the community's needs and the demonstration of AutoAgent's potential as a foundation for building practical AI applications.
conda create -n auto_deep_research python=3.10
conda activate auto_deep_research
git clone https://github.com/HKUDS/Auto-Deep-Research.git
cd Auto-Deep-Research
pip install -e .
We use Docker to containerize the agent-interactive environment, so please install Docker first. You don't need to pull the pre-built image manually; Auto-Deep-Research automatically pulls the pre-built image that matches your machine's architecture.
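Before launching, you may want to confirm Docker is actually available; a minimal sanity check (not part of the official setup, just a quick verification) might look like this:

```bash
# Optional sanity check: confirm the Docker CLI is installed and the daemon is reachable
docker --version
docker info > /dev/null 2>&1 && echo "Docker daemon is running"
```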
Create an environment variable file, following `.env.template`, and set the API keys for the LLMs you want to use. Not every LLM API key is required; set only the ones you need.
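For example, assuming `.env.template` sits at the repository root as referenced above, a typical workflow is to copy it and fill in only the keys you plan to use:

```bash
# Copy the template and edit it, keeping only the API keys you need
cp .env.template .env
```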
You can run `auto deep-research` to start Auto-Deep-Research. The configuration options for this command are shown below.
- `--container_name`: Name of the Docker container (default: 'deepresearch')
- `--port`: Port for the container (default: 12346)
- `COMPLETION_MODEL`: The LLM model to use; follow LiteLLM's model naming convention (default: `claude-3-5-sonnet-20241022`)
- `DEBUG`: Enable debug mode for detailed logs (default: False)
- `API_BASE_URL`: The base URL for the LLM provider (default: None)
- `FN_CALL`: Enable function calling (default: None). Most of the time you can ignore this option, since we already set the default based on the model name.
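To illustrate how these options combine, the command below picks the model via an environment variable and overrides the container flags; the specific values here are illustrative (they simply restate the documented defaults):

```bash
# Choose the model via COMPLETION_MODEL and set the container name/port explicitly
COMPLETION_MODEL=claude-3-5-sonnet-20241022 auto deep-research --container_name deepresearch --port 12346
```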
We will show you how easy it is to start Auto-Deep-Research with different LLM providers.
- set the `ANTHROPIC_API_KEY` in the `.env` file.
ANTHROPIC_API_KEY=your_anthropic_api_key
- run the following command to start Auto-Deep-Research.
auto deep-research # default model is claude-3-5-sonnet-20241022
- set the `OPENAI_API_KEY` in the `.env` file.
OPENAI_API_KEY=your_openai_api_key
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=gpt-4o auto deep-research
- set the `MISTRAL_API_KEY` in the `.env` file.
MISTRAL_API_KEY=your_mistral_api_key
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=mistral/mistral-large-2407 auto deep-research
- set the `GEMINI_API_KEY` in the `.env` file.
GEMINI_API_KEY=your_gemini_api_key
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=gemini/gemini-2.0-flash auto deep-research
- set the `HUGGINGFACE_API_KEY` in the `.env` file.
HUGGINGFACE_API_KEY=your_huggingface_api_key
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=huggingface/meta-llama/Llama-3.3-70B-Instruct auto deep-research
- set the `GROQ_API_KEY` in the `.env` file.
GROQ_API_KEY=your_groq_api_key
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=groq/deepseek-r1-distill-llama-70b auto deep-research
- set the `OPENAI_API_KEY` in the `.env` file.
OPENAI_API_KEY=your_api_key_for_openai_compatible_endpoints
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=openai/grok-2-latest API_BASE_URL=https://api.x.ai/v1 auto deep-research
For now, we recommend using OpenRouter as the LLM provider for DeepSeek-R1, because the official DeepSeek-R1 API cannot yet be used efficiently.
- set the `OPENROUTER_API_KEY` in the `.env` file.
OPENROUTER_API_KEY=your_openrouter_api_key
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=openrouter/deepseek/deepseek-r1 auto deep-research
- set the `DEEPSEEK_API_KEY` in the `.env` file.
DEEPSEEK_API_KEY=your_deepseek_api_key
- run the following command to start Auto-Deep-Research.
COMPLETION_MODEL=deepseek/deepseek-chat auto deep-research
You can import browser cookies into the browser environment so the agent can better access certain websites. For more details, please refer to the `cookies` folder.
More features coming soon! 🚀 A web GUI interface is under development.
Auto-Deep-Research is continuously evolving! Here's what's coming:
- 🖥️ GUI Agent: Supporting Computer-Use agents with GUI interaction
- 🏗️ Code Sandboxes: Supporting additional environments like E2B
- 🎨 Web Interface: Developing comprehensive GUI for better user experience
Have ideas or suggestions? Feel free to open an issue! Stay tuned for more exciting updates! 🚀
More detailed documentation is coming soon 🚀, and we will post updates on the Documentation page.
If you find Auto-Deep-Research helpful, you can join our community by:
- Join our Slack workspace - Here we talk about research, architecture, and future development.
- Join our Discord server - This is a community-run server for general discussion, questions, and feedback.
- Read or post GitHub Issues - Check out the issues we're working on, or add your own ideas.
Rome wasn't built in a day. Auto-Deep-Research is built on the AutoAgent framework. We extend our sincere gratitude to all the pioneering works that have shaped AutoAgent, including OpenAI Swarm for framework architecture inspiration, Magentic-one for the three-agent design insights, OpenHands for documentation structure, and many other excellent projects that contributed to agent-environment interaction design. Your innovations have been instrumental in making both AutoAgent and Auto-Deep-Research possible.
@misc{AutoAgent,
  title={{AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents}},
  author={Jiabin Tang and Tianyu Fan and Chao Huang},
  year={2025},
  eprint={2502.05957},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2502.05957},
}