- We propose WebDancer, a novel end-to-end agentic training framework designed to enhance the multi-step information-seeking capabilities of web-based agents.
- We introduce a four-stage training paradigm comprising browsing data construction, trajectory sampling, supervised fine-tuning for effective cold start, and reinforcement learning for improved generalization, enabling the agent to autonomously acquire robust search and reasoning skills.
- Our data-centric approach integrates trajectory-level supervision and online learning to develop a scalable pipeline for training agentic systems.
- We instantiate this framework in a ReAct-based agent and conduct extensive experiments on GAIA and WebWalkerQA benchmarks. Results demonstrate that WebDancer achieves strong performance across diverse tasks, validating the effectiveness of our proposed paradigm and providing systematic insights for future agent development.
```bash
conda create -n webdancer python=3.12
pip install -r requirements.txt
```

Download the WebDancer model from 🤗 HuggingFace and deploy it using the provided scripts with sglang.
```bash
cd scripts
bash depoly_model.sh WebDancer_PATH
```

Note: Replace `WebDancer_PATH` with the actual path to the downloaded model.
Edit the following keys in scripts/run_demo.sh:
- `GOOGLE_SEARCH_KEY`: you can get it from serpapi or serper.
- `JINA_API_KEY`: you can get it from jina.
- `DASHSCOPE_API_KEY`: you can get it from dashscope.
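Before launching the demo, it can help to verify the keys are actually set. A minimal sanity-check sketch, assuming the keys are exported as environment variables with the names above (the helper itself is not part of the repo):

```python
import os

# Keys the demo expects; names taken from scripts/run_demo.sh.
REQUIRED_KEYS = ["GOOGLE_SEARCH_KEY", "JINA_API_KEY", "DASHSCOPE_API_KEY"]

def missing_keys(env=os.environ):
    """Return the names of required keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

if __name__ == "__main__":
    absent = missing_keys()
    if absent:
        raise SystemExit(f"Missing API keys: {', '.join(absent)}")
    print("All API keys are set.")
```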
Then, launch the demo with Gradio to interact with the WebDancer model:
```bash
cd scripts
bash run_demo.sh
```

We provide demos for WebWalkerQA, GAIA, and Daily Use. Our model can execute long-horizon tasks with multiple steps and complex reasoning, such as web traversal, information seeking, and question answering.
⌛️ The deployment of models and demos will be updated soon.
The sampled QA data can be found in `datasets/sample_qa.jsonl`.
The sampled trajectory data for SFT can be found in `datasets/sample_qa.jsonl`.
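Each line of a `.jsonl` file is a standalone JSON object. A minimal loader for inspecting the sampled data (the helper name is illustrative, not part of the repo, and the record fields depend on the file):

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Read a .jsonl file: one JSON object per non-empty line."""
    records = []
    with Path(path).open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records

# Example usage:
# data = load_jsonl("datasets/sample_qa.jsonl")
# print(len(data), list(data[0].keys()))
```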
For SFT training, you can refer to the training scripts of LLaMA-Factory.
We use the modified verl for RL training.
This work is implemented based on LLaMA-Factory and verl. We greatly appreciate their valuable contributions to the community, as well as those of WebThinker.
If this work is helpful, please kindly cite as:
@misc{wu2025webdancer,
title={WebDancer: Towards Autonomous Information Seeking Agency},
author={Jialong Wu and Baixuan Li and Runnan Fang and Wenbiao Yin and Liwen Zhang and Zhengwei Tao and Dingchu Zhang and Zekun Xi and Yong Jiang and Pengjun Xie and Fei Huang and Jingren Zhou},
year={2025},
eprint={2505.22648},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.22648},
}



