
Verlog: A Multi-turn RL framework for LLM agents

Verlog is a well-tuned multi-turn RL framework built for long-horizon LLM agentic tasks. It extends VeRL and BALROG, and follows the core design principles of pytorch-a2c-ppo-acktr-gail, while introducing tailored modifications for efficient multi-turn training.

Key features:

🧠 Turn-Level Abstraction: To handle extremely long episodes, we treat each turn as an independent training sample. This eliminates the need to encode the entire trajectory into a single context window and allows for modular, customizable memory architectures (see the first sketch after this list).

🎯 Fixed-Turn Batching: To address the high variance in episode lengths across environments, we use fixed-turn batching: each training batch contains a fixed number of turns. For incomplete episodes, we replace final rewards with value function estimates as the supervision signal (see the second sketch after this list).

🛠️ Tailored for Multi-Turn RL: To address the unique challenges of multi-turn RL, we introduce a set of targeted techniques such as Dual Discounting GAE (also covered in the second sketch below) and Critic Pre-training, combined with carefully tuned hyperparameters to ensure efficient and stable learning.

📊 Validated Across Challenging Environments: Our approach has been empirically validated on diverse environments characterized by long horizons and high variance in episode length, including BabyAI, BabaIsAI, and Crafter. It consistently demonstrates stable learning dynamics and strong performance out of the box. In Crafter, for instance, episode lengths range from 70 to 400 steps, with an average of around 190.
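
To make the turn-level abstraction concrete, here is a minimal sketch in plain Python. Every name in it (TurnSample, episode_to_samples, the window size k) is an illustrative assumption rather than Verlog's actual API; the point is that each turn becomes a self-contained training sample carrying only a bounded memory window instead of the whole trajectory.

```python
from dataclasses import dataclass

@dataclass
class TurnSample:
    """One turn of an episode, treated as an independent training sample."""
    memory: list[str]   # last-k (observation, action) summaries, not the full history
    observation: str    # current observation, rendered as text for the LLM
    action: str         # the LLM's response at this turn
    reward: float       # per-turn reward
    done: bool          # True only on the episode's final turn

def episode_to_samples(turns, k=4):
    """Split one episode into per-turn samples with a sliding memory window."""
    samples, history = [], []
    for obs, action, reward, done in turns:
        samples.append(TurnSample(memory=history[-k:],
                                  observation=obs, action=action,
                                  reward=reward, done=done))
        history.append(f"obs: {obs} -> action: {action}")
    return samples
```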
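The fixed-turn batching and dual-discounting bullets can be sketched the same way. This is our reading, not Verlog's actual implementation: assume a batch holds one contiguous segment of turns, segments cut off by the turn budget are bootstrapped from the critic's value estimate, and advantages are computed across turns with one discount and then spread over each turn's tokens with a second.

```python
import numpy as np

def turn_level_gae(rewards, values, last_value, terminated,
                   gamma_turn=0.95, lam_turn=0.95):
    """GAE across turns: one reward and one value estimate per turn.

    When the fixed turn budget truncates the episode (terminated=False),
    the critic's estimate of the next state, last_value, stands in for
    the missing terminal reward, as described in the batching bullet.
    """
    T = len(rewards)
    adv = np.zeros(T)
    next_value = 0.0 if terminated else last_value
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma_turn * next_value - values[t]
        gae = delta + gamma_turn * lam_turn * gae
        adv[t] = gae
        next_value = values[t]
    return adv, adv + np.asarray(values)  # advantages, value targets

def broadcast_to_tokens(turn_adv, token_counts, gamma_token=1.0):
    """Second discount: spread each turn's advantage over its tokens.

    With gamma_token = 1.0, every token in a turn shares the turn's
    advantage; gamma_token < 1.0 would down-weight earlier tokens.
    """
    token_adv = []
    for a, n in zip(turn_adv, token_counts):
        token_adv.extend(a * gamma_token ** np.arange(n)[::-1])
    return np.asarray(token_adv)
```

The discount values above are placeholders; the tuned settings live in the example configs referenced under Get Started.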

Main Results

  • Crafter results:

    | Metric | Instruct-model | Verlog (Ours) |
    | --- | --- | --- |
    | Rewards | 5.80 | 10.44 |
    | Trajectory Length | 172.23 | 196.42 |

    The Crafter experiments use the Qwen2.5-7B-Instruct model trained with PPO on 8×H100 GPUs (80 GB each) for about 36 hours, corresponding to 170 PPO updates.

  • BabaIsAI results (win rate):

    Legend: goto_win 🏁 · distr_obj 🎁 · two_room 🚪 · distr_obj_rule 📏 · maybe_break_stop ⚠️

    | Model | 🏁+🎁 | 🚪+🏁 | 🚪+🏁+📏 | 🚪+⚠️+🏁 |
    | --- | --- | --- | --- | --- |
    | Instruct-model | 0.66 ± 0.08 | 0.03 ± 0.03 | 0.22 ± 0.07 | 0.19 ± 0.07 |
    | Verlog (Ours) | 1.00 ± 0.00 | 1.00 | 0.89 ± 0.11 | 0.69 |

    The BabaIsAI experiments use the Qwen2.5-3B-Instruct model trained with PPO on 4×A40 GPUs (48 GB each) for about 24 hours, corresponding to 300 PPO updates.

  • BabyAI results (win rate):

    | Model | goto | pickup | pick_up_seq_go_to | open |
    | --- | --- | --- | --- | --- |
    | Instruct-model | 0.88 ± 0.06 | 0.41 ± 0.09 | 0.22 ± 0.07 | 0.09 ± 0.05 |
    | Verlog (Ours) | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.65 ± 0.16 | 0.94 ± 0.07 |

    The BabyAI experiments use the Qwen2.5-3B-Instruct model trained with PPO on 4×A40 GPUs (48 GB each) for about 24 hours, corresponding to 300 PPO updates.

Installation

  • Create a conda environment:

```bash
conda create -n verlog python=3.10
conda activate verlog
```

  • Install BALROG as a temporary dependency:

```bash
git clone https://github.com/balrog-ai/BALROG.git
cd BALROG
pip install -e .
balrog-post-install
```

  • Install Verlog:

```bash
# 1. Clone this repository and cd into it
# 2. Install the inference/training dependencies (Megatron disabled)
USE_MEGATRON=0 bash scripts/install_vllm_sglang_mcore.sh
# 3. Install Verlog itself
pip install --no-deps -e .
```

Get Started

We provide training examples and a list of fine-tuned hyperparameters in Verlog/examples/Verlog.
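
As a purely illustrative starting point (the script name below is hypothetical; list the directory for the real entry points), runs in VeRL-style projects are usually launched from a shell script that bundles the tuned hyperparameters:

```bash
# Hypothetical script name: check examples/Verlog for the actual
# entry points and their tuned settings before running.
cd examples/Verlog
bash run_babyai_ppo.sh
```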
