feature(pu): add atari/dmc multitask and balance pipeline in ScaleZero paper #451
Merged
puyuan1996 merged 102 commits into main (Jan 8, 2026)
Conversation
Commits (abridged):
- …er and fix solved gpu batch-size bug
- …dilab/LightZero into dev-multitask-balance-clean
- …arnableScale in balance pipeline
- …_curriculum_to_encoder option
- …ro.py and unizero.py
PaParaZz1 reviewed on Jan 6, 2026
PaParaZz1 requested changes on Jan 6, 2026
…o lzero/entry/utils.py
PaParaZz1 approved these changes on Jan 8, 2026
```python
        MCTS stage 3: Backup
            At the end of the simulation, the statistics along the trajectory are updated.
        """
        # search_depth is used for rope in UniZero
```
Member: Why doesn't the ctree_sampled side branch on whether rope (timestep) is used?
Author (Collaborator): sampled does not support rope yet; added it to the TODO list.
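To make the comment thread concrete, here is a minimal sketch of the backup stage it refers to: value statistics are propagated from the leaf back to the root, and the depth of the search path is returned (UniZero-style search uses such a depth counter for RoPE positional indices). The `Node` fields and `backup` signature are illustrative assumptions, not LightZero's actual tree API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """Illustrative search-tree node; field names are assumptions."""
    parent: Optional["Node"] = None
    visit_count: int = 0
    value_sum: float = 0.0
    reward: float = 0.0

def backup(leaf: Node, value: float, discount: float = 0.997) -> int:
    """Update statistics along the trajectory from leaf to root.

    Returns the search depth (number of nodes on the path), the quantity
    a RoPE-aware model would consume as a positional offset.
    """
    node, depth = leaf, 0
    while node is not None:
        node.visit_count += 1
        node.value_sum += value
        # Fold the transition reward into the value seen by the parent.
        value = node.reward + discount * value
        node = node.parent
        depth += 1
    return depth
```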
```diff
 # Clear caches if the current steps are a multiple of the clear interval
-if current_steps % clear_interval == 0:
+if current_steps is not None and current_steps % clear_interval == 0:
```
Author (Collaborator): Currently, if sample_type='transition', it is set heuristically based on game_segment_length.
```python
        # Log mapping
        self.logits_key_mapping = {
            'policy': 'logits_policy',
```
Member: I think clipping should still be done on the encoder and the transformer backbone; the one on the heads can be removed.
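The reviewer's suggestion, applying clipping only to the encoder and transformer backbone while leaving the heads unclipped, can be sketched as a name-prefix partition of the parameter list. The prefixes and the helper name are assumptions for illustration; the real module names in the model may differ.

```python
def split_clip_groups(named_params, clip_prefixes=("encoder.", "transformer.")):
    """Partition parameters by name so gradient clipping can be applied
    only to the encoder and transformer backbone, not the heads.

    `named_params` is an iterable of (name, parameter) pairs, as produced
    by e.g. a PyTorch module's named_parameters(); prefixes are illustrative.
    """
    clipped, unclipped = [], []
    for name, param in named_params:
        # str.startswith accepts a tuple of candidate prefixes.
        (clipped if name.startswith(clip_prefixes) else unclipped).append(param)
    return clipped, unclipped
```

With PyTorch, one would then call `torch.nn.utils.clip_grad_norm_` on the `clipped` group only, leaving the head parameters untouched.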
This pull request implements the core components of the ScaleZero paper by introducing a multi-task, balanced training pipeline for Atari and DeepMind Control (DMC) environments.
To enhance stability and performance in this new multi-task setting, several key improvements and bug fixes were made. We replaced BatchNorm with the more robust LayerNorm, corrected a critical bug that caused the kv_cache to be improperly overwritten, and fixed the state reset logic in _reset_eval() and _reset_collect() to ensure accurate evaluation.
Additionally, the PR introduces target-entropy control for better policy optimization, makes the number of MCTS simulations configurable for evaluation, and integrates relevant updates from the longrun PR #400 to maintain code consistency.
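The target-entropy control mentioned above can be illustrated with the standard SAC-style dual update: the entropy coefficient is raised when the policy's entropy falls below its target and lowered when it rises above. This is a generic sketch under that assumption, not LightZero's exact implementation; the function name and learning rate are illustrative.

```python
import math

def update_entropy_coef(log_alpha, policy_entropy, target_entropy, lr=3e-4):
    """One step of dual (SAC-style) target-entropy control.

    Gradient descent on  log_alpha * (policy_entropy - target_entropy):
    if entropy is below target, log_alpha increases, so exploration is
    weighted more heavily; if above, it decreases.
    """
    log_alpha = log_alpha - lr * (policy_entropy - target_entropy)
    return log_alpha, math.exp(log_alpha)
```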