Your PR was submitted successfully. Thank you for contributing to the open-source project!
liym27 reviewed Apr 8, 2026
```diff
 # Note: Only sharding stage 1 is considered in HybridParallelOptimizer.
 # The sharding stage2 and stage3 optimizers are invoked in other api.
-if hcg.get_sharding_parallel_world_size() > 1:
+if hcg.get_sharding_parallel_world_size() > 1 and False:
```
python/paddle/optimizer/optimizer.py (Outdated)
```diff
 if isinstance(params_grads, list):
-    if self._grad_clip is not None:
-        params_grads = self._grad_clip(params_grads)
+    # if self._grad_clip is not None:
```
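Commenting out the clip call disables gradient clipping entirely. For reference, a minimal, paddle-free sketch of clipping by global norm (an illustrative stand-in, not Paddle's `ClipGradByGlobalNorm` implementation):

```python
import math

def clip_by_global_norm(grads, max_norm):
    # Scale all gradients down uniformly if their joint L2 norm
    # exceeds max_norm; leave them untouched otherwise.
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm or total == 0.0:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]

print(clip_by_global_norm([6.0, 8.0], 5.0))  # [3.0, 4.0] (norm 10 scaled to 5)
print(clip_by_global_norm([3.0, 4.0], 5.0))  # [3.0, 4.0] (norm 5, unchanged)
```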
```diff
 paddle.device.cuda.empty_cache()
```

```diff
 curr_rank = paddle.distributed.get_rank()
 world_size = paddle.distributed.get_world_size()
```
Contributor
Should this be fetching the global world_size or the sharding group size?
Contributor (Author)
The FSDP-only case is unaffected, but this has now been changed to use the sharding group.
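The distinction the reviewer raises can be illustrated with a small, paddle-free sketch (the numbers and helper below are hypothetical, not Paddle APIs): when parameters are sharded only within a sharding group, per-rank partitioning must be sized by the group, not the global world size.

```python
def shard_len(numel, num_shards):
    # Pad numel up to a multiple of num_shards, then split evenly.
    padded = (numel + num_shards - 1) // num_shards * num_shards
    return padded // num_shards

world_size = 8           # global rank count (illustrative)
sharding_group_size = 4  # ranks that actually hold shards (illustrative)

# Using world_size here would produce shards half the needed size.
print(shard_len(10, sharding_group_size))  # 3 elements per rank (padded to 12)
print(shard_len(10, world_size))           # 2 elements per rank (padded to 16)
```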
```diff
     self, model, mesh=None, fsdp_unit_layers=None, moe_layers_name=None
 ):
     self.model = model
+    self.mesh = None
```
Contributor
`self.mesh` is forcibly set to None here; is that intended?
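A minimal sketch of what the reviewer likely expects (the class name is a hypothetical stand-in, not the PR's actual class): keep the caller-supplied mesh instead of overwriting it with None.

```python
class FSDPUnit:  # hypothetical stand-in for the class under review
    def __init__(self, model, mesh=None, fsdp_unit_layers=None, moe_layers_name=None):
        self.model = model
        # Preserve the argument; assigning None here would silently drop
        # any mesh the caller passed in.
        self.mesh = mesh
        self.fsdp_unit_layers = fsdp_unit_layers or []
        self.moe_layers_name = moe_layers_name or []

unit = FSDPUnit(model=object(), mesh="dp_mesh")
print(unit.mesh)  # dp_mesh
```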
```diff
 ctx.layer = layer
 ctx.comm_manager = comm_manager
 ctx.recursive = recursive
+return inputs
```
Contributor
`FusionBackwardHook.forward` has `return inputs if len(inputs) > 1 else inputs[0]`; being inconsistent here may cause downstream layers to receive an unexpected tuple.
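The unwrapping pattern the reviewer cites can be sketched in isolation (plain Python, no Paddle; the function name is illustrative):

```python
def unwrap_single(*inputs):
    # Mirrors the convention quoted from FusionBackwardHook.forward:
    # a hook that receives one tensor should hand back that tensor,
    # not a one-element tuple, or the next layer's forward() may break.
    return inputs if len(inputs) > 1 else inputs[0]

print(unwrap_single("x"))       # x           (single input unwrapped)
print(unwrap_single("x", "y"))  # ('x', 'y')  (multiple inputs kept as tuple)
```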
```diff
 class FSDPBufferManager:
     def __init__(self, model, mesh, fsdp_unit_layers=None):
         self.model = model
+        # self._fsdp_group = mesh.get_group("dp")
```
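As a rough illustration of what a buffer manager like this typically does (a hedged sketch with hypothetical names, not the PR's implementation): parameters are laid out in one contiguous buffer and addressed by (offset, length) views, so collectives can run on a single flat tensor instead of one call per parameter.

```python
def build_flat_layout(param_numels):
    # Assign each parameter a (offset, length) slice in one flat buffer.
    offsets, total = [], 0
    for n in param_numels:
        offsets.append((total, n))
        total += n
    return total, offsets

total, views = build_flat_layout([4, 2, 6])
print(total)  # 12
print(views)  # [(0, 4), (4, 2), (6, 6)]
```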
PR Category
Distributed Strategy
PR Types
New features
Description
add FSDP
Does this change numerical accuracy?
No