Commit 0faedbf
Bump paddlepaddle from 2.6.2 to 3.3.0 in /tests (#33591)
Bumps [paddlepaddle](https://github.com/paddlepaddle/paddle) from 2.6.2
to 3.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/paddlepaddle/paddle/releases">paddlepaddle's
releases</a>.</em></p>
<blockquote>
<h2>PaddlePaddle 3.2.2 Release Note</h2>
<h1>Important Updates</h1>
<p>PaddlePaddle Framework version 3.2.2 features multiple optimizations
and upgrades across Distributed Parallelism, Operator Mechanism, and
Hardware Adaptation to further enhance the framework's performance and
stability.</p>
<h2>1. Distributed Training</h2>
<ul>
<li>Optimized the communication process for re-sharding in
FlexCheckpoint; added the full interface to paddle.nn.Layer for
returning complete model parameters; supported loading Checkpoints in
the HuggingFace open-source format. (<a
href="https://redirect.github.com/PaddlePaddle/Paddle/pull/76249">#76249</a>,
<a
href="https://redirect.github.com/PaddlePaddle/Paddle/pull/76291">#76291</a>)</li>
<li>Added the sharded_state_dict function to the
group_sharded_optimizer_stage2 optimizer. <a
href="https://redirect.github.com/PaddlePaddle/Paddle/pull/76311">#76311</a></li>
<li>Fixed errors regarding the device_id parameter and a core dump issue
when loading safetensor files using the paddle.load interface. <a
href="https://redirect.github.com/PaddlePaddle/Paddle/pull/76317">#76317</a></li>
<li>Introduced the PipelineDatasetPreprocessor mechanism to eliminate
potential memory leak issues in the pipeline parallelism strategy. <a
href="https://redirect.github.com/PaddlePaddle/Paddle/pull/76260">#76260</a></li>
</ul>
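The FlexCheckpoint re-sharding mentioned above can be illustrated without Paddle at all: re-sharding a checkpoint means reassembling a parameter saved under one shard layout and splitting it again under another. The sketch below is plain Python for illustration only; the name `reshard` and the even 1-D split are assumptions, not Paddle's actual FlexCheckpoint API.

```python
# Hypothetical illustration of checkpoint re-sharding: a parameter saved
# as N shards is reassembled into the full tensor, then re-split into M
# shards. Plain-Python sketch, not Paddle's FlexCheckpoint interface.

def reshard(shards, new_num_shards):
    """Concatenate existing 1-D shards and split them evenly again."""
    full = [x for shard in shards for x in shard]  # gather the full parameter
    if len(full) % new_num_shards:
        raise ValueError("parameter length must divide evenly across shards")
    size = len(full) // new_num_shards
    return [full[i * size:(i + 1) * size] for i in range(new_num_shards)]

# A parameter of length 8 stored as 4 shards, re-sharded for 2 workers.
old_layout = [[0, 1], [2, 3], [4, 5], [6, 7]]
print(reshard(old_layout, 2))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

The real mechanism additionally has to move shard fragments between ranks, which is the communication process the release note says was optimized.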
<h2>2. Operator Mechanisms</h2>
<ul>
<li>Fixed a precision issue in to_tensor for BFloat16 list scenarios. <a
href="https://redirect.github.com/PaddlePaddle/Paddle/pull/76242">#76242</a></li>
</ul>
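The BFloat16 issue above is easier to picture with a standalone sketch: bfloat16 keeps only the top 16 bits of a float32, so converting a Python float list through it rounds away low mantissa bits, and a conversion path that rounds incorrectly shows up as a precision bug. The helper `to_bfloat16` below is hypothetical, built only on the standard library; it is not Paddle's `to_tensor`.

```python
import struct

def to_bfloat16(x):
    """Round a float to bfloat16 precision: keep the top 16 bits of the
    float32 encoding, using round-to-nearest with ties-to-even."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)  # ties go to even
    rounded = (bits + rounding_bias) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", rounded))[0]

print(to_bfloat16(1.0))      # 1.0 — exactly representable in bfloat16
print(to_bfloat16(3.14159))  # 3.140625 — low mantissa bits rounded away
```

With only 8 mantissa bits, every element of a converted list can shift by up to one part in 256, which is why list-conversion paths need careful rounding.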
<h2>3. Hardware Adaptation</h2>
<ul>
<li>Modified the independent XPU memory monitoring module to ensure
consistency with the latest memory monitoring logic. <a
href="https://redirect.github.com/PaddlePaddle/Paddle/pull/76056">#76056</a></li>
</ul>
<h2>4. List of Contributors</h2>
<p>qw86972190, xingmingyyj, zhangbo9674, zhangyuqin1998</p>
<h2>PaddlePaddle 3.2.0 Release Note</h2>
<ul>
<li><a
href="https://github.com/PaddlePaddle/Paddle/wiki/PaddlePaddle-3.2.0-Release-Note-EN">English
Version</a></li>
</ul>
<h1>Important Updates</h1>
<p>PaddlePaddle Framework 3.2 further improves large-model training and
inference performance, hardware adaptation, and support for mainstream
large models and high-performance acceleration libraries.</p>
<ul>
<li>For large-model training, the framework was upgraded in three areas: computation, parallelism strategies, and fault tolerance:
<ul>
<li>At the basic compute-performance level, introduced FlashMask V3, a
sparse-mask attention computation that overlaps memory access with
computation to maximize attention efficiency, and implemented efficient,
accuracy-lossless FP8 mixed-precision training.</li>
<li>At the distributed-parallelism level, introduced a dynamically adaptive memory-offloading strategy for an optimal balance between memory and compute, combined with a newly designed memory-friendly pipeline-parallel schedule to further reduce memory overhead.</li>
<li>Strengthened the framework's native fault tolerance with a fault-tolerance system for large-scale cluster training that detects hard-to-observe failures such as silent data corruption online, without hurting training efficiency, plus a highly available checkpoint disaster-recovery method that reduces the cost of resuming after interruptions.</li>
</ul>
</li>
<li>For hardware adaptation, fully upgraded the plug-in adaptation scheme for CUDA-like chips.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/cbf3469113cd76b7d5f4cba7b8d7d5f55d9e9911"><code>cbf3469</code></a>
add dev pr check (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77097">#77097</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/e9fc013dad9700e74f04f6b19274a4e16b060e1c"><code>e9fc013</code></a>
[Distributed] support rebuild for DygraphShardingOptimizerV2 (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77077">#77077</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/6d964de9c17a5e8995b5597a4e7753b3866a3ed2"><code>6d964de</code></a>
[Bug Fix]Fix GradNodeAccumulation Copy Bug for release/3.3 (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77076">#77076</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/294e2001b3f40c1821ea54f43db98d293cbc69d9"><code>294e200</code></a>
[Cherry-Pick] Fix: Reduce FC memory in Sharding3 save (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77066">#77066</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/bac5a28edb691e7fb5bc7e2d0038e6c34eb86a05"><code>bac5a28</code></a>
[DLPack] Remove stride normalization when convert to DLPack (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77063">#77063</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/fd66eaf3f0e415f50f38bbff24fa84b9cc9f796a"><code>fd66eaf</code></a>
[XPU] fix xpu_top_p_sampling_heuristic_threshold (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77025">#77025</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/27c42ac133207ffa822bc1346bfe265b59872cb5"><code>27c42ac</code></a>
Revert "Reduce: align precision with PyTorch 2.9.1 (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/76590">#76590</a>)"
(<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77034">#77034</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/b245f829f3d882e5357a466751c4b35954aeea22"><code>b245f82</code></a>
Revert "Merge the optimized kernel of fast_ln into layer_norm (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/76890">#76890</a>)"
(<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77030">#77030</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/fd159d1ba9d4a2b20c56f6431a981c8e8731a12a"><code>fd159d1</code></a>
respect to MAX_JOBS env when using cpp extension (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/76870">#76870</a>)
(<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77011">#77011</a>)</li>
<li><a
href="https://github.com/PaddlePaddle/Paddle/commit/82e6069b62d5163b3f505c00c81a11984e9d990b"><code>82e6069</code></a>
Cherry pick of 76998,76999 (<a
href="https://redirect.github.com/paddlepaddle/paddle/issues/77001">#77001</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/paddlepaddle/paddle/compare/v2.6.2...v3.3.0">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
1 parent a6113a9 · commit 0faedbf
1 file changed: +1, −1 (line 16).