Commit 39a0f2c
Update accelerate requirement from <=1.12.0,>=0.26.0 to >=0.26.0,<=1.13.0 in /tools/who_what_benchmark (openvinotoolkit#3511)
Updates the requirements on [accelerate](https://github.com/huggingface/accelerate) to permit the latest version.

<details>
<summary>Release notes</summary>

*Sourced from [accelerate's releases](https://github.com/huggingface/accelerate/releases).*

## v1.13.0: Neuron support, IPEX removal, and distributed training fixes

### AWS Neuron support

We now have support for AWS Neuron (Trainium/Inferentia) devices. Thanks [@michaelbenayoun](https://github.com/michaelbenayoun) for adding this.

- Neuron integration by [@michaelbenayoun](https://github.com/michaelbenayoun) in [huggingface/accelerate#3935](https://redirect.github.com/huggingface/accelerate/pull/3935)

### XPU improvements

We've removed the IPEX dependency and improved device-agnostic code for XPU.

- using spawn instead of fork for XPU device by [@kaixuanliu](https://github.com/kaixuanliu) in [huggingface/accelerate#3884](https://redirect.github.com/huggingface/accelerate/pull/3884)
- Remove ipex by [@yao-matrix](https://github.com/yao-matrix) in [huggingface/accelerate#3883](https://redirect.github.com/huggingface/accelerate/pull/3883)
- enhance new codes to XPU, and make them be device agnostic by [@yao-matrix](https://github.com/yao-matrix) in [huggingface/accelerate#3890](https://redirect.github.com/huggingface/accelerate/pull/3890)
- Fix KMP_AFFINITY incorrectly set for non-CPU training by [@hexfaker](https://github.com/hexfaker) in [huggingface/accelerate#3912](https://redirect.github.com/huggingface/accelerate/pull/3912)

### FSDP2 improvements

We've added a number of important fixes for FSDP2 users: upcasting only grad-requiring params, better tied-embedding errors, DCP optimizer loading, a bf16 optimizer-step crash fix, and torch < 2.7.0 compatibility.

- Upcast FSDP2 parameters only if requires_grad by [@ojh31](https://github.com/ojh31) in [huggingface/accelerate#3848](https://redirect.github.com/huggingface/accelerate/pull/3848)
- Fix FSDP2 tied embedding errors with targeted ValueError guidance by [@amanzoni1](https://github.com/amanzoni1) in [huggingface/accelerate#3878](https://redirect.github.com/huggingface/accelerate/pull/3878)
- bug: fsdp cannot load optimizer state using dcp by [@flymin](https://github.com/flymin) in [huggingface/accelerate#3904](https://redirect.github.com/huggingface/accelerate/pull/3904)
- fix crash in optimizer.step when fsdp2 is enabled and model is bfloat16 by [@sywangyi](https://github.com/sywangyi) in [huggingface/accelerate#3905](https://redirect.github.com/huggingface/accelerate/pull/3905)
- Fix FSDP2 crash with ignored_params on torch < 2.7.0 by [@Mr-Neutr0n](https://github.com/Mr-Neutr0n) in [huggingface/accelerate#3924](https://redirect.github.com/huggingface/accelerate/pull/3924)

### DeepSpeed Sequence Parallelism

We've added several fixes to the DeepSpeed + Sequence Parallelism integration introduced in v1.12.0, including evaluation support during SP training and proper process-group handling.

- [SP] fix loss computation example by [@kashif](https://github.com/kashif) in [huggingface/accelerate#3858](https://redirect.github.com/huggingface/accelerate/pull/3858)
- [SP and CP] error out if both CP and SP enabled by [@kashif](https://github.com/kashif) in [huggingface/accelerate#3862](https://redirect.github.com/huggingface/accelerate/pull/3862)
- DeepSpeed has its own process group by [@kashif](https://github.com/kashif) in [huggingface/accelerate#3916](https://redirect.github.com/huggingface/accelerate/pull/3916)
- [Deepspeed] skip device mesh creation when deepspeed and sp_size > 1 by [@kashif](https://github.com/kashif) in [huggingface/accelerate#3914](https://redirect.github.com/huggingface/accelerate/pull/3914)
- Enable evaluation during deepspeed Sequence Parallel by [@jp1924](https://github.com/jp1924) in [huggingface/accelerate#3917](https://redirect.github.com/huggingface/accelerate/pull/3917)

### FP8

We've enhanced FP8 training. Thanks [@shimizust](https://github.com/shimizust) for fixing torchao support.

- Fix FP8 torchao default config with padding and FSDP2 all-gather support by [@shimizust](https://github.com/shimizust) in [huggingface/accelerate#3831](https://redirect.github.com/huggingface/accelerate/pull/3831)
- Fix execution with Transformer Engine by [@ksivaman](https://github.com/ksivaman) in [huggingface/accelerate#3852](https://redirect.github.com/huggingface/accelerate/pull/3852)
- add MS-AMP deprecation warnings by [@neha222222](https://github.com/neha222222) in [huggingface/accelerate#3857](https://redirect.github.com/huggingface/accelerate/pull/3857)

### Performance

Accelerate now imports faster by deferring heavy dependencies, and torch.compile hooks are disabled lazily.

- Faster import by [@SunMarc](https://github.com/SunMarc) in [huggingface/accelerate#3953](https://redirect.github.com/huggingface/accelerate/pull/3953)
- lazy compile disable by [@SunMarc](https://github.com/SunMarc) in [huggingface/accelerate#3947](https://redirect.github.com/huggingface/accelerate/pull/3947)
- Disable hook compile by [@SunMarc](https://github.com/SunMarc) in [huggingface/accelerate#3888](https://redirect.github.com/huggingface/accelerate/pull/3888)

### Minor fixes

- Allow non-Tensor values in a batch with dispatch_batches=True by [@tomaarsen](https://github.com/tomaarsen) in [huggingface/accelerate#3850](https://redirect.github.com/huggingface/accelerate/pull/3850)
- fix module and optimizer parameter mismatch before prepare_tp_ by [@naomili0924](https://github.com/naomili0924) in [huggingface/accelerate#3845](https://redirect.github.com/huggingface/accelerate/pull/3845)
- Fix KeyError in extract_model_from_parallel for partial torch.compile by [@amanzoni1](https://github.com/amanzoni1) in [huggingface/accelerate#3881](https://redirect.github.com/huggingface/accelerate/pull/3881)
- Fix hf_device_map device index comparison in prepare_model by [@rezaqorbani](https://github.com/rezaqorbani) in [huggingface/accelerate#3895](https://redirect.github.com/huggingface/accelerate/pull/3895)
- Fix StatefulDataLoader KeyError with num_workers > 0 by [@veeceey](https://github.com/veeceey) in [huggingface/accelerate#3931](https://redirect.github.com/huggingface/accelerate/pull/3931)
- Fix stateful dataloader DDP by [@SunMarc](https://github.com/SunMarc) in [huggingface/accelerate#3952](https://redirect.github.com/huggingface/accelerate/pull/3952)
- Fix: Remove duplicate W&B initialization in offline mode by [@shantanugupta2004](https://github.com/shantanugupta2004) in [huggingface/accelerate#3886](https://redirect.github.com/huggingface/accelerate/pull/3886)

... (truncated)
</details>

<details>
<summary>Commits</summary>

- [`e6ee133`](https://github.com/huggingface/accelerate/commit/e6ee1337014f6f97c3cf58f806aa28a0109f09a5) Release: v1.13.0
- [`2a7e27f`](https://github.com/huggingface/accelerate/commit/2a7e27f75d25def4c2cd6011afe56c47b7b9438b) Fix testing ci ([#3955](https://redirect.github.com/huggingface/accelerate/issues/3955))
- [`0990ded`](https://github.com/huggingface/accelerate/commit/0990ded55acd8c4f363e767e6c851cc3701d1c20) Faster import ([#3953](https://redirect.github.com/huggingface/accelerate/issues/3953))
- [`5cf9cf8`](https://github.com/huggingface/accelerate/commit/5cf9cf88a4deeb7d627fc3efadd9af5a77353888) fix-stateful-dataloader ([#3952](https://redirect.github.com/huggingface/accelerate/issues/3952))
- [`beed693`](https://github.com/huggingface/accelerate/commit/beed693e4f58820ad97c79e4373af944c8fdb3d4) Prepare TP fix ([#3945](https://redirect.github.com/huggingface/accelerate/issues/3945))
- [`8067aba`](https://github.com/huggingface/accelerate/commit/8067abae81abbd176af65dc9694f9f99dacf3985) Fix StatefulDataLoader KeyError with num_workers > 0 ([#3931](https://redirect.github.com/huggingface/accelerate/issues/3931))
- [`8ec83c8`](https://github.com/huggingface/accelerate/commit/8ec83c8aa264baf04a298e02aa07d1540463cce2) Fix FSDP2 crash with ignored_params on torch < 2.7.0 ([#3924](https://redirect.github.com/huggingface/accelerate/issues/3924))
- [`7554afb`](https://github.com/huggingface/accelerate/commit/7554afbc7acb936cf888e68421012654f4e2016c) Fix mutable default in Megatron init and IndexError on empty ModuleList ([#3944](https://redirect.github.com/huggingface/accelerate/issues/3944))
- [`23f2ab3`](https://github.com/huggingface/accelerate/commit/23f2ab396713bc915f726c1af4a066e1654f854c) Fix logging logic when in_order is set to True ([#3280](https://redirect.github.com/huggingface/accelerate/issues/3280))
- [`58c3605`](https://github.com/huggingface/accelerate/commit/58c3605fee95c81633a5619af8f823a3cb0610cb) docs: update low-precision training docs to reflect MS-AMP deprecation ([#3929](https://redirect.github.com/huggingface/accelerate/issues/3929))
- Additional commits viewable in the [compare view](https://github.com/huggingface/accelerate/compare/v0.26.0...v1.13.0)
</details>

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

---

<details>
<summary>Dependabot commands and options</summary>

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Parent commit: 4536200

1 file changed: 1 addition & 1 deletion

File tree

tools/who_what_benchmark/requirements.txt

```diff
@@ -1,4 +1,4 @@
-accelerate>=0.26.0,<=1.12.0
+accelerate>=0.26.0,<=1.13.0
 transformers[sentencepiece]>=4.35.2,<=4.57.6
 sentence-transformers>=2.2.2,<=5.2.2
 openvino-genai
```
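The change itself is a one-line bump to the upper bound of the `accelerate` version specifier. As a quick sanity check (not part of this PR), the `packaging` library, the same requirement parser pip relies on, can confirm which versions each range admits:

```python
# Hypothetical verification of the specifier change in requirements.txt.
# `packaging` is the PyPA library that implements PEP 440 version matching.
from packaging.specifiers import SpecifierSet

old_range = SpecifierSet(">=0.26.0,<=1.12.0")  # before this commit
new_range = SpecifierSet(">=0.26.0,<=1.13.0")  # after this commit

# The new upper bound admits accelerate 1.13.0; the old one did not.
print(old_range.contains("1.13.0"))  # False
print(new_range.contains("1.13.0"))  # True
# The lower bound is unchanged, so the oldest supported release still matches.
print(new_range.contains("0.26.0"))  # True
```

Dependabot widens only the `<=` cap here, which is why the diff touches a single character of the version string.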
