Commit 16150e4
Bump the pip-dependencies group across 5 directories with 10 updates (openvinotoolkit#3711)
Updates the requirements on [peft](https://github.com/huggingface/peft),
[pydantic](https://github.com/pydantic/pydantic),
[pytest](https://github.com/pytest-dev/pytest),
[timm](https://github.com/huggingface/pytorch-image-models),
[datasets](https://github.com/huggingface/datasets),
[vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch),
[numpy](https://github.com/numpy/numpy),
[sentence-transformers](https://github.com/huggingface/sentence-transformers),
[pandas](https://github.com/pandas-dev/pandas) and
[huggingface-hub](https://github.com/huggingface/huggingface_hub) to
permit their latest versions.
Updates `peft` from 0.18.1 to 0.19.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/huggingface/peft/releases">peft's
releases</a>.</em></p>
<blockquote>
<h2>v0.19.0</h2>
<h1>Highlights</h1>
<p>This PEFT release contains no less than nine new PEFT methods,
described below. It also contains numerous enhancements that should make
PEFT more useful to many users.</p>
<!-- raw HTML omitted -->
<h2>New Methods</h2>
<h3>GraLoRA</h3>
<p><a
href="https://github.com/yeonjoon-jung01"><code>@yeonjoon-jung01</code></a>
added <a href="https://arxiv.org/abs/2505.20355">"GraLoRA: Granular
Low-Rank Adaptation for Parameter-Efficient Fine-Tuning"</a> to
PEFT (<a
href="https://redirect.github.com/huggingface/peft/issues/2851">#2851</a>).
This method subdivides the base weight into smaller blocks and applies
LoRA to those. This more granular adaptation promises to increase
expressiveness and improve performance, especially at higher ranks
(64+), closing the gap to full fine-tuning.</p>
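To make the block-wise idea concrete, here is a minimal NumPy sketch (illustrative only; the function name, scales, and the grid layout are assumptions, not PEFT's actual implementation):

```python
import numpy as np

def gralora_delta(d, k, blocks, rank, rng):
    """GraLoRA-style weight update (illustrative sketch, not PEFT's API):
    the (d, k) base weight is split into a blocks x blocks grid and every
    sub-block gets its own low-rank A @ B pair."""
    bd, bk = d // blocks, k // blocks
    delta = np.zeros((d, k))
    for i in range(blocks):
        for j in range(blocks):
            A = rng.standard_normal((bd, rank)) * 0.01  # per-block down-proj
            B = rng.standard_normal((rank, bk)) * 0.01  # per-block up-proj
            delta[i * bd:(i + 1) * bd, j * bk:(j + 1) * bk] = A @ B
    return delta

rng = np.random.default_rng(0)
dW = gralora_delta(8, 8, blocks=2, rank=2, rng=rng)
```

Because each sub-block has its own low-rank pair, the full update can reach a higher overall rank than a single global LoRA pair with the same per-block rank, which is the source of the claimed extra expressiveness.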
<h3>BD-LoRA</h3>
<p><a href="https://github.com/Conzel"><code>@Conzel</code></a>
contributed BD-LoRA: <a
href="https://openreview.net/forum?id=1cjLvtFOmL">"Block-Diagonal
LoRA for Eliminating Communication Overhead in Tensor Parallel LoRA
Serving"</a> (<a
href="https://redirect.github.com/huggingface/peft/issues/2895">#2895</a>).
With BD-LoRA, the LoRA weights are implemented in a block-diagonal way.
This reduces communication overhead when using tensor parallelism (TP)
and thus enables faster serving.</p>
<p>There is an experimental branch for BD-LoRA support in vLLM: <a
href="https://redirect.github.com/vllm-project/vllm/issues/28136">vllm-project/vllm#28136</a>.</p>
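A rough NumPy sketch of the block-diagonal structure (illustrative only; shapes, names, and the sharding layout are assumptions, not the paper's or vLLM's implementation):

```python
import numpy as np

def bdlora_delta(d, k, shards, rank, rng):
    """BD-LoRA-style update (illustrative sketch under assumed shapes):
    the output columns are partitioned across tensor-parallel shards and
    the up-projection "B" is block-diagonal, so each shard combines only
    its own rank chunk -- no cross-shard gather of LoRA activations."""
    ks = k // shards
    delta = np.zeros((d, k))
    A = rng.standard_normal((d, rank * shards)) * 0.01  # sharded down-proj
    for s in range(shards):
        B_s = rng.standard_normal((rank, ks)) * 0.01    # shard-local B block
        delta[:, s * ks:(s + 1) * ks] = A[:, s * rank:(s + 1) * rank] @ B_s
    return delta

rng = np.random.default_rng(1)
dW = bdlora_delta(8, 8, shards=2, rank=2, rng=rng)
```

Each shard's output slice depends only on its own rank chunk, which is why no communication between shards is needed when applying the adapter.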
<h3>Cartridges</h3>
<p>Thanks to <a
href="https://github.com/kashif"><code>@kashif</code></a>, PEFT now
also supports <a href="https://arxiv.org/abs/2506.06266">Cartridges</a>
(<a
href="https://redirect.github.com/huggingface/peft/issues/2953">#2953</a>).
The main purpose of this method is to train a prefix to <a
href="https://hazyresearch.stanford.edu/blog/2025-06-08-cartridges">compress
a long context to a short size</a> and thus save on tokens. On a low
level, this is similar to <a
href="https://huggingface.co/docs/peft/package_reference/prefix_tuning">prefix
tuning</a>. The PR also added an <a
href="https://github.com/huggingface/peft/tree/main/examples/cartridge_self_study">example
recipe</a> to quickly get started.</p>
<h3>PVeRA</h3>
<p><a href="https://arxiv.org/abs/2512.07703">"PVeRA: Probabilistic
Vector-Based Random Matrix Adaptation"</a> was added to PEFT by <a
href="https://github.com/leofillioux"><code>@leofillioux</code></a> in
<a
href="https://redirect.github.com/huggingface/peft/issues/2952">#2952</a>.
It is an extension of <a
href="https://huggingface.co/docs/peft/package_reference/vera">VeRA</a>,
a PEFT method that uses weight sharing between layers to be especially
parameter efficient. PVeRA builds on top of that by adding a
probabilistic element, sampling from the shared parameters and promising
better performance overall.</p>
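A hedged NumPy sketch of the idea, building on VeRA's shared frozen factors (the Gaussian sampling scheme below is an assumption for illustration; the paper's exact parameterization may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 8, 4
# Frozen random projections shared across all adapted layers, as in VeRA.
A = rng.standard_normal((r, k))
B = rng.standard_normal((d, r))

def pvera_delta(mu_d, mu_b, sigma, rng):
    """PVeRA-flavoured update (illustrative sketch): VeRA's trainable
    per-layer scaling vectors, but drawn from a learned distribution
    instead of being deterministic."""
    lam_d = mu_d + sigma * rng.standard_normal(r)  # sampled inner scaling
    lam_b = mu_b + sigma * rng.standard_normal(d)  # sampled outer scaling
    return (lam_b[:, None] * B) @ (lam_d[:, None] * A)

dW = pvera_delta(np.ones(r), np.full(d, 0.1), 0.01, rng)
```

Only the small scaling vectors (and here, their sampling parameters) are trained; the large A and B stay frozen and shared, which keeps the parameter count very low.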
<h3>PSOFT</h3>
<p><a href="https://github.com/fei407"><code>@fei407</code></a> added
PSOFT, <a
href="https://openreview.net/forum?id=FSHrinMArK">"Efficient
Orthogonal Fine-Tuning with Principal Subspace Adaptation"</a>, to
PEFT in <a
href="https://redirect.github.com/huggingface/peft/issues/3037">#3037</a>.
Orthogonal fine-tuning techniques like <a
href="https://huggingface.co/docs/peft/package_reference/oft">OFT</a>
and <a
href="https://huggingface.co/docs/peft/package_reference/boft">BOFT</a>
are good at preserving the structure and thus capabilities of the
underlying base model. PSOFT improves the efficiency of this technique by
constraining the adaptation to a low-rank principal subspace.</p>
<h3>Lily</h3>
<p><a href="https://github.com/yibozhong"><code>@yibozhong</code></a>
added Lily: <a href="https://arxiv.org/abs/2407.09946">"Low-Rank
Interconnected Adaptation across Layers"</a> to PEFT in <a
href="https://redirect.github.com/huggingface/peft/issues/2563">#2563</a>.
Lily is on the surface similar to LoRA but has a sophisticated parameter
sharing scheme. The A parameters are shared blockwise (e.g., 4
consecutive q_proj layers share the same A). There is a pool of B
parameters that is shared globally; the actual B's are chosen in a
data-dependent way through a router. This allows Lily to use higher
ranks than LoRA while maintaining a low trainable parameter count.</p>
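The sharing-plus-routing scheme can be sketched in NumPy as follows (illustrative only; the shapes and the soft-mixture router are assumptions, not Lily's exact design):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, pool = 8, 2, 3
A_shared = rng.standard_normal((r, d)) * 0.1       # one A for a layer block
B_pool = rng.standard_normal((pool, d, r)) * 0.1   # globally shared B pool
router_w = rng.standard_normal((pool, d)) * 0.1    # router over the pool

def lily_update(x):
    """Lily-style forward pass (illustrative sketch): a shared
    down-projection, then a data-dependent mixture of the pooled
    up-projections."""
    h = A_shared @ x
    logits = router_w @ x
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()                      # softmax routing weights
    B = np.tensordot(gate, B_pool, axes=1)  # mix the pooled B's
    return B @ h

dy = lily_update(rng.standard_normal(d))
```

Since A and the B pool are shared across many layers, the per-layer trainable cost stays small even when the effective rank is high.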
<h3>PEANuT</h3>
<p>In <a
href="https://redirect.github.com/huggingface/peft/issues/3084">#3084</a>,
<a href="https://arxiv.org/abs/2410.01870">"PEANuT:
Parameter-Efficient Adaptation with Weight-aware Neural
Tweakers"</a> was added to PEFT, again by <a
href="https://github.com/yibozhong"><code>@yibozhong</code></a>. PEANuT
adds a small, neural net (so called weight-aware neural tweakers) to the
base model. Compared to LoRA, this increases expressivity for the same
trainable parameter count or allows to greatly lower the parameter count
without sacrificing expressivity. This comes at the expensive of a
higher memory requirement for the same parameter count and decreased
speed.</p>
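The "weight-aware" part can be sketched as a tiny network whose update is conditioned on the frozen base weight itself (an illustrative NumPy sketch; the paper's exact tweaker architecture may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, hidden = 6, 6, 4
W = rng.standard_normal((d_out, d_in))    # frozen base weight

# Tiny "weight-aware" tweaker: a two-layer net applied to the base
# weight itself, so the generated update depends on W.
U = rng.standard_normal((d_in, hidden)) * 0.1
V = rng.standard_normal((hidden, d_in)) * 0.1

def tweaked_weight(W):
    delta = np.tanh(W @ U) @ V            # update generated from W
    return W + delta

W_new = tweaked_weight(W)
```

The extra forward pass through the tweaker is what trades memory and speed for the gain in expressivity per trainable parameter.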
<h3>TinyLoRA</h3>
<p>We have another serial contributor in <a
href="https://github.com/kashif"><code>@kashif</code></a>, who also
contributed <a href="https://arxiv.org/abs/2602.04118">TinyLoRA:
"Learning to Reason in 13 Parameters"</a> in <a
href="https://redirect.github.com/huggingface/peft/issues/3024">#3024</a>.
This is a PEFT method that allows training an extremely small number of
parameters, much lower than what could be achieved even with LoRA rank
1. The paper shows that in particular with reinforcement learning, it
can often be enough to train just a few parameters to achieve good
results.</p>
<h3>AdaMSS</h3>
<p><a
href="https://github.com/LonglongaaaGo"><code>@LonglongaaaGo</code></a>
added <a
href="https://neurips.cc/virtual/2025/loc/san-diego/poster/119606">"AdaMSS:
Adaptive Multi-Subspace Approach for Parameter-Efficient
Fine-Tuning"</a> to PEFT. This method segments the base weights of
the model into smaller subspaces that are targeted for fine-tuning.
Moreover, it's possible to dynamically assign a lower parameter budget
to less important subspaces during training, similar to what <a
href="https://huggingface.co/docs/peft/package_reference/adalora">AdaLoRA</a>
does. This promises to provide higher expressiveness and better
generalization than similar PEFT methods.</p>
<h2>Enhancements</h2>
<h3>Convert non-LoRA adapters to LoRA</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/huggingface/peft/commit/6d5a6f4f2f902dbf13d21d2661d57c3c05df1dae"><code>6d5a6f4</code></a>
Release 0.19.0 (<a
href="https://redirect.github.com/huggingface/peft/issues/3155">#3155</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/076214c61f690898509b97702b5e9d95c826f000"><code>076214c</code></a>
FIX Explicit weight conversion map for Mixtral (<a
href="https://redirect.github.com/huggingface/peft/issues/3146">#3146</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/b386d5926c61d874eff64e6312de98d56ef1aa3d"><code>b386d59</code></a>
ENH Support models with low precision float dtypes (<a
href="https://redirect.github.com/huggingface/peft/issues/3055">#3055</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/cf9709c5a6d085f34b98727050109d267c342f0a"><code>cf9709c</code></a>
FIX Correct scaling with DARE merging (<a
href="https://redirect.github.com/huggingface/peft/issues/3152">#3152</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/efe0fe6acd72cb3bf1ebfc807c159bf0b9481f5e"><code>efe0fe6</code></a>
Bump the third-party-actions group with 8 updates (<a
href="https://redirect.github.com/huggingface/peft/issues/3125">#3125</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/07a1db6f29086efe0abdc2c296ef455da0412188"><code>07a1db6</code></a>
ENH Checkpoint saving with Tensor Parallel (<a
href="https://redirect.github.com/huggingface/peft/issues/3096">#3096</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/f62f54b66b640c030e315bfe1ff340fe16c6c7af"><code>f62f54b</code></a>
TST Enable arrow xpu tests (<a
href="https://redirect.github.com/huggingface/peft/issues/3145">#3145</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/98465930f7c9666ff952f4c67893620a9ef1e2c3"><code>9846593</code></a>
CI Move slow EVA tests to nightly GPU CI (<a
href="https://redirect.github.com/huggingface/peft/issues/3108">#3108</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/12d872a0ac091beba4f54800e3827f2b3cb478f2"><code>12d872a</code></a>
FIX CI Remove invalid arg in nightly GPU test call (<a
href="https://redirect.github.com/huggingface/peft/issues/3104">#3104</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/9e86c043f39d6b931b5fc63f14761ce0fd878505"><code>9e86c04</code></a>
DOC: Section on weight tying with LoRA (<a
href="https://redirect.github.com/huggingface/peft/issues/3066">#3066</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/huggingface/peft/compare/v0.18.1...v0.19.0">compare
view</a></li>
</ul>
</details>
<br />
Updates `pydantic` from 2.12.5 to 2.13.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic/releases">pydantic's
releases</a>.</em></p>
<blockquote>
<h2>v2.13.0 (2026-04-13)</h2>
<p>The highlights of the v2.13 release are available in the <a
href="https://pydantic.dev/articles/pydantic-v2-13-release">blog
post</a>.
Several minor changes (considered non-breaking changes according to our
<a
href="https://pydantic.dev/docs/validation/2.13/get-started/version-policy/#pydantic-v2">versioning
policy</a>) are also included in this release. Make sure to look into
them before upgrading.</p>
<p>This release contains the updated <code>pydantic.v1</code> namespace,
matching version 1.10.26 which includes support for Python 3.14.</p>
<h3>What's Changed</h3>
<p>See the beta releases for all changes since 2.12.</p>
<h4>Packaging</h4>
<ul>
<li>Add zizmor for GitHub Actions workflow linting by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13039">#13039</a></li>
<li>Update jiter to v0.14.0 to fix a segmentation fault on musl Linux by
<a href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13064">#13064</a></li>
</ul>
<h4>New Features</h4>
<ul>
<li>Allow default factories of private attributes to take validated
model data by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13013">#13013</a></li>
</ul>
<h4>Changes</h4>
<ul>
<li>Warn when serializing fixed length tuples with too few items by <a
href="https://github.com/arvindsaripalli"><code>@arvindsaripalli</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13016">#13016</a></li>
</ul>
<h4>Fixes</h4>
<ul>
<li>Change type of <code>Any</code> when synthesizing
<code>_build_sources</code> for <code>BaseSettings.__init__()</code>
signature in the mypy plugin by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13049">#13049</a></li>
<li>Fix model equality when using runtime <code>extra</code>
configuration by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13062">#13062</a></li>
</ul>
<h3>New Contributors</h3>
<ul>
<li><a
href="https://github.com/arvindsaripalli"><code>@arvindsaripalli</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13016">#13016</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic/compare/v2.12.0...v2.13.0">https://github.com/pydantic/pydantic/compare/v2.12.0...v2.13.0</a></p>
<h2>v2.13.0b3 (2026-03-31)</h2>
<!-- raw HTML omitted -->
<h2>What's Changed</h2>
<h3>Packaging</h3>
<ul>
<li>Add riscv64 build target for manylinux by <a
href="https://github.com/boosterl"><code>@boosterl</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12723">#12723</a></li>
</ul>
<h3>New Features</h3>
<ul>
<li>Add <code>ascii_only</code> option to <code>StringConstraints</code>
by <a
href="https://github.com/ai-man-codes"><code>@ai-man-codes</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12907">#12907</a></li>
<li>Support <code>exclude_if</code> in computed fields by <a
href="https://github.com/andresliszt"><code>@andresliszt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic/pull/12748">#12748</a></li>
<li>Push down constraints in unions involving <code>MISSING</code>
sentinel by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12908">#12908</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic/blob/main/HISTORY.md">pydantic's
changelog</a>.</em></p>
<blockquote>
<h2>v2.13.0 (2026-04-13)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.13.0">GitHub
release</a></p>
<p>The highlights of the v2.13 release are available in the <a
href="https://pydantic.dev/articles/pydantic-v2-13-release">blog
post</a>.
Several minor changes (considered non-breaking changes according to our
<a
href="https://pydantic.dev/docs/validation/2.13/get-started/version-policy/#pydantic-v2">versioning
policy</a>)
are also included in this release. Make sure to look into them before
upgrading.</p>
<p>This release contains the updated <code>pydantic.v1</code> namespace,
matching version 1.10.26 which includes support for Python 3.14.</p>
<h3>What's Changed</h3>
<p>See the beta releases for all changes since 2.12.</p>
<h4>New Features</h4>
<ul>
<li>Allow default factories of private attributes to take validated
model data by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13013">#13013</a></li>
</ul>
<h4>Changes</h4>
<ul>
<li>Warn when serializing fixed length tuples with too few items by <a
href="https://github.com/arvindsaripalli"><code>@arvindsaripalli</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13016">#13016</a></li>
</ul>
<h4>Fixes</h4>
<ul>
<li>Change type of <code>Any</code> when synthesizing
<code>_build_sources</code> for <code>BaseSettings.__init__()</code>
signature in the mypy plugin by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13049">#13049</a></li>
<li>Fix model equality when using runtime <code>extra</code>
configuration by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13062">#13062</a></li>
</ul>
<h4>Packaging</h4>
<ul>
<li>Add zizmor for GitHub Actions workflow linting by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13039">#13039</a></li>
<li>Update jiter to v0.14.0 to fix a segmentation fault on musl Linux by
<a href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13064">#13064</a></li>
</ul>
<h3>New Contributors</h3>
<ul>
<li><a
href="https://github.com/arvindsaripalli"><code>@arvindsaripalli</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13016">#13016</a></li>
</ul>
<h2>v2.13.0b3 (2026-03-31)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.13.0b3">GitHub
release</a></p>
<h3>What's Changed</h3>
<h4>New Features</h4>
<ul>
<li>Add <code>ascii_only</code> option to <code>StringConstraints</code>
by <a
href="https://github.com/ai-man-codes"><code>@ai-man-codes</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12907">#12907</a></li>
<li>Support <code>exclude_if</code> in computed fields by <a
href="https://github.com/andresliszt"><code>@andresliszt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic/pull/12748">#12748</a></li>
<li>Push down constraints in unions involving <code>MISSING</code>
sentinel by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12908">#12908</a></li>
</ul>
<h4>Changes</h4>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/pydantic/pydantic/commit/46bf4fa648af3a1fbf4603a37f210e9d9c618357"><code>46bf4fa</code></a>
Fix Pydantic release workflow (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13067">#13067</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/1b359edab09c623464d23c6fd2503ae5ff276d43"><code>1b359ed</code></a>
Prepare release v2.13.0 (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13065">#13065</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/b1bf19445d8ac144a7a0e82674d2d87eebab6c18"><code>b1bf194</code></a>
Fix model equality when using runtime <code>extra</code> configuration
(<a
href="https://redirect.github.com/pydantic/pydantic/issues/13062">#13062</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/17a35e371bdff348c0690651d324c91fc7c9ff9e"><code>17a35e3</code></a>
Update jiter to v0.14.0 (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13064">#13064</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/feea402b23fa23774669908c4e08a61ba1e4238e"><code>feea402</code></a>
Use <code>simulation</code> mode in Codspeed CI (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13063">#13063</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/671c9b0d4d3f9b2f1b95ca32ac85cb69e824e0bc"><code>671c9b0</code></a>
Add basic benchmarks for model equality (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13061">#13061</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/d17d71e00a35f190b27321aa6f8f2a03139c00b8"><code>d17d71e</code></a>
Bump cryptography from 46.0.6 to 46.0.7 (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13056">#13056</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/919d61ac419af5151b673a90b65c9a12631091cf"><code>919d61a</code></a>
👥 Update Pydantic People (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13059">#13059</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/e7cf5dcb939ea98511e669b647c0273667a1b08a"><code>e7cf5dc</code></a>
Fix people workflow (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13047">#13047</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/2a806ad09b984fcc43568191aba5d965350995a0"><code>2a806ad</code></a>
Add regression test for <code>MISSING</code> sentinel serialization with
subclasses (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13">#13</a>...</li>
<li>Additional commits viewable in <a
href="https://github.com/pydantic/pydantic/compare/v2.12.5...v2.13.0">compare
view</a></li>
</ul>
</details>
<br />
policy</a>)
are also included in this release. Make sure to look into them before
upgrading.</p>
<p>This release contains the updated <code>pydantic.v1</code> namespace,
matching version 1.10.26 which includes support for Python 3.14.</p>
<h3>What's Changed</h3>
<p>See the beta releases for all changes since 2.12.</p>
<h4>New Features</h4>
<ul>
<li>Allow default factories of private attributes to take validated
model data by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13013">#13013</a></li>
</ul>
<h4>Changes</h4>
<ul>
<li>Warn when serializing fixed length tuples with too few items by <a
href="https://github.com/arvindsaripalli"><code>@arvindsaripalli</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13016">#13016</a></li>
</ul>
<h4>Fixes</h4>
<ul>
<li>Change type of <code>Any</code> when synthesizing
<code>_build_sources</code> for <code>BaseSettings.__init__()</code>
signature in the mypy plugin by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13049">#13049</a></li>
<li>Fix model equality when using runtime <code>extra</code>
configuration by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13062">#13062</a></li>
</ul>
<h4>Packaging</h4>
<ul>
<li>Add zizmor for GitHub Actions workflow linting by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13039">#13039</a></li>
<li>Update jiter to v0.14.0 to fix a segmentation fault on musl Linux by
<a href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13064">#13064</a></li>
</ul>
<h3>New Contributors</h3>
<ul>
<li><a
href="https://github.com/arvindsaripalli"><code>@arvindsaripalli</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic/pull/13016">#13016</a></li>
</ul>
<h2>v2.13.0b3 (2026-03-31)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.13.0b3">GitHub
release</a></p>
<h3>What's Changed</h3>
<h4>New Features</h4>
<ul>
<li>Add <code>ascii_only</code> option to <code>StringConstraints</code>
by <a
href="https://github.com/ai-man-codes"><code>@ai-man-codes</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12907">#12907</a></li>
<li>Support <code>exclude_if</code> in computed fields by <a
href="https://github.com/andresliszt"><code>@andresliszt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic/pull/12748">#12748</a></li>
<li>Push down constraints in unions involving <code>MISSING</code>
sentinel by <a
href="https://github.com/Viicos"><code>@Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12908">#12908</a></li>
</ul>
<h4>Changes</h4>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/pydantic/pydantic/commit/46bf4fa648af3a1fbf4603a37f210e9d9c618357"><code>46bf4fa</code></a>
Fix Pydantic release workflow (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13067">#13067</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/1b359edab09c623464d23c6fd2503ae5ff276d43"><code>1b359ed</code></a>
Prepare release v2.13.0 (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13065">#13065</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/b1bf19445d8ac144a7a0e82674d2d87eebab6c18"><code>b1bf194</code></a>
Fix model equality when using runtime <code>extra</code> configuration
(<a
href="https://redirect.github.com/pydantic/pydantic/issues/13062">#13062</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/17a35e371bdff348c0690651d324c91fc7c9ff9e"><code>17a35e3</code></a>
Update jiter to v0.14.0 (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13064">#13064</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/feea402b23fa23774669908c4e08a61ba1e4238e"><code>feea402</code></a>
Use <code>simulation</code> mode in Codspeed CI (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13063">#13063</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/671c9b0d4d3f9b2f1b95ca32ac85cb69e824e0bc"><code>671c9b0</code></a>
Add basic benchmarks for model equality (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13061">#13061</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/d17d71e00a35f190b27321aa6f8f2a03139c00b8"><code>d17d71e</code></a>
Bump cryptography from 46.0.6 to 46.0.7 (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13056">#13056</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/919d61ac419af5151b673a90b65c9a12631091cf"><code>919d61a</code></a>
👥 Update Pydantic People (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13059">#13059</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/e7cf5dcb939ea98511e669b647c0273667a1b08a"><code>e7cf5dc</code></a>
Fix people workflow (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13047">#13047</a>)</li>
<li><a
href="https://github.com/pydantic/pydantic/commit/2a806ad09b984fcc43568191aba5d965350995a0"><code>2a806ad</code></a>
Add regression test for <code>MISSING</code> sentinel serialization with
subclasses (<a
href="https://redirect.github.com/pydantic/pydantic/issues/13">#13</a>...</li>
<li>Additional commits viewable in <a
href="https://github.com/pydantic/pydantic/compare/v2.12.5...v2.13.0">compare
view</a></li>
</ul>
</details>
<br />
Updates `pytest` from 9.0.2 to 9.0.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest/releases">pytest's
releases</a>.</em></p>
<blockquote>
<h2>9.0.3</h2>
<h1>pytest 9.0.3 (2026-04-07)</h1>
<h2>Bug fixes</h2>
<ul>
<li>
<p><a
href="https://redirect.github.com/pytest-dev/pytest/issues/12444">#12444</a>:
Fixed <code>pytest.approx</code>, which now correctly takes
<code>collections.abc.Mapping</code> key order into account when comparing mappings.</p>
</li>
<li>
<p><a
href="https://redirect.github.com/pytest-dev/pytest/issues/13634">#13634</a>:
Blocking a <code>conftest.py</code> file using the <code>-p no:</code>
option is now explicitly disallowed.</p>
<p>Previously this resulted in an internal assertion failure during
plugin loading.</p>
<p>Pytest now raises a clear <code>UsageError</code> explaining that
conftest files are not plugins and cannot be disabled via
<code>-p</code>.</p>
</li>
<li>
<p><a
href="https://redirect.github.com/pytest-dev/pytest/issues/13734">#13734</a>:
Fixed crash when a test raises an exceptiongroup with
<code>__tracebackhide__ = True</code>.</p>
</li>
<li>
<p><a
href="https://redirect.github.com/pytest-dev/pytest/issues/14195">#14195</a>:
Fixed an issue where non-string messages passed to
<code>unittest.TestCase.subTest()</code> were not
printed.</p>
</li>
<li>
<p><a
href="https://redirect.github.com/pytest-dev/pytest/issues/14343">#14343</a>:
Fixed use of insecure temporary directory (CVE-2025-71176).</p>
</li>
</ul>
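As a reminder of the construct the `pytest.approx` fix concerns, comparing mappings of floats looks like this — a generic illustration with made-up values, not the regression case itself:

```python
import pytest

# pytest.approx compares mappings entry by entry with a relative tolerance,
# so tiny floating-point noise does not fail the assertion.
measured = {"temp": 21.0000001, "humidity": 0.45}
expected = {"temp": 21.0, "humidity": 0.45}
assert measured == pytest.approx(expected)
```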
<h2>Improved documentation</h2>
<ul>
<li><a
href="https://redirect.github.com/pytest-dev/pytest/issues/13388">#13388</a>:
Clarified documentation for <code>-p</code> vs
<code>PYTEST_PLUGINS</code> plugin loading and fixed an incorrect
<code>-p</code> example.</li>
<li><a
href="https://redirect.github.com/pytest-dev/pytest/issues/13731">#13731</a>:
Clarified that capture fixtures (e.g. <code>capsys</code> and
<code>capfd</code>) take precedence over the <code>-s</code> /
<code>--capture=no</code> command-line options in &quot;Accessing
captured output from a test function&quot;.</li>
<li><a
href="https://redirect.github.com/pytest-dev/pytest/issues/14088">#14088</a>:
Clarified that the default <code>pytest_collection</code> hook sets
<code>session.items</code> before it calls
<code>pytest_collection_finish</code>, not after.</li>
<li><a
href="https://redirect.github.com/pytest-dev/pytest/issues/14255">#14255</a>:
Updated the reference documentation to clarify that TOML integer log
levels must be quoted.</li>
</ul>
<h2>Contributor-facing changes</h2>
<ul>
<li>
<p><a
href="https://redirect.github.com/pytest-dev/pytest/issues/12689">#12689</a>:
The test reports are now published to Codecov from GitHub Actions.
The test statistics are visible <a
href="https://app.codecov.io/gh/pytest-dev/pytest/tests">on the web
interface</a>.</p>
<p>-- by <code>aleguy02</code></p>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/pytest-dev/pytest/commit/a7d58d7a21b78581e636bbbdea13c66ad1657c1e"><code>a7d58d7</code></a>
Prepare release version 9.0.3</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/089d98199c253d8f89a040243bc4f2aa6cd5ab22"><code>089d981</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest/issues/14366">#14366</a>
from bluetech/revert-14193-backport</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/8127eaf4ab7f6b2fdd0dc1b38343ec97aeef05ac"><code>8127eaf</code></a>
Revert "Fix: assertrepr_compare respects dict insertion order (<a
href="https://redirect.github.com/pytest-dev/pytest/issues/14050">#14050</a>)
(<a
href="https://redirect.github.com/pytest-dev/pytest/issues/14193">#14193</a>)"</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/99a7e6029e7a6e8d53e5df114b1346e035370241"><code>99a7e60</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest/issues/14363">#14363</a>
from pytest-dev/patchback/backports/9.0.x/95d8423bd...</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/ddee02a578da30dd43aedc39c1c1f1aaadfcee95"><code>ddee02a</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest/issues/14343">#14343</a>
from bluetech/cve-2025-71176-simple</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/74eac6916fee34726cb194f16c516e96fbd29619"><code>74eac69</code></a>
doc: Update training info (<a
href="https://redirect.github.com/pytest-dev/pytest/issues/14298">#14298</a>)
(<a
href="https://redirect.github.com/pytest-dev/pytest/issues/14301">#14301</a>)</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/f92dee777cfdb77d1c43633d02766ddf1f07c869"><code>f92dee7</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest/issues/14267">#14267</a>
from pytest-dev/patchback/backports/9.0.x/d6fa26c62...</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/7ee58acc8777c31ac6cf388d01addf5a414a7439"><code>7ee58ac</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest/issues/12378">#12378</a>
from Pierre-Sassoulas/fix-implicit-str-concat-and-d...</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/37da870d37e3a2f5177cae075c7b9ae279432bf8"><code>37da870</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest/issues/14259">#14259</a>
from mitre88/patch-4 (<a
href="https://redirect.github.com/pytest-dev/pytest/issues/14268">#14268</a>)</li>
<li><a
href="https://github.com/pytest-dev/pytest/commit/c34bfa3b7acb65b594707c714f1d8461b0304eed"><code>c34bfa3</code></a>
Add explanation for string context diffs (<a
href="https://redirect.github.com/pytest-dev/pytest/issues/14257">#14257</a>)
(<a
href="https://redirect.github.com/pytest-dev/pytest/issues/14266">#14266</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest/compare/9.0.2...9.0.3">compare
view</a></li>
</ul>
</details>
<br />
Updates `timm` from 1.0.25 to 1.0.26
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/huggingface/pytorch-image-models/releases">timm's
releases</a>.</em></p>
<blockquote>
<h2>Release v1.0.26</h2>
<h2>March 23, 2026</h2>
<ul>
<li>Improve pickle checkpoint handling security. Default all loading to
<code>weights_only=True</code>, add safe_global for ArgParse.</li>
<li>Improve attention mask handling for core ViT/EVA models &
layers. Resolve bool masks, pass <code>is_causal</code> through for SSL
tasks.</li>
<li>Fix class & register token uses with ViT and no pos embed
enabled.</li>
<li>Add Patch Representation Refinement (PRR) as a pooling option in
ViT. Thanks Sina (<a
href="https://github.com/sinahmr">https://github.com/sinahmr</a>).</li>
<li>Improve consistency of output projection / MLP dimensions for
attention pooling layers.</li>
<li>Hiera model F.SDPA optimization to allow Flash Attention kernel
use.</li>
<li>Caution added to SGDP optimizer.</li>
<li>Release 1.0.26. First maintenance release since my departure from
Hugging Face.</li>
</ul>
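The safer checkpoint-loading default can be illustrated with plain PyTorch — a generic sketch of `weights_only=True`, not timm's internal load helpers:

```python
import io

import torch

# weights_only=True restricts torch.load to tensors and other allow-listed
# types, so a malicious pickle payload in a checkpoint cannot execute code.
buffer = io.BytesIO()
torch.save({"weight": torch.ones(3)}, buffer)
buffer.seek(0)

state = torch.load(buffer, weights_only=True)
assert bool((state["weight"] == 1).all())
```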
<h2>What's Changed</h2>
<ul>
<li>fix: replace 5 bare except clauses with except Exception by <a
href="https://github.com/haosenwang1018"><code>@haosenwang1018</code></a>
in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2672">huggingface/pytorch-image-models#2672</a></li>
<li>Add timmx model export tool to README by <a
href="https://github.com/Boulaouaney"><code>@Boulaouaney</code></a> in
<a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2673">huggingface/pytorch-image-models#2673</a></li>
<li>Enhance SGDP optimizer with caution parameter by <a
href="https://github.com/Yuan-Jinghui"><code>@Yuan-Jinghui</code></a>
in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2675">huggingface/pytorch-image-models#2675</a></li>
<li>Fix CLS and Reg tokens usage when pos_embed is disabled by <a
href="https://github.com/sinahmr"><code>@sinahmr</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2676">huggingface/pytorch-image-models#2676</a></li>
<li>default weights_only=True for load fns by <a
href="https://github.com/rwightman"><code>@rwightman</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2679">huggingface/pytorch-image-models#2679</a></li>
<li>Fix Hiera global attention to use 4D tensors for efficient SDPA
dispatch by <a
href="https://github.com/Raiden129"><code>@Raiden129</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2680">huggingface/pytorch-image-models#2680</a></li>
<li>Improve 2d and latent attention pool dimension handling. Fix <a
href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2682">#2682</a>
by <a href="https://github.com/rwightman"><code>@rwightman</code></a>
in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2684">huggingface/pytorch-image-models#2684</a></li>
<li>Improve attention mask handling for vision_transformer and eva and
related blocks by <a
href="https://github.com/rwightman"><code>@rwightman</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2686">huggingface/pytorch-image-models#2686</a></li>
<li>Implement PRR as a pooling module. Alternative to <a
href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2678">#2678</a>
by <a href="https://github.com/rwightman"><code>@rwightman</code></a>
in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2685">huggingface/pytorch-image-models#2685</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/haosenwang1018"><code>@haosenwang1018</code></a>
made their first contribution in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2672">huggingface/pytorch-image-models#2672</a></li>
<li><a href="https://github.com/Raiden129"><code>@Raiden129</code></a>
made their first contribution in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2680">huggingface/pytorch-image-models#2680</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/huggingface/pytorch-image-models/compare/v1.0.25...v1.0.26">https://github.com/huggingface/pytorch-image-models/compare/v1.0.25...v1.0.26</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/8d0f79effa3dbc922afbfb431fbadd4648938de7"><code>8d0f79e</code></a>
Release 1.0.26</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/6e3fdda39508db30766f9d9e6ec32380ebee8b8c"><code>6e3fdda</code></a>
Implement PRR as a pooling module. Alternative to <a
href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2678">#2678</a></li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/8b4239c4d5770f93b11e2295ef0055285aa93901"><code>8b4239c</code></a>
Add comments for DinoV3 re global pool (class token). Fix <a
href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2681">#2681</a></li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/52e6d19d9dde65d1860e3d4151fb75fff038412c"><code>52e6d19</code></a>
Change avg_checkpoints.py to use more secure load helper</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/7a2f49bd49f204f53e301fd121011dffa51eff48"><code>7a2f49b</code></a>
Fix FX tracing on resolve_self_attn_mask</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/61a26c7707045e12ba780cfbcb61653d49e5e37f"><code>61a26c7</code></a>
Improve attention mask handling for vision_transformer and eva and
related bl...</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/3e8def86c480733a355eab96b6475918bc24d801"><code>3e8def8</code></a>
Improve 2d and latent attention pool dimension handling. Fix <a
href="https://redirect.github.com/huggingface/pytorch-image-models/issues/2682">#2682</a></li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/a94c10fce182362e26e128e1b51863dff2a1d558"><code>a94c10f</code></a>
Update version.py</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/0c90043d23a3dc5ab7f67bef060bb922d26bf64d"><code>0c90043</code></a>
fix: branch Hiera MaskUnitAttention into 4D global path for
FlashAttention di...</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/a346c76b5f42a982c4c2108d7328ed9ae7b46465"><code>a346c76</code></a>
Further refine weights_only=True, add safe globals for argparse
Namespace to ...</li>
<li>Additional commits viewable in <a
href="https://github.com/huggingface/pytorch-image-models/compare/v1.0.25...v1.0.26">compare
view</a></li>
</ul>
</details>
<br />
Updates `peft` from 0.18.1 to 0.19.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/huggingface/peft/releases">peft's
releases</a>.</em></p>
<blockquote>
<h2>v0.19.0</h2>
<h1>Highlights</h1>
<p>This PEFT release contains no less than nine new PEFT methods,
described below. It also contains numerous enhancements that should make
PEFT more useful to many users.</p>
<!-- raw HTML omitted -->
<h2>New Methods</h2>
<h3>GraLoRA</h3>
<p><a
href="https://github.com/yeonjoon-jung01"><code>@yeonjoon-jung01</code></a>
added <a href="https://arxiv.org/abs/2505.20355">"GraLoRA: Granular
Low-Rank Adaptation for Parameter-Efficient Fine-Tuning"</a> to
PEFT (<a
href="https://redirect.github.com/huggingface/peft/issues/2851">#2851</a>).
This method subdivides the base weight into smaller blocks and applies
LoRA to those. This more granular adaptation promises to increase
expressiveness and improve performance, especially at higher ranks
(64+), closing the gap to full fine-tuning.</p>
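The block-wise low-rank idea can be sketched in plain NumPy — a hypothetical illustration of the concept, not the PEFT API; all names and sizes here are made up:

```python
import numpy as np

# Hypothetical sketch of the GraLoRA idea: split a weight matrix into
# k x k sub-blocks and give every sub-block its own low-rank B @ A pair,
# instead of one global pair as in plain LoRA.
rng = np.random.default_rng(0)
d, k, r = 8, 2, 1            # weight size, blocks per side, per-block rank
bs = d // k                  # side length of each sub-block
W = rng.standard_normal((d, d))

delta = np.zeros_like(W)
for i in range(k):
    for j in range(k):
        B = rng.standard_normal((bs, r)) * 0.01   # per-block LoRA factors
        A = rng.standard_normal((r, bs))
        delta[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs] = B @ A

W_adapted = W + delta        # same shape as W, adapted block by block
assert W_adapted.shape == W.shape
```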
<h3>BD-LoRA</h3>
<p><a href="https://github.com/Conzel"><code>@Conzel</code></a>
contributed BD-LoRA: <a
href="https://openreview.net/forum?id=1cjLvtFOmL">"Block-Diagonal
LoRA for Eliminating Communication Overhead in Tensor Parallel LoRA
Serving"</a> (<a
href="https://redirect.github.com/huggingface/peft/issues/2895">#2895</a>).
With BD-LoRA, the LoRA weights are implemented in a block-diagonal way.
This reduces communication overhead when using tensor
parallelism (TP), enabling faster serving.</p>
<p>There is an experiment branch for BD-LoRA support in vLLM: <a
href="https://redirect.github.com/vllm-project/vllm/issues/28136">vllm-project/vllm#28136</a>.</p>
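The tensor-parallel-friendly structure can be sketched the same way — again a hypothetical NumPy illustration of the idea, not the PEFT implementation:

```python
import numpy as np

# Hypothetical sketch of a block-diagonal LoRA update: only the diagonal
# blocks of the update are non-zero, so when the base weight is sharded
# across devices, each shard owns its whole adapter block and applying the
# adapter needs no cross-device communication.
rng = np.random.default_rng(0)
d, k, r = 8, 2, 1            # weight size, TP shards, per-block rank
bs = d // k
delta = np.zeros((d, d))
for i in range(k):
    B = rng.standard_normal((bs, r))
    A = rng.standard_normal((r, bs))
    delta[i * bs:(i + 1) * bs, i * bs:(i + 1) * bs] = B @ A

# Off-diagonal blocks are exactly zero by construction.
assert np.allclose(delta[:bs, bs:], 0.0)
assert np.allclose(delta[bs:, :bs], 0.0)
```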
<h3>Cartridges</h3>
<p>Thanks to <a
href="https://github.com/kashif"><code>@kashif</code></a>, PEFT now
also supports <a href="https://arxiv.org/abs/2506.06266">Cartridges</a>
(<a
href="https://redirect.github.com/huggingface/peft/issues/2953">#2953</a>).
The main purpose of this method is to train a prefix to <a
href="https://hazyresearch.stanford.edu/blog/2025-06-08-cartridges">compress
a long context to a short size</a> and thus save on tokens. On a low
level, this is similar to <a
href="https://huggingface.co/docs/peft/package_reference/prefix_tuning">prefix
tuning</a>. The PR also added an <a
href="https://github.com/huggingface/peft/tree/main/examples/cartridge_self_study">example
recipe</a> to quickly get started.</p>
<h3>PVeRA</h3>
<p><a href="https://arxiv.org/abs/2512.07703">"PVeRA: Probabilistic
Vector-Based Random Matrix Adaptation"</a> was added to PEFT by <a
href="https://github.com/leofillioux"><code>@leofillioux</code></a> in
<a
href="https://redirect.github.com/huggingface/peft/issues/2952">#2952</a>.
It is an extension of <a
href="https://huggingface.co/docs/peft/package_reference/vera">VeRA</a>,
a PEFT method that uses weight sharing between layers to be especially
parameter efficient. PVeRA builds on top of that by adding a
probabilistic element, sampling from the shared parameters and promising
better performance overall.</p>
<h3>PSOFT</h3>
<p><a href="https://github.com/fei407"><code>@fei407</code></a> added
PSOFT, <a
href="https://openreview.net/forum?id=FSHrinMArK">"Efficient
Orthogonal Fine-Tuning with Principal Subspace Adaptation"</a>, to
PEFT in <a
href="https://redirect.github.com/huggingface/peft/issues/3037">#3037</a>.
Orthogonal fine-tuning techniques like <a
href="https://huggingface.co/docs/peft/package_reference/oft">OFT</a>
and <a
href="https://huggingface.co/docs/peft/package_reference/boft">BOFT</a>
are good at preserving the structure and thus capabilities of the
underlying base model. PSOFT improves the efficiency of this technique by
constraining the adaptation to a low-rank principal subspace.</p>
<h3>Lily</h3>
<p><a href="https://github.com/yibozhong"><code>@yibozhong</code></a>
added Lily: <a href="https://arxiv.org/abs/2407.09946">"Low-Rank
Interconnected Adaptation across Layers"</a> to PEFT in <a
href="https://redirect.github.com/huggingface/peft/issues/2563">#2563</a>.
Lily is on the surface similar to LoRA but has a sophisticated parameter
sharing scheme. The A parameters are shared blockwise (e.g. 4
consecutive q_proj layers share the same A). There is a pool of B
parameters that is shared globally; the actual B's are chosen in a
data-dependent way through a router. This allows Lily to use higher
ranks than LoRA while maintaining a low trainable parameter count.</p>
<h3>PEANuT</h3>
<p>In <a
href="https://redirect.github.com/huggingface/peft/issues/3084">#3084</a>,
<a href="https://arxiv.org/abs/2410.01870">"PEANuT:
Parameter-Efficient Adaptation with Weight-aware Neural
Tweakers"</a> was added to PEFT, again by <a
href="https://github.com/yibozhong"><code>@yibozhong</code></a>. PEANuT
adds a small neural net (so-called weight-aware neural tweakers) to the
base model. Compared to LoRA, this increases expressivity for the same
trainable parameter count, or allows greatly lowering the parameter count
without sacrificing expressivity. This comes at the expense of a
higher memory requirement for the same parameter count and decreased
speed.</p>
<h3>TinyLoRA</h3>
<p>We have another serial contributor in <a
href="https://github.com/kashif"><code>@kashif</code></a>, who also
contributed <a href="https://arxiv.org/abs/2602.04118">TinyLoRA:
"Learning to Reason in 13 Parameters"</a> in <a
href="https://redirect.github.com/huggingface/peft/issues/3024">#3024</a>.
This is a PEFT method that allows training an extremely small number of
parameters, far fewer than what could be achieved even with LoRA rank
1. The paper shows that in particular with reinforcement learning, it
can often be enough to train just a few parameters to achieve good
results.</p>
<h3>AdaMSS</h3>
<p><a
href="https://github.com/LonglongaaaGo"><code>@LonglongaaaGo</code></a>
added <a
href="https://neurips.cc/virtual/2025/loc/san-diego/poster/119606">"AdaMSS:
Adaptive Multi-Subspace Approach for Parameter-Efficient
Fine-Tuning"</a> to PEFT. This method segments the base weights of
the model into smaller subspaces that are targeted for fine-tuning.
Moreover, it's possible to dynamically assign a lower parameter budget
to less important subspaces during training, similar to what <a
href="https://huggingface.co/docs/peft/package_reference/adalora">AdaLoRA</a>
does. This promises to provide higher expressiveness and better
generalization than similar PEFT methods.</p>
<h2>Enhancements</h2>
<h3>Convert non-LoRA adapters to LoRA</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/huggingface/peft/commit/6d5a6f4f2f902dbf13d21d2661d57c3c05df1dae"><code>6d5a6f4</code></a>
Release 0.19.0 (<a
href="https://redirect.github.com/huggingface/peft/issues/3155">#3155</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/076214c61f690898509b97702b5e9d95c826f000"><code>076214c</code></a>
FIX Explicit weight conversion map for Mixtral (<a
href="https://redirect.github.com/huggingface/peft/issues/3146">#3146</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/b386d5926c61d874eff64e6312de98d56ef1aa3d"><code>b386d59</code></a>
ENH Support models with low precision float dtypes (<a
href="https://redirect.github.com/huggingface/peft/issues/3055">#3055</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/cf9709c5a6d085f34b98727050109d267c342f0a"><code>cf9709c</code></a>
FIX Correct scaling with DARE merging (<a
href="https://redirect.github.com/huggingface/peft/issues/3152">#3152</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/efe0fe6acd72cb3bf1ebfc807c159bf0b9481f5e"><code>efe0fe6</code></a>
Bump the third-party-actions group with 8 updates (<a
href="https://redirect.github.com/huggingface/peft/issues/3125">#3125</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/07a1db6f29086efe0abdc2c296ef455da0412188"><code>07a1db6</code></a>
ENH Checkpoint saving with Tensor Parallel (<a
href="https://redirect.github.com/huggingface/peft/issues/3096">#3096</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/f62f54b66b640c030e315bfe1ff340fe16c6c7af"><code>f62f54b</code></a>
TST Enable arrow xpu tests (<a
href="https://redirect.github.com/huggingface/peft/issues/3145">#3145</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/98465930f7c9666ff952f4c67893620a9ef1e2c3"><code>9846593</code></a>
CI Move slow EVA tests to nightly GPU CI (<a
href="https://redirect.github.com/huggingface/peft/issues/3108">#3108</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/12d872a0ac091beba4f54800e3827f2b3cb478f2"><code>12d872a</code></a>
FIX CI Remove invalid arg in nightly GPU test call (<a
href="https://redirect.github.com/huggingface/peft/issues/3104">#3104</a>)</li>
<li><a
href="https://github.com/huggingface/peft/commit/9e86c043f39d6b931b5fc63f14761ce0fd878505"><code>9e86c04</code></a>
DOC: Section on weight tying with LoRA (<a
href="https://redirect.github.com/huggingface/peft/issues/3066">#3066</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/huggingface/peft/compare/v0.18.1...v0.19.0">compare
view</a></li>
</ul>
</details>
<br />
Updates `datasets` from 3.6.0 to 4.8.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/huggingface/datasets/releases">datasets's
releases</a>.</em></p>
<blockquote>
<h2>4.8.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Support latest torchvision by <a
href="https://github.com/lhoestq"><code>@lhoestq</code></a> in <a
href="https://redirect.github.com/huggingface/datasets/pull/8087">huggingface/datasets#8087</a></li>
<li>fix regression when loading JSON with one file = one object by <a
href="https://github.com/lhoestq"><code>@lhoestq</code></a> in <a
href="https://redirect.github.com/huggingface/datasets/pull/8086">huggingface/datasets#8086</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/huggingface/datasets/compare/4.8.3...4.8.4">https://github.com/huggingface/datasets/compare/4.8.3...4.8.4</a></p>
<h2>4.8.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix split_dataset_by_node step by <a
href="https://github.com/lhoestq"><code>@lhoestq</code></a> in <a
href="https://redirect.github.com/huggingface/datasets/pull/8081">huggingface/datasets#8081</a></li>
<li>Fix docstring of Json.cast_storage by <a
href="https://github.com/albertvillanova"><code>@albertvillanova</code></a>
in <a
href="https://redirect.github.com/huggingface/datasets/pull/8080">huggingface/datasets#8080</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/huggingface/datasets/compare/4.8.2...4.8.3">https://github.com/huggingface/datasets/compare/4.8.2...4.8.3</a></p>
<h2>4.8.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Json type for empty struct by <a
href="https://github.com/lhoestq"><code>@lhoestq</code></a> in <a
href="https://redirect.github.com/huggingface/datasets/pull/8074">huggingface/datasets#8074</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/huggingface/datasets/compare/4.8.1...4.8.2">https://github.com/huggingface/datasets/compare/4.8.1...4.8.2</a></p>
<h2>4.8.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix formatted iter arrow double yield by <a
href="https://github.com/HaukurPall"><code>@HaukurPall</code></a> in <a
href="https://redirect.github.com/huggingface/datasets/pull/8063">huggingface/datasets#8063</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/huggingface/datasets/compare/4.8.0...4.8.1">https://github.com/huggingface/datasets/compare/4.8.0...4.8.1</a></p>
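The "double yield" fix above is about formatted iteration emitting some batches twice. The invariant being restored can be sketched abstractly (hypothetical names, not the `datasets` code): each source batch must be yielded exactly once, after formatting.

```python
def iter_formatted(batches, fmt):
    # Yield each underlying batch exactly once, after applying the
    # formatting function; the regression yielded some batches twice.
    for batch in batches:
        yield fmt(batch)

batches = [[1, 2], [3, 4], [5]]
out = list(iter_formatted(batches, tuple))
assert len(out) == len(batches)  # one output per source batch
print(out)  # → [(1, 2), (3, 4), (5,)]
```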
<h2>4.8.0</h2>
<h2>Dataset Features</h2>
<ul>
<li>
<p>Read (and write) from <a href="https://huggingface.co/storage">HF
Storage Buckets</a>: load raw data, process and save to Dataset Repos by
<a href="https://github.com/lhoestq"><code>@lhoestq</code></a> in <a
href="https://redirect.github.com/huggingface/datasets/pull/8064">huggingface/datasets#8064</a></p>
<pre lang="python"><code>from datasets import load_dataset
# load raw data from a Storage Bucket on HF
ds = load_dataset("buckets/username/data-bucket",
                  data_files=[…])
</code></pre>
</li>
</ul>
</blockquote>
</details>
<br />
1 parent f85174d · commit 16150e4
6 files changed
Lines changed: 16 additions & 16 deletions
File tree
- samples
- tests/python_tests
- tools
  - llm_bench
  - who_what_benchmark