
Commit dab51f7

Release note (v2.9) (#5114)

1 parent 8e17010 commit dab51f7

File tree: 3 files changed, +107 -2 lines
README.md (+1 -1)

@@ -20,7 +20,7 @@ NNI automates feature engineering, neural architecture search, hyperparameter tu

 ## What's NEW! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>

-* **New release**: [v2.8 is available](https://github.com/microsoft/nni/releases/tag/v2.8) - _released on June-22-2022_
+* **New release**: [v2.9 is available](https://github.com/microsoft/nni/releases/tag/v2.9) - _released on Sept-8-2022_
 * **New demo available**: [Youtube entry](https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw) | [Bilibili 入口](https://space.bilibili.com/1649051673) - _last updated on June-22-2022_
 * **New research paper**: [SparTA: Deep-Learning Model Sparsity via Tensor-with-Sparsity-Attribute](https://www.usenix.org/system/files/osdi22-zheng-ningxin.pdf) - _published in OSDI 2022_
 * **New research paper**: [Privacy-preserving Online AutoML for Domain-Specific Face Detection](https://openaccess.thecvf.com/content/CVPR2022/papers/Yan_Privacy-Preserving_Online_AutoML_for_Domain-Specific_Face_Detection_CVPR_2022_paper.pdf) - _published in CVPR 2022_

docs/source/conf.py (+1 -1)

@@ -31,7 +31,7 @@

 version = ''
 # The full version, including alpha/beta/rc tags
 # FIXME: this should be written somewhere globally
-release = 'v2.8'
+release = 'v2.9'

 # -- General configuration ---------------------------------------------------

docs/source/release.rst (+105)

@@ -5,6 +5,111 @@

Change Log
==========

Release 2.9 - 9/8/2022
----------------------
Neural Architecture Search
^^^^^^^^^^^^^^^^^^^^^^^^^^

* New tutorial on the model space hub and one-shot strategies.
  (`tutorial <https://nni.readthedocs.io/en/v2.9/tutorials/darts.html>`__)
* Add pretrained checkpoints to AutoFormer.
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/nas/search_space.html#nni.retiarii.hub.pytorch.AutoformerSpace>`__)
* Support loading the checkpoint of a trained supernet into a subnet.
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/nas/strategy.html#nni.retiarii.strategy.RandomOneShot>`__)
* Support viewing and resuming a NAS experiment (see the sketch after this list).
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/nas/others.html#nni.retiarii.experiment.pytorch.RetiariiExperiment.resume>`__)
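For illustration, a minimal sketch of the new view/resume support. The class-method form, the ``port`` argument, and the experiment id are assumptions mirroring ``nni.experiment.Experiment``; see the linked doc for the authoritative signature.

.. code-block:: python

   # Sketch: resume or view a previously run NAS experiment.
   # 'EXPERIMENT_ID' and the port are placeholders; the exact signatures of
   # resume()/view() are assumed to mirror nni.experiment.Experiment.
   from nni.retiarii.experiment.pytorch import RetiariiExperiment

   # Resume a stopped experiment and continue submitting trials.
   exp = RetiariiExperiment.resume('EXPERIMENT_ID', port=8081)

   # Or open a finished experiment read-only on the web portal.
   RetiariiExperiment.view('EXPERIMENT_ID', port=8081)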
Enhancements
""""""""""""

* Support ``fit_kwargs`` in the Lightning evaluator (a sketch follows this list).
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/nas/evaluator.html#nni.retiarii.evaluator.pytorch.Lightning>`__)
* Support ``drop_path`` and ``auxiliary_loss`` in NASNet.
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/nas/search_space.html#nasnet>`__)
* Support gradient clipping in DARTS.
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/nas/strategy.html#nni.retiarii.strategy.DARTS>`__)
* Add ``export_probs`` to monitor the architecture weights.
* Rewrite ``configure_optimizers``, the functions that step optimizers and schedulers, and other hooks, for simplicity and for compatibility with the latest Lightning (v1.7).
* Align the implementation of DifferentiableCell with the official DARTS repository.
* Re-implement ProxylessNAS.
* Move the ``nni.retiarii`` code base to ``nni.nas``.
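A minimal sketch combining ``fit_kwargs`` with a DARTS strategy that clips gradients. The import paths, the ``gradient_clip_val`` argument name, and the user module/dataloaders are assumptions based on the linked docs, not verbatim from this release.

.. code-block:: python

   # Sketch: forward extra arguments to Trainer.fit via the Lightning evaluator,
   # and enable gradient clipping in the DARTS one-shot strategy.
   # MyLightningModule, train_loader and val_loader are hypothetical placeholders.
   from nni.retiarii.evaluator.pytorch import Lightning, Trainer
   from nni.retiarii.strategy import DARTS

   evaluator = Lightning(
       MyLightningModule(),                    # hypothetical user LightningModule
       Trainer(max_epochs=10),
       train_dataloaders=train_loader,         # placeholder dataloaders
       val_dataloaders=val_loader,
       fit_kwargs={'ckpt_path': 'last.ckpt'},  # assumed to be forwarded to Trainer.fit
   )

   strategy = DARTS(gradient_clip_val=5.0)     # argument name assumed from the strategy doc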
Bug fixes
"""""""""

* Fix a performance issue caused by tensor formatting in ``weighted_sum``.
* Fix a misuse of a lambda expression in the NAS-Bench-201 search space.
* Fix the Gumbel temperature schedule in Gumbel DARTS.
* Fix architecture weight sharing when labels are shared in differentiable strategies.
* Fix memo reuse when exporting a differentiable cell.
Compression
^^^^^^^^^^^

* New tutorial on pruning a transformer model.
  (`tutorial <https://nni.readthedocs.io/en/v2.9/tutorials/pruning_bert_glue.html>`__)
* Add ``TorchEvaluator``, ``LightningEvaluator``, and ``TransformersEvaluator`` to ease the expression of training logic in pruners (a sketch follows this list).
  (`doc <https://nni.readthedocs.io/en/v2.9/compression/compression_evaluator.html>`__,
  `API <https://nni.readthedocs.io/en/v2.9/reference/compression/evaluator.html>`__)
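A minimal sketch of wrapping training logic in a ``TorchEvaluator`` and handing it to a pruner. The import paths, the ``TorchEvaluator`` and ``TaylorFOWeightPruner`` signatures, and the training loop are assumptions based on the linked API page, not verbatim from this release.

.. code-block:: python

   # Sketch: express the training logic once via an evaluator, then reuse it in a pruner.
   # Import paths, signatures and the pruner choice are assumed from the v2.9 docs.
   import torch
   import nni
   from nni.compression.pytorch import TorchEvaluator
   from nni.compression.pytorch.pruning import TaylorFOWeightPruner

   model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))

   def training_func(model, optimizers, criterion, lr_schedulers=None,
                     max_steps=None, max_epochs=None):
       # Placeholder training loop; the pruner calls this with the wrapped model.
       model.train()
       ...

   # Optimizers are traced so the pruner can re-create them (assumed requirement).
   optimizer = nni.trace(torch.optim.Adam)(model.parameters(), lr=1e-3)
   evaluator = TorchEvaluator(training_func, optimizer, torch.nn.functional.cross_entropy)

   pruner = TaylorFOWeightPruner(
       model,
       config_list=[{'sparsity': 0.5, 'op_types': ['Linear']}],
       evaluator=evaluator,
       training_steps=100,
   )
   _, masks = pruner.compress()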
Enhancements
""""""""""""

* Promote all pruner APIs to use ``Evaluator``; the old API is deprecated and will be removed in v3.0.
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/compression/pruner.html>`__)
* Greatly enlarge the set of operators supported in pruning speedup via automatic operator conversion.
* Support ``lr_scheduler`` in pruning when using ``Evaluator``.
* Support pruning NLP tasks in ``ActivationAPoZRankPruner`` and ``ActivationMeanRankPruner``.
* Add ``training_steps``, ``regular_scale``, ``movement_mode``, and ``sparse_granularity`` to ``MovementPruner`` (a sketch follows this list).
  (`doc <https://nni.readthedocs.io/en/v2.9/reference/compression/pruner.html#movement-pruner>`__)
* Add ``GroupNorm`` replacement in pruning speedup. Thanks to external contributor `@cin-xing <https://github.com/cin-xing>`__.
* Optimize ``balance`` mode performance in ``LevelPruner``.
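A minimal sketch of the new ``MovementPruner`` arguments named above. Only the four argument names come from this note; the values, the evaluator, and the import path are illustrative assumptions (see the linked doc for the supported options).

.. code-block:: python

   # Sketch: configure MovementPruner with the arguments added in v2.9.
   # ``model`` and ``evaluator`` are placeholders (e.g. built as in the previous
   # sketch); all values below are illustrative, not recommended defaults.
   from nni.compression.pytorch.pruning import MovementPruner

   pruner = MovementPruner(
       model,
       config_list=[{'sparsity': 0.5, 'op_types': ['Linear']}],
       evaluator=evaluator,
       training_steps=6000,          # total steps driving the movement schedule
       regular_scale=10,             # strength of the sparsity regularization (assumed meaning)
       movement_mode='soft',         # assumed choices: 'hard' / 'soft'
       sparse_granularity='auto',    # assumed choices: 'auto' / 'finegrained'
   )
   _, masks = pruner.compress()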
Bug fixes
"""""""""

* Fix the invalid ``dependency_aware`` mode in scheduled pruners.
* Fix a bug where the ``bias`` mask could not be generated.
* Fix a bug where ``max_sparsity_per_layer`` had no effect.
* Fix ``Linear`` and ``LayerNorm`` speedup replacement in NLP tasks.
* Fix tracing of ``LightningModule`` failing with ``pytorch_lightning >= 1.7.0``.
Hyper-parameter optimization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Fix a bug where weights were not defined correctly in ``adaptive_parzen_normal`` of TPE.
Training service
^^^^^^^^^^^^^^^^

* Fix a ``trialConcurrency`` bug in the K8S training service: use ``${envId}_run.sh`` to replace ``run.sh``.
* Fix an upload-directory bug in the K8S training service: use a separate working directory for each experiment. Thanks to external contributor `@amznero <https://github.com/amznero>`__.
Web portal
^^^^^^^^^^

* Support dict keys in the Default metric chart on the detail page.
* Show experiment error messages in small popup windows at the bottom right of the page.
* Upgrade React Router to v6 to fix an index-route issue.
* Fix the detail page crashing when choices contain ``None``.
* Fix the missing dict intermediate dropdown in the compare-trials dialog.
Known issues
^^^^^^^^^^^^

* Activation-based pruners cannot support activations of shape ``[batch, seq, hidden]``.
* Failed trials are NOT auto-submitted when an experiment is resumed
  (`[FEAT]: resume waiting/running, dedup on tuner side (TPE-only) #4931 <https://github.com/microsoft/nni/pull/4931>`__ was reverted due to its pitfalls).
Release 2.8 - 6/22/2022
-----------------------
