
Commit ee8a380

Release: v2.0.0
1 parent f1f6ae4 commit ee8a380

File tree: 3 files changed (+9, −20 lines)

README.md

Lines changed: 5 additions & 13 deletions
````diff
@@ -41,9 +41,11 @@ If you'd like to use the accelerator-specific features of Optimum, you can check
 | Accelerator | Installation |
 | :---------------------------------------------------------------------------------- | :-------------------------------------------------------------------------- |
 | [ONNX](https://huggingface.co/docs/optimum-onnx/en/index) | `pip install --upgrade --upgrade-strategy eager optimum[onnx]` |
+| [ONNX Runtime](https://huggingface.co/docs/optimum-onnx/onnxruntime/overview) | `pip install --upgrade --upgrade-strategy eager optimum[onnxruntime]` |
+| [ONNX Runtime GPU](https://huggingface.co/docs/optimum-onnx/onnxruntime/overview) | `pip install --upgrade --upgrade-strategy eager optimum[onnxruntime-gpu]` |
 | [Intel Neural Compressor](https://huggingface.co/docs/optimum/intel/index) | `pip install --upgrade --upgrade-strategy eager optimum[neural-compressor]` |
 | [OpenVINO](https://huggingface.co/docs/optimum/intel/index) | `pip install --upgrade --upgrade-strategy eager optimum[openvino]` |
-| [IPEX](https://huggingface.co/docs/optimum/intel/ipex/inference) | `pip install --upgrade --upgrade-strategy eager optimum[ipex]` |
+| [IPEX](https://huggingface.co/docs/optimum/intel/index) | `pip install --upgrade --upgrade-strategy eager optimum[ipex]` |
 | [NVIDIA TensorRT-LLM](https://huggingface.co/docs/optimum/main/en/nvidia_overview) | `docker run -it --gpus all --ipc host huggingface/optimum-nvidia` |
 | [AMD Instinct GPUs and Ryzen AI NPU](https://huggingface.co/docs/optimum/amd/index) | `pip install --upgrade --upgrade-strategy eager optimum[amd]` |
 | [AWS Trainum & Inferentia](https://huggingface.co/docs/optimum-neuron/index) | `pip install --upgrade --upgrade-strategy eager optimum[neuronx]` |
````
````diff
@@ -79,15 +81,15 @@ The [export](https://huggingface.co/docs/optimum/exporters/overview) and optimiz
 
 ### ONNX + ONNX Runtime
 
-🚨🚨🚨 ONNX integration moving to [`optimum-onnx`](https://github.com/huggingface/optimum-onnx) so make sure to follow the installation instructions 🚨🚨🚨
+🚨🚨🚨 ONNX integration was moved to [`optimum-onnx`](https://github.com/huggingface/optimum-onnx) so make sure to follow the installation instructions 🚨🚨🚨
 
 Before you begin, make sure you have all the necessary libraries installed :
 
 ```bash
 pip install --upgrade --upgrade-strategy eager optimum[onnx]
 ```
 
-It is possible to export Transformers, Diffusers, Sentence Transformers and timm models to the [ONNX](https://onnx.ai/) format and perform graph optimization as well as quantization easily.
+It is possible to export Transformers, Diffusers, Sentence Transformers and Timm models to the [ONNX](https://onnx.ai/) format and perform graph optimization as well as quantization easily.
 
 For more information on the ONNX export, please check the [documentation](https://huggingface.co/docs/optimum-onnx/en/onnx/usage_guides/export_a_model).
 
````
````diff
@@ -149,13 +151,3 @@ pip install --upgrade --upgrade-strategy eager optimum[neuronx]
 ```
 
 You can find examples in the [documentation](https://huggingface.co/docs/optimum-neuron/index) and in the [tutorials](https://huggingface.co/docs/optimum-neuron/tutorials/fine_tune_bert).
-
-### ONNX Runtime
-
-Before you begin, make sure you have all the necessary libraries installed :
-
-```bash
-pip install optimum[onnxruntime-training]
-```
-
-You can find examples in the [documentation](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/trainer) and in the [examples](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/training).
````
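The README changes above point installation at the new `optimum[onnx]` extra and the export guide in `optimum-onnx`. As a hedged sketch of that flow (the model id and output directory below are illustrative, not part of this commit), the export described in the guide is typically driven through the `optimum-cli` entry point:

```shell
# Install the ONNX extra (resolved from PyPI as optimum-onnx, per this release).
pip install --upgrade --upgrade-strategy eager "optimum[onnx]"

# Export a Transformers model to ONNX; model id and output dir are illustrative.
optimum-cli export onnx --model distilbert-base-uncased distilbert_onnx/
```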

optimum/version.py

Lines changed: 1 addition & 1 deletion

````diff
@@ -12,4 +12,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-__version__ = "2.0.0.dev0"
+__version__ = "2.0.0"
````
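The version bump above drops the `.dev0` suffix; under PEP 440, a `.devN` suffix marks a development pre-release that sorts before the final release of the same number. A minimal stdlib-only sketch of that distinction (the helper name is ours, not optimum's):

```python
def is_dev_release(version: str) -> bool:
    """True for PEP 440 dev releases like '2.0.0.dev0', which precede the final release."""
    return ".dev" in version

# This commit flips the package from a pre-release to the final release string.
assert is_dev_release("2.0.0.dev0")
assert not is_dev_release("2.0.0")
```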

setup.py

Lines changed: 3 additions & 6 deletions

````diff
@@ -43,12 +43,6 @@
 BENCHMARK_REQUIRE = ["optuna", "tqdm", "scikit-learn", "seqeval", "torchvision", "evaluate>=0.2.0"]
 
 EXTRAS_REQUIRE = {
-    ###########################################################################
-    # until optimum-onnx is released on PyPI
-    "onnx": "optimum-onnx @ git+https://github.com/huggingface/optimum-onnx.git",
-    "onnxruntime": "optimum-onnx[onnxruntime] @ git+https://github.com/huggingface/optimum-onnx.git",
-    "onnxruntime-gpu": "optimum-onnx[onnxruntime-gpu] @ git+https://github.com/huggingface/optimum-onnx.git",
-    ###########################################################################
     "amd": "optimum-amd",
     "furiosa": "optimum-furiosa",
     "graphcore": "optimum-graphcore",
@@ -58,6 +52,9 @@
     "nncf": "optimum-intel[nncf]>=1.23.0",
     "neural-compressor": "optimum-intel[neural-compressor]>=1.23.0",
     "openvino": "optimum-intel[openvino]>=1.23.0",
+    "onnx": "optimum-onnx",
+    "onnxruntime": "optimum-onnx[onnxruntime]",
+    "onnxruntime-gpu": "optimum-onnx[onnxruntime-gpu]",
     "quanto": "optimum-quanto>=0.2.4",
     ###########################################################################
     "dev": TESTS_REQUIRE + QUALITY_REQUIRE,
````
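The setup.py change above swaps PEP 508 direct references (`name @ git+URL`) for plain PyPI requirement strings now that `optimum-onnx` is published. A minimal sketch of how such an `extras_require` mapping answers `pip install optimum[<extra>]` (the lookup helper is illustrative, not setuptools internals):

```python
# The ONNX-related extras as they stand after this commit (subset of EXTRAS_REQUIRE).
EXTRAS_REQUIRE = {
    "onnx": "optimum-onnx",
    "onnxruntime": "optimum-onnx[onnxruntime]",
    "onnxruntime-gpu": "optimum-onnx[onnxruntime-gpu]",
}

def requirement_for(extra: str) -> str:
    # Illustrative: pip resolves the extra name to this requirement string,
    # which it then fetches from PyPI instead of a git URL.
    return EXTRAS_REQUIRE[extra]

assert requirement_for("onnxruntime") == "optimum-onnx[onnxruntime]"
```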
