docs/build/web.md (2 additions, 2 deletions)
@@ -5,7 +5,7 @@ description: Learn how to build ONNX Runtime from source to deploy on the web
 nav_order: 4
 redirect_from: /docs/how-to/build/web
 ---
-
+
 # Build ONNX Runtime for Web
 {: .no_toc }

@@ -168,7 +168,7 @@ This is the last stage in the build process, please follow the sections in a sequence
 - Download artifacts from pipeline manually.

-you can download prebuilt WebAssembly artifacts from [Windows WebAssembly CI Pipeline](https://dev.azure.com/onnxruntime/onnxruntime/_build?definitionId=161&_a=summary). Select a build, download artifact "Release_wasm" and unzip. See instructions below to put files into destination folders.
+you can download prebuilt WebAssembly artifacts from [Windows WebAssembly CI Pipeline](https://github.com/microsoft/onnxruntime/actions/workflows/web.yml). Select a build, download artifact "Release_wasm" and unzip. See instructions below to put files into destination folders.
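As an illustration of that step, here is a minimal Python sketch that unpacks a downloaded "Release_wasm" artifact and copies the WebAssembly files into a destination folder. The archive name, staging path, and destination folder are placeholders, not values from this diff; the actual destination folders are the ones described in the build documentation itself.

```python
# Sketch: unpack a downloaded "Release_wasm" artifact and copy the .wasm files
# into a destination folder. All paths below are placeholders.
import shutil
import zipfile
from pathlib import Path

artifact_zip = Path("Release_wasm.zip")          # artifact downloaded from the CI pipeline
staging_dir = Path("Release_wasm")               # temporary unzip location
destination = Path("onnxruntime/js/web/dist")    # placeholder destination folder

with zipfile.ZipFile(artifact_zip) as zf:
    zf.extractall(staging_dir)

destination.mkdir(parents=True, exist_ok=True)
for wasm_file in staging_dir.rglob("*.wasm"):
    shutil.copy2(wasm_file, destination / wasm_file.name)  # put each file into place
```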
-ONNXRuntime-Extensions is a library that extends the capability of the ONNX models and inference with ONNX Runtime, via the ONNX Runtime custom operator interface. It includes a set of Custom Operators to support common model pre and post-processing for audio, vision, text, and language models. As with ONNX Runtime, Extensions also supports multiple languages and platforms (Python on Windows/Linux/macOS, Android and iOS mobile platforms and Web assembly for web).
+ONNX Runtime Extensions is a library that extends the capability of the ONNX models and inference with ONNX Runtime, via the ONNX Runtime custom operator interface. It includes a set of Custom Operators to support common model pre and post-processing for audio, vision, text, and language models. As with ONNX Runtime, Extensions also supports multiple languages and platforms (Python on Windows/Linux/macOS, Android and iOS mobile platforms and Web assembly for web).

 The basic workflow is to add the custom operators to an ONNX model and then to perform inference on the enhanced model with ONNX Runtime and ONNXRuntime-Extensions packages.
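A minimal sketch of that workflow is shown below: it registers the Extensions custom-operator library with an ONNX Runtime session and runs the enhanced model. The model file name and input feed are placeholders, assuming a model whose custom operators take a string tensor.

```python
# Sketch: run a model that uses ONNX Runtime Extensions custom operators.
# The model path and input name are placeholders.
import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

session_options = ort.SessionOptions()
# Make the Extensions custom operators available to the session.
session_options.register_custom_ops_library(get_library_path())

session = ort.InferenceSession("model_with_pre_post_processing.onnx", session_options)
outputs = session.run(None, {"input_text": np.array(["hello world"])})
print(outputs[0])
```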
docs/genai/howto/build-model.md (1 addition, 153 deletions)
@@ -8,158 +8,6 @@ nav_order: 3
 ---

 # Generate models using Model Builder
-{: .no_toc }

-* TOC placeholder
-{:toc}
+Refer to [model builder guide](https://github.com/microsoft/onnxruntime-genai/blob/main/src/python/py/models/README.md) for the latest documentation.

-The model builder greatly accelerates creating optimized and quantized ONNX models that run with the ONNX Runtime generate() API.
-
-## Current Support
-The tool currently supports the following model architectures.
-
-- Gemma
-- LLaMA
-- Mistral
-- Phi
-
-## Installation
-
-Model builder is available as an [Olive](https://github.com/microsoft/olive) pass. It is also shipped as part of the onnxruntime-genai Python package. You can also download and run it standalone.
-
-In any case, you need to have the following packages installed.
-This scenario is where your PyTorch model is already downloaded locally (either in the default Hugging Face cache directory or in a local folder on disk).
-This scenario is where your PyTorch model has been customized or finetuned for one of the currently supported model architectures and your model can be loaded in Hugging Face.
-This scenario is for when you want to have control over some specific settings. The below example shows how you can pass key-value arguments to `--extra_options`.
-To see all available options through `--extra_options`, please use the `help` commands in the `Full Usage` section above.
-
-### Config Only
-This scenario is for when you already have your optimized and/or quantized ONNX model and you need to create the config files to run with ONNX Runtime generate() API.
-Afterwards, please open the `genai_config.json` file in the output folder and modify the fields as needed for your model. You should store your ONNX model in the output folder as well.
-
-### Unit Testing Models
-This scenario is where your PyTorch model is already downloaded locally (either in the default Hugging Face cache directory or in a local folder on disk). If it is not already downloaded locally, here is an example of how you can download it.
-
-```
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model_name = "your_model_name"
-cache_dir = "cache_dir_to_save_hf_files"
-
-model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir=cache_dir)
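The removed instructions now live in the linked model builder README. For orientation only, here is a hedged sketch of how the builder module is typically invoked from Python; the flag names and example values below should be checked against that README rather than treated as authoritative.

```python
# Sketch: call the model builder as a Python module via subprocess.
# Flag names and values are examples; confirm them against the model builder README.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "onnxruntime_genai.models.builder",
        "-m", "microsoft/Phi-3-mini-4k-instruct",  # example Hugging Face model name
        "-o", "./phi3-int4-cpu",                   # output folder for the generated ONNX model
        "-p", "int4",                              # precision
        "-e", "cpu",                               # execution provider
    ],
    check=True,
)
```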
+description: How to configure the past present share buffer using the ONNX Runtime generate() API
+has_children: false
+parent: How to
+grand_parent: Generate API (Preview)
+nav_order: 6
+---
+
+# How to configure the past present share buffer
+
+The past present share buffer is an optimization that can be used to save memory and processing time.
+
+When buffer sharing is used, the past and present KV cache buffers point to the same memory block.
+
+When buffer sharing is not used, the present KV cache buffers are re-allocated before every forward pass of the model and copied to the past KV cache buffers.
+
+For example, for the [4-bit quantized Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx) model, with a batch size of 1 and a max length of 4k, the size of the cache is: $1 * 8 * 4096 * 128 = 4GB$
+
+Note that the size of the cache is largely determined by the value of the max_length parameter.
+
+### When past_present_share_buffer is false
+
+Size of past KV caches (bytes) = $batch\_size * num\_key\_value\_heads * past\_sequence\_length * head\_size$
+
+For example, for the [4-bit quantized DeepSeek R1 Qwen 1.5B](https://huggingface.co/onnxruntime/DeepSeek-R1-Distill-ONNX) model, with a batch size of 1 and a past sequence length of 1k, the size of the past cache is: $1 * 2 * 1024 * 128 = 256M$ and the size of the present cache is: $1 * 2 * 1025 * 128 = 257M$
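To make the sizing arithmetic above easy to reproduce, here is a small Python helper that evaluates the per-buffer formula quoted in this file. The extra multipliers at the end (layer count, K and V tensors, bytes per element) are general KV-cache assumptions added for context and are not values taken from this diff.

```python
# Sketch: evaluate the KV cache sizing formula quoted above.
# The helper computes batch_size * num_key_value_heads * sequence_length * head_size;
# the multipliers further down are assumptions, not values from this diff.
def kv_cache_elements(batch_size: int, num_key_value_heads: int,
                      sequence_length: int, head_size: int) -> int:
    return batch_size * num_key_value_heads * sequence_length * head_size

# DeepSeek R1 Distill Qwen 1.5B example values from the text above.
past_elements = kv_cache_elements(1, 2, 1024, 128)     # 262,144
present_elements = kv_cache_elements(1, 2, 1025, 128)  # 262,400

# A full memory estimate also scales with the number of layers, the K and V
# tensors per layer, and the cache element size (all assumed values here).
num_layers, kv_tensors, bytes_per_element = 28, 2, 2
total_past_bytes = past_elements * num_layers * kv_tensors * bytes_per_element
print(past_elements, present_elements, total_past_bytes)
```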
docs/genai/index.md (3 additions, 1 deletion)
@@ -13,9 +13,11 @@ Run generative AI models with ONNX Runtime.

 See the source code here: [https://github.com/microsoft/onnxruntime-genai](https://github.com/microsoft/onnxruntime-genai)

-This library provides the generative AI loop for ONNX models, including inference with ONNX Runtime, logits processing, search and sampling, and KV cache management.
+This library provides the generative AI loop for ONNX models, including tokenization and other pre-processing, inference with ONNX Runtime, logits processing, search and sampling, and KV cache management.

 Users can call a high level `generate()` method, or run each iteration of the model in a loop, generating one token at a time, and optionally updating generation parameters inside the loop.

 It has support for greedy/beam search and TopP, TopK sampling to generate token sequences and built-in logits processing like repetition penalties. You can also easily add custom scoring.

+Other supported features include applying chat templates and structured output (for tool calling)
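For context on the loop described in this file, here is a hedged sketch of the token-by-token path using the onnxruntime-genai Python package. The API names follow recent package examples and may differ between versions; the model folder and prompt are placeholders.

```python
# Sketch: run the generation loop one token at a time with onnxruntime-genai.
# API names follow recent onnxruntime-genai examples and may vary by version;
# the model folder and prompt are placeholders.
import onnxruntime_genai as og

model = og.Model("path/to/onnx/model/folder")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=256, top_k=50, top_p=0.9)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is ONNX Runtime?"))

# One token per iteration; search behaviour could be adjusted inside the loop.
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```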