4 changes: 2 additions & 2 deletions docs/build/web.md
Original file line number Diff line number Diff line change
Expand Up @@ -5,7 +5,7 @@ description: Learn how to build ONNX Runtime from source to deploy on the web
nav_order: 4
redirect_from: /docs/how-to/build/web
---

# Build ONNX Runtime for Web
{: .no_toc }

Expand Down Expand Up @@ -168,7 +168,7 @@ This is the last stage in the build process, please follow the sections in a seq

- Download artifacts from pipeline manually.

you can download prebuilt WebAssembly artifacts from [Windows WebAssembly CI Pipeline](https://dev.azure.com/onnxruntime/onnxruntime/_build?definitionId=161&_a=summary). Select a build, download artifact "Release_wasm" and unzip. See instructions below to put files into destination folders.
you can download prebuilt WebAssembly artifacts from [Windows WebAssembly CI Pipeline](https://github.com/microsoft/onnxruntime/actions/workflows/web.yml). Select a build, download artifact "Release_wasm" and unzip. See instructions below to put files into destination folders.

- Build WebAssembly artifacts.

Expand Down
2 changes: 1 addition & 1 deletion docs/execution-providers/TensorRTRTX-ExecutionProvider.md
Expand Up @@ -29,7 +29,7 @@ Currently TensorRT RTX supports RTX GPUs from Ampere or later architectures. Sup
Please select the Nvidia TensorRT RTX version of Onnx Runtime: https://onnxruntime.ai/docs/install. (TODO!)

## Build from source
See [Build instructions](../build/eps.md#TensorRT-RTX).
See [Build instructions](../build/eps.md#tensorrt-rtx).

## Requirements

Expand Down
5 changes: 2 additions & 3 deletions docs/extensions/index.md
Expand Up @@ -4,11 +4,10 @@ has_children: true
nav_order: 7
---

# ONNXRuntime-Extensions
# ONNX Runtime Extensions

[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status%2Fmicrosoft.onnxruntime-extensions?branchName=main)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=209&branchName=main)

ONNXRuntime-Extensions is a library that extends the capability of the ONNX models and inference with ONNX Runtime, via the ONNX Runtime custom operator interface. It includes a set of Custom Operators to support common model pre and post-processing for audio, vision, text, and language models. As with ONNX Runtime, Extensions also supports multiple languages and platforms (Python on Windows/Linux/macOS, Android and iOS mobile platforms and Web assembly for web).
ONNX Runtime Extensions is a library that extends the capability of the ONNX models and inference with ONNX Runtime, via the ONNX Runtime custom operator interface. It includes a set of Custom Operators to support common model pre and post-processing for audio, vision, text, and language models. As with ONNX Runtime, Extensions also supports multiple languages and platforms (Python on Windows/Linux/macOS, Android and iOS mobile platforms and Web assembly for web).

The basic workflow is to add the custom operators to an ONNX model and then to perform inference on the enhanced model with ONNX Runtime and ONNXRuntime-Extensions packages.

Expand Down