
Commit a7111ed

[Doc]: onnxruntime (open-mmlab#131)
* add ort doc
* update
* update
* update
1 parent 4c1f62f commit a7111ed

File tree

3 files changed: +219 -4 lines changed


docs/backends/ncnn.md

+7 -3
@@ -24,6 +24,7 @@ You should ensure your gcc satisfies `gcc >= 6`.
```bash
cmake -DNCNN_VULKAN=ON -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_PYTHON=ON -DNCNN_BUILD_TOOLS=ON -DNCNN_BUILD_BENCHMARK=ON -DNCNN_BUILD_TESTS=ON ..
make install
```

- Install pyncnn module

```bash
cd ncnn/python
```
@@ -42,24 +43,27 @@ cmake -DBUILD_NCNN_OPS=ON ..
```bash
make -j$(nproc)
```

If you haven't installed NCNN in the default path, please add the `-DNCNN_DIR` flag when running cmake:

```bash
cmake -DBUILD_NCNN_OPS=ON -DNCNN_DIR=${NCNN_DIR} ..
make -j$(nproc)
```

### Convert model

- This follows the tutorial on [How to convert model](../tutorials/how_to_convert_model.md).
- The converted model has two files: `.param` and `.bin`, which are the model structure file and the weight file respectively.
### FAQs

1. When running ncnn models for inference with custom ops, it fails and shows an error message like:

```bash
TypeError: register_mm_custom_layers(): incompatible function arguments. The following argument types are supported:
    1. (arg0: ncnn.Net) -> int

Invoked with: <ncnn.ncnn.Net object at 0x7f7fc4038bb0>
```

This is because of the failure to bind the ncnn C++ library to pyncnn. You should build pyncnn from the C++ ncnn source code, not install it with `pip install`.
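After building from source, a quick import check confirms the binding (a minimal sketch; it assumes the build above put the `ncnn` module on your Python path):

```python
# Sanity check that pyncnn is backed by the C++ library: the custom-layer
# registration above expects an instance of the C++-bound ncnn.Net.
import ncnn

net = ncnn.Net()
print(type(net))  # expect something like <class 'ncnn.ncnn.Net'>
```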

docs/backends/onnxruntime.md

+76
@@ -1,3 +1,79 @@
## ONNX Runtime Support

### Introduction of ONNX Runtime

**ONNX Runtime** is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Check its [github](https://github.com/microsoft/onnxruntime) for more information.

### Installation

*Please note that only the CPU version of **onnxruntime>=1.8.1** on the Linux platform is supported for now.*

- Install the ONNX Runtime python package:

```bash
pip install onnxruntime==1.8.1
```
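A quick way to confirm the installed wheel matches the requirement above (a minimal sketch):

```python
# Verify the ONNX Runtime version and that it is the CPU build.
import onnxruntime as ort

print(ort.__version__)   # expect 1.8.1
print(ort.get_device())  # expect 'CPU' for the CPU-only wheel
```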

### Build custom ops

#### Prerequisite

- Download `onnxruntime-linux` from ONNX Runtime [releases](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1), extract it, expose `ONNXRUNTIME_DIR`, and finally add the lib path to `LD_LIBRARY_PATH` as below:

```bash
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz

tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```

#### Build on Linux

```bash
cd ${MMDEPLOY_DIR}  # To MMDeploy root directory
mkdir build
cd build
cmake -DBUILD_ONNXRUNTIME_OPS=ON -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j10
```

### How to convert a model

- You can follow the instructions in the tutorial [How to convert model](../tutorials/how_to_convert_model.md). A sketch of running the converted model with the custom ops loaded follows below.
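A minimal sketch of running the converted model in ONNX Runtime, assuming the converted file is `end2end.onnx` and the ops library built above sits under `${MMDEPLOY_DIR}/build` (both file names are assumptions; use your actual paths):

```python
# Run a converted model in ONNX Runtime with the custom ops loaded.
import numpy as np
import onnxruntime as ort

session_options = ort.SessionOptions()
# Register the custom-op shared library built in the previous section
# (the library file name is an assumption; check your build directory).
session_options.register_custom_ops_library('build/libmmdeploy_onnxruntime_ops.so')

sess = ort.InferenceSession('end2end.onnx', session_options)
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example NCHW input
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```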

### List of supported custom ops

| Operator                                                                      | CPU   | GPU   | MMDeploy Releases |
| :---------------------------------------------------------------------------: | :---: | :---: | :---------------: |
| [RoIAlign](../ops/onnxruntime.md#roialign)                                    | Y     | N     | master            |
| [grid_sampler](../ops/onnxruntime.md#grid_sampler)                            | Y     | N     | master            |
| [MMCVModulatedDeformConv2d](../ops/onnxruntime.md#mmcvmodulateddeformconv2d)  | Y     | N     | master            |

### How to add a new custom op

#### Reminder

- The custom operator is not included in the [supported operator list](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md) of ONNX Runtime.
- The custom operator should be able to be exported to ONNX; see the symbolic-function sketch below.
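For the second point, the usual way to make a custom op exportable is to register a symbolic function that emits the custom-domain ONNX node the runtime kernel will match. A sketch with `roi_align`, where the domain/op names and the source op `mmcv::roi_align` are illustrative assumptions:

```python
# Register a symbolic so torch.onnx.export emits a custom-domain node
# ("mmcv::RoIAlign" here) whose name and attributes match the kernel spec.
import torch
from torch.onnx.symbolic_helper import parse_args

@parse_args('v', 'v', 'i', 'i', 'f', 'i', 's', 'i')
def roi_align_symbolic(g, input, rois, output_height, output_width,
                       spatial_scale, sampling_ratio, mode, aligned):
    # Attribute suffixes declare types: _i -> int, _f -> float, _s -> string.
    return g.op('mmcv::RoIAlign', input, rois,
                output_height_i=output_height, output_width_i=output_width,
                spatial_scale_f=spatial_scale, sampling_ratio_i=sampling_ratio,
                mode_s=mode, aligned_i=aligned)

# 'mmcv::roi_align' is the torch-side op being exported (an assumption).
torch.onnx.register_custom_op_symbolic('mmcv::roi_align', roi_align_symbolic, 11)
```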

#### Main procedures

Take the custom operator `roi_align` as an example.

1. Create a `roi_align` directory in the ONNX Runtime source directory `backend_ops/onnxruntime/`.
2. Add header and source files into the `roi_align` directory `backend_ops/onnxruntime/roi_align/`.
3. Add unit tests into `tests/test_ops/test_ops.py`. Check [here](../../tests/test_ops/test_ops.py) for examples.

**Finally, welcome to send us a PR adding custom operators for ONNX Runtime in MMDeploy.** :nerd_face:

### FAQs

- None

### References

- [How to export Pytorch model with custom op to ONNX and run it in ONNX Runtime](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
- [How to add a custom operator/kernel in ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/docs/AddingCustomOp.md)

docs/ops/onnxruntime.md

+136 -1
@@ -1,3 +1,138 @@
## ONNX Runtime Ops

<!-- TOC -->

- [ONNX Runtime Ops](#onnx-runtime-ops)
  - [RoIAlign](#roialign)
    - [Description](#description)
    - [Parameters](#parameters)
    - [Inputs](#inputs)
    - [Outputs](#outputs)
    - [Type Constraints](#type-constraints)
  - [grid_sampler](#grid_sampler)
    - [Description](#description-1)
    - [Parameters](#parameters-1)
    - [Inputs](#inputs-1)
    - [Outputs](#outputs-1)
    - [Type Constraints](#type-constraints-1)
  - [MMCVModulatedDeformConv2d](#mmcvmodulateddeformconv2d)
    - [Description](#description-2)
    - [Parameters](#parameters-2)
    - [Inputs](#inputs-2)
    - [Outputs](#outputs-2)
    - [Type Constraints](#type-constraints-2)

<!-- TOC -->

### RoIAlign

#### Description

Perform RoIAlign on the output feature map; used in the bbox_head of most two-stage detectors.

#### Parameters

| Type    | Parameter        | Description                                                                                                    |
| ------- | ---------------- | -------------------------------------------------------------------------------------------------------------- |
| `int`   | `output_height`  | height of output roi                                                                                            |
| `int`   | `output_width`   | width of output roi                                                                                             |
| `float` | `spatial_scale`  | used to scale the input boxes                                                                                   |
| `int`   | `sampling_ratio` | number of input samples to take for each output sample. `0` means to take samples densely for current models.   |
| `str`   | `mode`           | pooling mode in each bin. `avg` or `max`                                                                        |
| `int`   | `aligned`        | If `aligned=0`, use the legacy implementation in MMDetection. Else, align the results more perfectly.           |

#### Inputs

<dl>
<dt><tt>input</tt>: T</dt>
<dd>Input feature map; 4-D tensor of shape (N, C, H, W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data.</dd>
<dt><tt>rois</tt>: T</dt>
<dd>RoIs (Regions of Interest) to pool over; 2-D tensor of shape (num_rois, 5) given as [[batch_index, x1, y1, x2, y2], ...]. The RoIs' coordinates are in the coordinate system of the input.</dd>
</dl>

#### Outputs

<dl>
<dt><tt>feat</tt>: T</dt>
<dd>RoI-pooled output; 4-D tensor of shape (num_rois, C, output_height, output_width). The r-th batch element feat[r-1] is a pooled feature map corresponding to the r-th RoI RoIs[r-1].</dd>
</dl>

#### Type Constraints

- T:tensor(float32)
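As a concrete reading of this spec, a graph node carrying these attributes can be built with the `onnx` helper API (a sketch; the custom domain name `mmcv` is an assumption, use whatever domain your exporter emits):

```python
# Build a RoIAlign node whose attributes mirror the parameter table above.
from onnx import helper

node = helper.make_node(
    'RoIAlign',
    inputs=['input', 'rois'],  # T: (N, C, H, W) and (num_rois, 5)
    outputs=['feat'],          # T: (num_rois, C, output_height, output_width)
    domain='mmcv',             # assumption: the custom-op domain
    output_height=7,
    output_width=7,
    spatial_scale=0.25,
    sampling_ratio=0,
    mode='avg',
    aligned=1,
)
print(node)
```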

### grid_sampler

#### Description

Perform sampling from `input` with pixel locations from `grid`.

#### Parameters

| Type  | Parameter            | Description                                                                                                                                                                                                                                                                                      |
| ----- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `int` | `interpolation_mode` | Interpolation mode to calculate output values. (0: `bilinear`, 1: `nearest`)                                                                                                                                                                                                                      |
| `int` | `padding_mode`       | Padding mode for outside grid values. (0: `zeros`, 1: `border`, 2: `reflection`)                                                                                                                                                                                                                  |
| `int` | `align_corners`      | If `align_corners=1`, the extrema (`-1` and `1`) are considered as referring to the center points of the input's corner pixels. If `align_corners=0`, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic.   |

#### Inputs

<dl>
<dt><tt>input</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
<dt><tt>grid</tt>: T</dt>
<dd>Input offset; 4-D tensor of shape (N, outH, outW, 2), where outH and outW are the height and width of the offset and output.</dd>
</dl>

#### Outputs

<dl>
<dt><tt>output</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, C, outH, outW).</dd>
</dl>

#### Type Constraints

- T:tensor(float32, Linear)
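The attributes map one-to-one onto `torch.nn.functional.grid_sample`, which can serve as a reference implementation when testing the op (a minimal sketch):

```python
# Reference computation: the integer attributes above correspond to the
# string flags of torch.nn.functional.grid_sample.
import torch
import torch.nn.functional as F

input = torch.rand(1, 3, 10, 10)        # (N, C, inH, inW)
grid = torch.rand(1, 5, 5, 2) * 2 - 1   # (N, outH, outW, 2), values in [-1, 1]
output = F.grid_sample(
    input, grid,
    mode='bilinear',        # interpolation_mode=0
    padding_mode='zeros',   # padding_mode=0
    align_corners=False)    # align_corners=0
print(output.shape)         # torch.Size([1, 3, 5, 5])
```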

### MMCVModulatedDeformConv2d

#### Description

Perform Modulated Deformable Convolution on the input feature; read [Deformable ConvNets v2: More Deformable, Better Results](https://arxiv.org/abs/1811.11168?from=timeline) for detail.

#### Parameters

| Type           | Parameter           | Description                                                                             |
| -------------- | ------------------- | --------------------------------------------------------------------------------------- |
| `list of ints` | `stride`            | The stride of the convolving kernel. (sH, sW)                                            |
| `list of ints` | `padding`           | Paddings on both sides of the input. (padH, padW)                                        |
| `list of ints` | `dilation`          | The spacing between kernel elements. (dH, dW)                                            |
| `int`          | `deformable_groups` | Groups of deformable offset.                                                             |
| `int`          | `groups`            | Split input into groups. `input_channel` should be divisible by the number of groups.    |

#### Inputs

<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>Input offset; 4-D tensor of shape (N, deformable_group * 2 * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[2]</tt>: T</dt>
<dd>Input mask; 4-D tensor of shape (N, deformable_group * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[3]</tt>: T</dt>
<dd>Input weight; 4-D tensor of shape (output_channel, input_channel, kH, kW).</dd>
<dt><tt>inputs[4]</tt>: T, optional</dt>
<dd>Input bias; 1-D tensor of shape (output_channel,).</dd>
</dl>

#### Outputs

<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, output_channel, outH, outW).</dd>
</dl>

#### Type Constraints

- T:tensor(float32, Linear)
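The offset/mask/output shapes above follow standard convolution arithmetic; a small sketch that computes them from the parameters:

```python
# Compute the shapes this op expects, from the parameter table above.
def mdcn_shapes(N, in_channels, inH, inW, out_channels, kH, kW,
                stride=(1, 1), padding=(0, 0), dilation=(1, 1),
                deformable_groups=1):
    (sH, sW), (padH, padW), (dH, dW) = stride, padding, dilation
    outH = (inH + 2 * padH - dH * (kH - 1) - 1) // sH + 1
    outW = (inW + 2 * padW - dW * (kW - 1) - 1) // sW + 1
    return {
        'inputs[1] (offset)': (N, deformable_groups * 2 * kH * kW, outH, outW),
        'inputs[2] (mask)': (N, deformable_groups * kH * kW, outH, outW),
        'outputs[0]': (N, out_channels, outH, outW),
    }

# Example: a 3x3 kernel with padding 1 keeps the spatial size.
print(mdcn_shapes(1, 16, 32, 32, out_channels=32, kH=3, kW=3, padding=(1, 1)))
```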
