### My platform

* raspberry pi 3b
* 2022-04-04-raspios-bullseye-armhf-lite.img
* cpu: 4 core armv8, memory: 1G


### Install ncnn

Just follow the ncnn official tutorial of [build-for-linux](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux) to install ncnn. The following steps are all carried out on my raspberry pi:

**step 1:** install dependencies
```
$ sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler l
```

**step 2:** (optional) install vulkan

**step 3:** build
I am using commit `6869c81ed3e7170dc0`, and I have not tested other commits.
```
$ git clone https://github.com/Tencent/ncnn.git
$ cd ncnn
$ git reset --hard 6869c81ed3e7170dc0
$ git submodule update --init
$ mkdir -p build
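# NCNN_VULKAN=OFF builds the CPU-only library (turn it ON only if you did step 2);
# ../toolchains/pi3.toolchain.cmake sets the compiler flags for the raspberry pi 3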
$ cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=OFF -DNCNN_BUILD_TOOLS=ON -DCMAKE_TOOLCHAIN_FILE=../toolchains/pi3.toolchain.cmake ..
$ make -j2
$ make install
```

### Convert pytorch model to ncnn model

#### 1. dependencies
```
$ python -m pip install onnx-simplifier
```

#### 2. convert pytorch model to ncnn model via onnx
On your training platform:
```
$ cd BiSeNet/
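# A sketch of the export step, assuming the repo's tools/export_onnx.py;
# the flags below are assumptions and may differ in your checkout:
$ python tools/export_onnx.py --config configs/bisenetv2_city.py --weight-path /path/to/your/model.pth --outpath ./model_v2.onnx
# onnx-simplifier (installed in the dependencies step) shrinks the graph:
$ python -m onnxsim model_v2.onnx model_v2_sim.onnx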
```

Then copy your `model_v2_sim.onnx` from the training platform to the raspberry device.

On raspberry device:
```
$ /path/to/ncnn/build/tools/onnx/onnx2ncnn model_v2_sim.onnx model_v2_sim.param model_v2_sim.bin
```

You can optimize the ncnn model by fusing layers and saving the weights in fp16 datatype; the trailing 65536 flag in the command below selects fp16 weight storage (0 would keep fp32).
On raspberry device:
```
$ /path/to/ncnn/build/tools/ncnnoptimize model_v2_sim.param model_v2_sim.bin model_v2_sim_opt.param model_v2_sim_opt.bin 65536
$ mv model_v2_sim_opt.param model_v2_sim.param
$ mv model_v2_sim_opt.bin model_v2_sim.bin
```

You can also quantize the model for int8 inference, following this [tutorial](https://github.com/Tencent/ncnn/wiki/quantized-int8-inference). Make sure your device supports int8 inference.
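
For reference, the tools used in that tutorial, ncnn2table and ncnn2int8, are built together with ncnn under build/tools/quantize. A minimal sketch, assuming a folder of calibration images; the mean/norm/shape values below are placeholders for your own preprocessing, not verified values for this model:
```
$ find /path/to/calibration/images -name "*.jpg" > imagelist.txt
$ /path/to/ncnn/build/tools/quantize/ncnn2table model_v2_sim.param model_v2_sim.bin imagelist.txt model_v2_sim.table mean=[123.675,116.28,103.53] norm=[0.0171,0.0175,0.0174] shape=[1024,512,3] pixel=BGR thread=2 method=kl
$ /path/to/ncnn/build/tools/quantize/ncnn2int8 model_v2_sim.param model_v2_sim.bin model_v2_sim_int8.param model_v2_sim_int8.bin model_v2_sim.table
```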


### Build and run the demo
#### 1. compile demo code
On raspberry device:
```
$ mkdir -p BiSeNet/ncnn/build
$ cd BiSeNet/ncnn/build
$ cmake .. -DNCNN_ROOT=/path/to/ncnn/build/install
$ make
```

#### 2. run demo
```
./segment
```