1 file changed, +28 -1 lines changed

@@ -104,7 +104,6 @@ python demo.py --input_res 512 --arch resdcn_101 ctdet --demo /path/to/image/or/
python demo.py --input_res 512 --arch dla_34 ctdet --demo /path/to/image/or/folder/or/video/or/webcam --load_model ../models/ctdet_coco_dla_2x.pth --exp_wo --exp_wo_dim 512
```
### 4) Export weights for MobileNetSSD
-
To get the weights needed to run the MobileNet tests, use [this](https://github.com/mive93/pytorch-ssd) fork of a PyTorch implementation of the SSD network.
```
@@ -113,6 +112,34 @@ cd pytorch-ssd
conda env create -f env_mobv2ssd.yml
python run_ssd_live_demo.py mb2-ssd-lite <pth-model-file> <labels-file>
```
+
+ ## Darknet Parser
+ tkDNN implements an easy parser for darknet cfg files; a network can be converted with *tk::dnn::darknetParser*:
+ ```
+ // example of parsing yolo4
+ tk::dnn::Network *net = tk::dnn::darknetParser("yolov4.cfg", "yolov4/layers", "coco.names");
+ net->print();
+ ```
+ All models from darknet are now parsed directly from the cfg file; you still need to export the weights with the tools described in the previous section.
+ <details>
+ <summary>Supported layers</summary>
+ convolutional
+ maxpool
+ avgpool
+ shortcut
+ upsample
+ route
+ reorg
+ region
+ yolo
+ </details>
+ <details>
+ <summary>Supported activations</summary>
+ relu
+ leaky
+ mish
+ </details>
+
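Once the cfg has been parsed and the layer weights exported, the usual next step in tkDNN is to build a TensorRT engine from the parsed network, as the test programs in this repository do. The following is only a sketch: the *tk::dnn::NetworkRT* constructor, the *getNetworkRTName* helper, and the header name are assumptions modeled on those tests, so check the actual test sources for the exact calls.

```
// Sketch only: parse a darknet cfg and serialize the network to a TensorRT engine.
// NetworkRT, getNetworkRTName and the header name are assumed from the test programs.
#include "tkdnn.h"

int main() {
    // parse yolov4; "yolov4/layers" holds the weights exported as described above
    tk::dnn::Network *net = tk::dnn::darknetParser("yolov4.cfg", "yolov4/layers", "coco.names");
    net->print();

    // assumed call: build the TensorRT engine and serialize it to an .rt file
    tk::dnn::NetworkRT *netRT = new tk::dnn::NetworkRT(net, net->getNetworkRTName("yolo4"));

    delete netRT;
    delete net;
    return 0;
}
```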
## Run the demo
To run an object detection demo, follow these steps (example with yolov3):