Commit b9eb266

rahul-tuli and bfineran authored

Yolact Doc Updates (#482)

* Add: `yaml` to `md`
* Update: export command

Co-authored-by: Benjamin Fineran <[email protected]>
1 parent 2661015 commit b9eb266

File tree

1 file changed: +2 -3 lines changed

integrations/dbolya-yolact/tutorials/sparsifying_yolact_using_recipes.md

@@ -175,7 +175,6 @@ The table below compares these tradeoffs and shows how to run them on the COCO d
 | Baseline | The baseline, pretrained model on the COCO dataset. | 0.288 | 170 MB | -- img/sec | `python train.py` |
 | Pruned | A highly sparse, FP32 model that recovers close to the baseline model. | 0.286 | 30.1 MB | -- img/sec | `python train.py --resume weights/model.pth --recipe ../recipe/yolact.pruned.md` |
 | Pruned Quantized | A highly sparse, INT8 model that recovers reasonably close to the baseline model. | 0.282 | 9.7 MB | -- img/sec | `python train.py --resume weights/model.pth --recipe ../recipe/yolact.pruned_quant.md` |
-** DeepSparse Performance measured on an AWS C5 instance with 24 cores, batch size 64, and 550 x 550 input with version 1.6 of the DeepSparse Engine.

 2. Select a recipe to use on top of the pre-trained model you created.
@@ -192,7 +191,7 @@ The table below compares these tradeoffs and shows how to run them on the COCO d
 The recipe argument is combined with our previous training command and COCO pre-trained weights to run the recipes over the model. For example, a command for pruning YOLACT would look like this:
 ```bash
 python train.py \
-  --recipe=../recipes/yolact.pruned.yaml \
+  --recipe=../recipes/yolact.pruned.md \
   --resume=zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/base-none \
   --save_folder=./pruned
 ```
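The `.yaml` to `.md` rename reflects that SparseML-style recipes can be distributed as markdown files whose machine-readable YAML sits in a front-matter block between `---` markers, with human-readable prose below. A minimal stdlib sketch of pulling that YAML block out of such a file (the recipe text here is a made-up illustration, not the actual `yolact.pruned.md`):

```python
# Sketch: extract the YAML front matter from a markdown recipe file.
# The recipe contents below are illustrative only, not the real yolact.pruned.md.
recipe_md = """\
---
num_epochs: 10
pruning_start_epoch: 1
---

# YOLACT Pruning Recipe

Prose describing the recipe lives below the front matter.
"""

def extract_front_matter(text: str) -> str:
    """Return the YAML between the first pair of '---' markers."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("no YAML front matter found")
    end = lines.index("---", 1)  # locate the closing marker
    return "\n".join(lines[1:end])

print(extract_front_matter(recipe_md))
```

Because the YAML survives intact inside the markdown, tooling that reads the recipe and documentation that explains it can live in the same file.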
@@ -238,7 +237,7 @@ The [`export.py` script](https://github.com/neuralmagic/yolact/blob/master/expor
 1. Enter the following command to load the PyTorch graph, convert to ONNX, and correct any misformatted pieces of the graph for the pruned and quantized models.

 ```bash
-python export.py --weights PATH_TO_SPARSIFIED_WEIGHTS
+python export.py --checkpoint PATH_TO_SPARSIFIED_WEIGHTS
 ```

 The result is a new file added next to the sparsified checkpoint with a `.onnx` extension:
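The second fix renames the export flag from `--weights` to `--checkpoint`. A hypothetical `argparse` sketch of the corrected interface (not the actual parser in `neuralmagic/yolact`'s `export.py`, which defines many more options):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical sketch of the renamed export flag."""
    parser = argparse.ArgumentParser(
        description="Export a sparsified YOLACT checkpoint to ONNX"
    )
    parser.add_argument(
        "--checkpoint",  # renamed from --weights in this commit
        required=True,
        help="path to the sparsified .pth checkpoint to export",
    )
    return parser

# Parse the corrected command-line shape from the tutorial.
args = build_parser().parse_args(["--checkpoint", "weights/model.pth"])
print(args.checkpoint)
```

With `required=True`, invoking the sketch with the old `--weights` flag fails fast with a usage error, which matches why the tutorial text had to be updated.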
