integrations/dbolya-yolact/tutorials/sparsifying_yolact_using_recipes.md (+2 −3)
@@ -175,7 +175,6 @@ The table below compares these tradeoffs and shows how to run them on the COCO d
 | Baseline | The baseline, pretrained model on the COCO dataset. | 0.288 | 170 MB | -- img/sec | `python train.py` |
 | Pruned | A highly sparse, FP32 model that recovers close to the baseline model. | 0.286 | 30.1 MB | -- img/sec | `python train.py --resume weights/model.pth --recipe ../recipe/yolact.pruned.md` |
 | Pruned Quantized | A highly sparse, INT8 model that recovers reasonably close to the baseline model. | 0.282 | 9.7 MB | -- img/sec | `python train.py --resume weights/model.pth --recipe ../recipe/yolact.pruned_quant.md` |
-** DeepSparse Performance measured on an AWS C5 instance with 24 cores, batch size 64, and 550 x 550 input with version 1.6 of the DeepSparse Engine.

 2. Select a recipe to use on top of the pre-trained model you created.
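The size column in the table above implies a substantial compression from sparsification. As a quick back-of-the-envelope check (a hypothetical snippet, using only the file sizes quoted in the table):

```python
# File sizes (MB) taken from the tradeoff table above.
sizes_mb = {"Baseline": 170.0, "Pruned": 30.1, "Pruned Quantized": 9.7}

baseline = sizes_mb["Baseline"]
for name, size in sizes_mb.items():
    # Ratio of the dense FP32 baseline to each sparsified model.
    print(f"{name}: {baseline / size:.1f}x smaller than baseline")
```

Pruning alone yields roughly a 5.6x smaller file, and adding INT8 quantization pushes that to about 17.5x.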
@@ -192,7 +191,7 @@ The table below compares these tradeoffs and shows how to run them on the COCO d
 The recipe argument is combined with our previous training command and COCO pre-trained weights to run the recipes over the model. For example, a command for pruning YOLACT would look like this:
@@ -238,7 +237,7 @@ The [`export.py` script](https://github.com/neuralmagic/yolact/blob/master/expor
 1. Enter the following command to load the PyTorch graph, convert to ONNX, and correct any misformatted pieces of the graph for the pruned and quantized models.