Commit fddf6b0

yolact readme fixes (#479)
1 parent 30d0721 commit fddf6b0

2 files changed: +3 -10 lines changed

integrations/dbolya-yolact/README.md

+2 -9
@@ -35,7 +35,6 @@ The techniques include, but are not limited to:
 We recommend using a [virtual environment](https://docs.python.org/3/library/venv.html) to keep your project dependencies isolated.
 A virtual environment can be created using the following commands:
 
-Note: This integration needs `python>=3.7,<3.10` for it's requirements
 ```bash
 python3 -m venv venv # create a venv virtual environment
 source venv/bin/activate # activate venv
@@ -49,6 +48,8 @@ bash setup_integration.sh
 
 The `setup_integration.sh` file will clone the yolact-sparseml integration repository. After the repo has successfully cloned, all dependencies from the `yolact/requirements.txt` file will install in your current environment.
 
+Note: This integration requires `python>=3.7,<3.10`
+
 
 ## Quick Tour
 
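For context on the relocated Python-version note, here is a minimal sketch of the setup flow it documents, assuming the virtual environment from the earlier hunk is active and that the commands run from `integrations/dbolya-yolact/` (the directory layout is assumed from the file paths in this commit):

```bash
# Check that the active interpreter satisfies the python>=3.7,<3.10 requirement
python3 --version

# Clone the yolact fork and install yolact/requirements.txt into this environment
bash setup_integration.sh

# Sanity check: the cloned repo's training script should now exist (assumed clone location)
ls yolact/train.py
```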

@@ -101,14 +102,6 @@ python export.py --checkpoint ./quantized-yolact/model.pth \
     --name quantized-yolact.onnx
 ```
 
-To prevent the conversion of a QAT (Quantization-Aware Training) Graph to a
-Quantized Graph, pass in the `--no-qat` flag:
-
-```bash
-python export.py --checkpoint ./quantized-yolact/model.pth \
-    --name qat-yolact.onnx \
-    --skip-qat-convert
-```
 
 ### DeepSparse
 
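The `### DeepSparse` heading visible in the context above is where the exported ONNX file is consumed. As an illustrative, hedged sketch only (the entry point and flags depend on the installed DeepSparse version; consult that README section for the exact command):

```bash
# Benchmark the exported model with DeepSparse (illustrative; verify flags for your version)
pip install deepsparse
deepsparse.benchmark quantized-yolact.onnx
```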

integrations/dbolya-yolact/tutorials/sparsifying_yolact_using_recipes.md

+1 -1
@@ -189,7 +189,7 @@ The table below compares these tradeoffs and shows how to run them on the COCO dataset
 - When running quantized models, the memory footprint for training will significantly increase (roughly 3x). It is recommended to train at a high batch size at first. This will fail with an out-of-memory exception once quantization starts. Once this happens, use the weights from that run to resume training with lower batch size.
 
 3. To begin applying one of the recipes, use the `--recipe` argument within the YOLACT [train script](https://github.com/neuralmagic/yolact/blob/master/train.py).
-The recipe argument is combined with our previous training command and COCO pre-trained weights to run the recipes over the model. For example, a command for YOLACT would look like this:
+The recipe argument is combined with our previous training command and COCO pre-trained weights to run the recipes over the model. For example, a command for pruning YOLACT would look like this:
 ```bash
 python train.py \
     --recipe=../recipes/yolact.pruned.yaml \
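The bullet in this hunk recommends restarting from the failed run's weights at a lower batch size once quantization triggers an out-of-memory error. A minimal sketch of that restart, assuming the standard YOLACT `--resume` and `--batch_size` arguments; the recipe and checkpoint paths are illustrative and should be taken from the interrupted run:

```bash
# Resume the interrupted run at a smaller batch size (paths are illustrative)
python train.py \
    --recipe=../recipes/yolact.pruned.yaml \
    --resume=weights/yolact_interrupted.pth \
    --batch_size=4
```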
