# Export and Optimize Geti Model

## Overview

This guide walks through converting a trained YOLOX model from Intel Geti to OpenVINO IR format. You first download the trained PyTorch weights and the COCO dataset used during training from Intel Geti, set up a workspace, and clone the [Training Extensions](https://github.com/open-edge-platform/training_extensions) repository, which provides the conversion script. After installing the required Python and Rust dependencies, you run the `export_and_optimize.py` script to produce two OpenVINO IR models: a full-precision FP32 model and an INT8 post-training quantized model optimized for Intel hardware.

---

## Prerequisites

Before you begin, ensure you have the following:

- A trained model exported from Intel Geti as a **PyTorch weights file** (`.pth`)

  

  *Note: Image is for illustration purposes only.*

- A **COCO-format dataset** (`.zip`) used during training (required for post-training optimization)

  

  

  *Note: Images are for illustration purposes only.*

- [Git](https://git-scm.com/) installed
- Internet access to download dependencies

---

## Step 1: Set Up the Workspace

Create the working directory structure:

```bash
mkdir generate_model
cd generate_model

mkdir model
mkdir coco_dataset
mkdir output
```

| Directory       | Purpose                                       |
|-----------------|-----------------------------------------------|
| `model/`        | Stores the downloaded PyTorch weights file    |
| `coco_dataset/` | Stores the COCO dataset used for optimization |
| `output/`       | Stores the exported and optimized model files |
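
The three `mkdir` calls above can also be collapsed into a single `mkdir -p` invocation, which creates the `generate_model` parent in the same step:

```bash
# -p creates missing parent directories and is a no-op for ones that
# already exist, so the command is safe to re-run.
mkdir -p generate_model/model generate_model/coco_dataset generate_model/output
```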

---

## Step 2: Add Model Weights and Dataset

### Copy and Extract the PyTorch Model

Place the downloaded `Pytorch_model.zip` file into the `model/` directory and extract it:

```bash
# Copy Pytorch_model.zip into the model directory, then unzip
cp /path/to/Pytorch_model.zip model/
cd model
unzip Pytorch_model.zip
cd ..
```

After extraction, the `model/` directory should contain a `weights.pth` file.

### Copy and Extract the COCO Dataset

Place the downloaded COCO dataset archive into the `coco_dataset/` directory and extract it:

```bash
# Copy the COCO dataset zip into the coco_dataset directory, then unzip
cp /path/to/<coco_dataset>.zip coco_dataset/
cd coco_dataset
unzip <coco_dataset>.zip
cd ..
```

After extraction, the `coco_dataset/` directory should follow the standard COCO layout:

```
coco_dataset/
├── annotations/
└── images/
```
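
Before moving on, you can sanity-check the extracted layout with a small shell helper. The folder names below follow the standard layout shown above; adjust them if your export nests the folders differently:

```bash
check_coco_layout() {
  # Verify the two top-level folders the optimization step expects.
  for d in annotations images; do
    [ -d "$1/$d" ] || { echo "missing: $1/$d" >&2; return 1; }
  done
  echo "layout ok: $1"
}

# `|| true` reports a failed check without aborting the current shell.
check_coco_layout coco_dataset || true
```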

---

## Step 3: Clone the Training Extensions Repository

```bash
git clone https://github.com/open-edge-platform/training_extensions.git
```

---

## Step 4: Install Dependencies

### Install `uv` (Python Package Manager)

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
```

### Install the Rust Toolchain (required by some dependencies)

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
```

---

## Step 5: Set Up the Python Environment

Navigate to the `library` directory within the cloned repository and check out the required branch:

```bash
cd training_extensions/library
git checkout kp/test_yolox
```

Create and activate a virtual environment, then sync all dependencies:

```bash
uv venv
source .venv/bin/activate
source "$HOME/.cargo/env"
uv sync
```
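
If a later step complains about missing packages, first confirm the virtual environment is actually active. The `activate` script sets the `VIRTUAL_ENV` variable and prepends the environment's `bin/` directory to `PATH`, so an empty value below means activation did not happen in the current shell:

```bash
# Prints the active environment's path, or an empty value if no
# virtual environment is active in this shell.
echo "VIRTUAL_ENV=${VIRTUAL_ENV:-}"
```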

---

## Step 6: Export and Optimize the Model

Run the `export_and_optimize.py` script with the appropriate paths and model configuration:

```bash
python export_and_optimize.py \
  --weights /path/to/model/weights.pth \
  --source_dataset /path/to/coco_dataset \
  --output_dir /path/to/output \
  --model_name yolox_tiny
```

### Arguments

| Argument           | Required | Description                                                  |
|--------------------|----------|--------------------------------------------------------------|
| `--weights`        | Yes      | Path to the PyTorch weights file (`.pth`)                    |
| `--source_dataset` | Yes      | Path to the COCO dataset directory                           |
| `--output_dir`     | Yes      | Directory where exported and optimized model files are saved |
| `--model_name`     | No       | Model variant to use: `yolox_tiny`, `yolox_s`, `yolox_l`, or `yolox_x` (default: `yolox_tiny`) |

### Example with Absolute Paths

Assuming the workspace is located at `~/generate_model`:

```bash
python export_and_optimize.py \
  --weights ~/generate_model/model/weights.pth \
  --source_dataset ~/generate_model/coco_dataset \
  --output_dir ~/generate_model/output \
  --model_name yolox_tiny
```

---

## Output

After the script completes, the `output/` directory will contain the exported and optimized model files ready for deployment in the Pallet Defect Detection pipeline:

```
output/
└── otx-workspace/
    ├── exported_model.xml   # FP32 – full-precision exported model
    └── optimized_model.xml  # INT8 – post-training quantized model
```

| File                  | Precision | Description                                                     |
|-----------------------|-----------|-----------------------------------------------------------------|
| `exported_model.xml`  | FP32      | Full-precision model exported directly from the PyTorch weights |
| `optimized_model.xml` | INT8      | Post-training quantized model optimized using the COCO dataset  |

Both files can be used directly with the OpenVINO inference engine. The INT8 model (`optimized_model.xml`) offers faster inference with a reduced memory footprint, while the FP32 model (`exported_model.xml`) retains full numerical precision.
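
Note that each `.xml` file is only half of an OpenVINO IR model: it holds the network topology, while the weights live in a companion `.bin` file with the same base name, and the two must stay together for inference. A quick completeness check (paths taken from the listing above) might look like:

```bash
check_ir_pair() {
  # An IR model is usable only when its .xml and .bin sit side by side.
  for ext in xml bin; do
    [ -f "$1.$ext" ] || { echo "missing: $1.$ext" >&2; return 1; }
  done
  echo "ir ok: $1"
}

# `|| true` reports a missing pair without aborting the current shell.
check_ir_pair output/otx-workspace/exported_model || true
check_ir_pair output/otx-workspace/optimized_model || true
```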



*Note: Image is for illustration purposes only.*

---

## Troubleshooting

| Issue                   | Resolution                                                                 |
|-------------------------|----------------------------------------------------------------------------|
| `uv: command not found` | Re-run `source $HOME/.local/bin/env` or open a new terminal session        |
| Rust compilation errors | Ensure `source "$HOME/.cargo/env"` was run after the Rust installation     |
| Dataset not found       | Verify the COCO dataset was extracted and the `annotations/` folder exists |
| Incorrect model output  | Confirm `--model_name` matches the architecture used during Geti training  |