
Commit 4960dfa

ajagadi1, athina98, and svamsik authored
Manufacturing and Metro Vision apps: Sync main branch with latest changes from release-2026.0.0 (open-edge-platform#2154)
Co-authored-by: Athina Saha <athina.saha@intel.com>
Co-authored-by: svamsik <vamsi.krishna.sammeta@intel.com>
1 parent fb24a5e commit 4960dfa

File tree

28 files changed: +11124 −23366 lines changed

Lines changed: 202 additions & 0 deletions
@@ -0,0 +1,202 @@

# Export and Optimize Geti Model

## Overview

This guide starts by downloading the trained YOLOX PyTorch weights and the COCO dataset used during training from Intel Geti. You then set up a workspace and clone the [Training Extensions](https://github.com/open-edge-platform/training_extensions) repository, which provides the conversion script. After installing the required Python and Rust dependencies, you run the `export_and_optimize.py` script to convert the model to OpenVINO IR format, producing a full-precision FP32 model and an INT8 post-training quantized model optimized for Intel hardware.

---

## Prerequisites

Before you begin, ensure you have the following:

- A trained model exported from Intel Geti as a **PyTorch weights file** (`.pth`)

  ![Download PyTorch Weights from Intel Geti](../_assets/download_model_pytorch_weights.png)

  *Note: Image is for illustration purposes only.*

- A **COCO-format dataset** (`.zip`) used during training (required for post-training optimization)

  ![Download COCO Dataset - Step 1](../_assets/download_coco_datasets1.png)

  ![Download COCO Dataset - Step 2](../_assets/download_coco_datasets2.png)

  *Note: Images are for illustration purposes only.*

- [Git](https://git-scm.com/) installed
- Internet access to download dependencies

---

## Step 1: Set Up the Workspace

Create the working directory structure:

```bash
mkdir generate_model
cd generate_model

mkdir model
mkdir coco_dataset
mkdir output
```

| Directory       | Purpose                                       |
|-----------------|-----------------------------------------------|
| `model/`        | Stores the downloaded PyTorch weights file    |
| `coco_dataset/` | Stores the COCO dataset used for optimization |
| `output/`       | Stores the exported and optimized model files |
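The three `mkdir` calls above can also be collapsed into a single `mkdir -p` invocation, which additionally succeeds when the directories already exist (handy if you re-run the guide):

```shell
# Create the whole workspace in one command; -p creates parent
# directories and does not fail when a directory already exists.
mkdir -p generate_model/model generate_model/coco_dataset generate_model/output
```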

---

## Step 2: Add Model Weights and Dataset

### Copy and Extract the PyTorch Model

Place the downloaded `Pytorch_model.zip` file into the `model/` directory and extract it:

```bash
# Copy Pytorch_model.zip into the model directory, then unzip
cp /path/to/Pytorch_model.zip model/
cd model
unzip Pytorch_model.zip
cd ..
```

After extraction, the `model/` directory should contain a `weights.pth` file.

### Copy and Extract the COCO Dataset

Place the downloaded COCO dataset archive into the `coco_dataset/` directory and extract it:

```bash
# Copy the COCO dataset zip into the coco_dataset directory, then unzip
cp /path/to/<coco_dataset>.zip coco_dataset/
cd coco_dataset
unzip <coco_dataset>.zip
cd ..
```

After extraction, the `coco_dataset/` directory should follow the standard COCO layout:

```
coco_dataset/
├── annotations/
└── images/
```
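Before moving on, it can be worth sanity-checking the extracted archive against that layout, since the optimization step needs the annotations. A minimal sketch (the `check_coco_layout` helper is illustrative, not part of the guide's tooling):

```shell
# Report whether a directory matches the expected COCO layout.
check_coco_layout() {
  for sub in annotations images; do
    if [ ! -d "$1/$sub" ]; then
      echo "missing: $1/$sub"
      return 1
    fi
  done
  echo "layout OK: $1"
}

# Example: check the extracted dataset (run from the generate_model directory).
check_coco_layout coco_dataset || true
```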

---

## Step 3: Clone the Training Extensions Repository

```bash
git clone https://github.com/open-edge-platform/training_extensions.git
```

---

## Step 4: Install Dependencies

### Install `uv` (Python Package Manager)

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
```

### Install Rust Toolchain (required by some dependencies)

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
```
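After the installers finish, a quick check that the tools are actually on `PATH` can save a confusing failure later; both `uv` and `cargo` install into your home directory, so a fresh shell may need the `source` lines above first. A small sketch (the `need` helper is illustrative):

```shell
# Report any required tool that is not yet on PATH.
need() {
  command -v "$1" >/dev/null 2>&1 || echo "missing: $1 (re-run the source lines above)"
}

need git
need uv
need cargo
```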

---

## Step 5: Set Up the Python Environment

Navigate to the `library` directory within the cloned repository and check out the required branch:

```bash
cd training_extensions/library
git checkout kp/test_yolox
```

Create and activate a virtual environment, then sync all dependencies:

```bash
uv venv
source .venv/bin/activate
source "$HOME/.cargo/env"
uv sync
```
133+
134+
---
135+
136+
## Step 6: Export and Optimize the Model
137+
138+
Run the `export_and_optimize.py` script with the appropriate paths and model configuration:
139+
140+
```bash
141+
python export_and_optimize.py \
142+
--weights /path/to/model/weights.pth \
143+
--source_dataset /path/to/coco_dataset \
144+
--output_dir /path/to/output \
145+
--model_name yolox_tiny
146+
```
147+
148+
### Arguments
149+
150+
| Argument | Required | Description |
151+
|--------------------|----------|----------------------------------------------------------------|
152+
| `--weights` | Yes | Path to the PyTorch weights file (`.pth`) |
153+
| `--source_dataset` | Yes | Path to the COCO dataset directory |
154+
| `--output_dir` | Yes | Directory where exported and optimized model files are saved |
155+
| `--model_name` | Yes | Model variant to use. Supported values: `yolox_tiny`, `yolox_s`, `yolox_l`, `yolox_x` (default: `yolox_tiny`) |
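If you are unsure which variant was used during Geti training, the invocations for several variants can be generated in a loop. This is a dry-run sketch: it only prints the commands (remove the `echo` to execute them), and it gives each variant its own output directory so results do not collide:

```shell
# Print one export command per supported variant (dry run).
for name in yolox_tiny yolox_s yolox_l yolox_x; do
  echo python export_and_optimize.py \
    --weights /path/to/model/weights.pth \
    --source_dataset /path/to/coco_dataset \
    --output_dir "/path/to/output/$name" \
    --model_name "$name"
done
```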

### Example with Absolute Paths

Assuming the workspace is located at `~/generate_model`:

```bash
python export_and_optimize.py \
    --weights ~/generate_model/model/weights.pth \
    --source_dataset ~/generate_model/coco_dataset \
    --output_dir ~/generate_model/output \
    --model_name yolox_tiny
```

---

## Output

After the script completes, the `output/` directory will contain the exported and optimized model files, ready for deployment in the Pallet Defect Detection pipeline:

```
output/
└── otx-workspace/
    ├── exported_model.xml   # FP32 – full-precision exported model
    └── optimized_model.xml  # INT8 – post-training quantized model
```

| File                  | Precision | Description                                                     |
|-----------------------|-----------|-----------------------------------------------------------------|
| `exported_model.xml`  | FP32      | Full-precision model exported directly from the PyTorch weights |
| `optimized_model.xml` | INT8      | Post-training quantized model optimized using the COCO dataset  |

Both files can be used directly with the OpenVINO inference engine. The INT8 model (`optimized_model.xml`) offers faster inference and a reduced memory footprint, while the FP32 model (`exported_model.xml`) retains full numerical precision.
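Note that an OpenVINO IR model is a pair of files: the `.xml` topology plus a companion `.bin` weights file with the same base name, and both must travel together when you copy a model into the pipeline. A small check, assuming the output layout above (the `check_ir_pair` helper is illustrative):

```shell
# Verify an IR .xml file has its companion .bin weights file.
check_ir_pair() {
  xml="$1"
  bin="${xml%.xml}.bin"
  if [ -f "$xml" ] && [ -f "$bin" ]; then
    echo "complete IR pair: $xml"
  else
    echo "incomplete IR pair: $xml"
  fi
}

check_ir_pair output/otx-workspace/exported_model.xml
check_ir_pair output/otx-workspace/optimized_model.xml
```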

![Generated Model Output](../_assets/generated_model.png)

*Note: Image is for illustration purposes only.*

---

## Troubleshooting

| Issue                   | Resolution                                                                  |
|-------------------------|-----------------------------------------------------------------------------|
| `uv: command not found` | Re-run `source $HOME/.local/bin/env` or open a new terminal session         |
| Rust compilation errors | Ensure `source "$HOME/.cargo/env"` was run after the Rust installation      |
| Dataset not found       | Verify the COCO dataset was extracted and the `annotations/` folder exists  |
| Incorrect model output  | Confirm `--model_name` matches the architecture used during Geti training   |

manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/index.md

Lines changed: 1 addition & 0 deletions
@@ -84,6 +84,7 @@ get-started
 how-to-guides
 api-reference
 troubleshooting
+export-and-optimize-geti-model
 release-notes

 :::

manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/troubleshooting.md

Lines changed: 6 additions & 0 deletions
@@ -97,6 +97,12 @@ privileged_access_required: true

 To perform inferencing on an NPU device (for platforms with NPU accelerators such as Intel Core Ultra processors), ensure you have completed the required prerequisites. Refer to the relevant [DL Streamer instructions](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer/dev_guide/advanced_install/advanced_install_guide_prerequisites.html#optional-prerequisite-2-install-intel-npu-drivers) to install Intel NPU drivers.

+## NPU Inference Failures with Geti-Trained Models
+
+If you experience errors or failures when running an NPU workload with a model trained in Intel Geti, the cause may be **Non-Maximum Suppression (NMS)** embedded within the model graph. The NPU does not support dynamic shapes, and NMS operations with dynamic output shapes are incompatible with NPU execution.
+
+**Resolution**: Follow the [Export and Optimize Geti Model](./how-to-guides/export-and-optimize-geti-model.md) guide to generate a model with NMS removed from the model graph. NMS will then be handled by DL Streamer.
+
 ## Unable to parse JSON payload due to missing `jq` package

 While running the `sample_start.sh` script, you may encounter
