The model installer provides AI models for SceneScape by:
- downloading the configured set of models from OpenVINO Model Zoo
- downloading/generating necessary configuration files to integrate them with SceneScape services
The models and configuration files are downloaded into a models volume that is attached to SceneScape services for both Docker and Kubernetes deployments.
The model installer downloads the supported model set defined in `install-omz-models` (`_DEFAULT_MODELS`) and can be configured with the following parameters:
| Parameter | Allowed Values | Format | Description |
|---|---|---|---|
| `precisions` | FP32, FP16, INT8 | Comma-separated list | Model precision formats to download. Multiple precisions can be specified for the same model (e.g., FP16,FP32). The first one listed is preferred when generating `model-config.json`. |
| `model_proc` | true, false | Single value | When enabled, attempts to download model-proc JSON files for each supported model and precision. |
For Kubernetes deployments, refer to the `initModels` section in the Helm chart values; for example, pass `--set initModels.modelPrecisions=FP16,FP32 --set initModels.modelProc=true` when installing the Helm chart.
For Docker deployments, use the `PRECISIONS` environment variable when building, e.g. `make install-models` or `make install-models PRECISIONS="FP16,FP32"`.
The installed models volume has the following layout:

```text
models/
├── intel/
│   ├── model-name-1/
│   │   ├── FP16/
│   │   │   ├── model-name-1.xml   (OpenVINO model topology)
│   │   │   ├── model-name-1.bin   (OpenVINO model weights)
│   │   │   └── model-name-1.json  (model-proc file; required only for selected models)
│   │   └── FP32/
│   │       └── ...
│   └── model-name-2/
│       └── ...
├── public/
│   └── model-name-3/
│       └── ...
└── model_configs/
    └── model_config.json  (auto-generated default model configuration file)
```
For detailed information about the file format and its usage, refer to the Model Configuration File Format documentation.
The config generator automatically assigns metadata policies and element types based on model names:
| Model Pattern | Metadata Policy | Type | Description |
|---|---|---|---|
| detection, detector, detect | detectionPolicy | detect | Object detection models |
| text + detection | ocrPolicy | detect | Text detection models |
| reidentification, reid | reidPolicy | inference | Person/object re-identification |
| recognition, attributes, classification | classificationPolicy | classify | Classification and attribute recognition |
| text + recognition | ocrPolicy | classify | Text recognition models |
| pose | detection3DPolicy | inference | Human pose estimation |
The `generate_model_config.py` file includes a predefined mapping of shorter, more convenient model names, defined in the `_MODEL_NAME_MAP` variable.
If a model name exists in this mapping, the shortened name is used as the key in the configuration; otherwise, the default behavior (replacing hyphens with underscores) applies.
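The key-derivation rule can be sketched as follows. The map entry shown here is illustrative only; the real aliases live in `_MODEL_NAME_MAP` inside `generate_model_config.py`:

```python
# Sketch of the config-key derivation described above: use the shortened
# alias when the model name is in the map, otherwise replace hyphens
# with underscores. The map contents below are illustrative, not the
# real _MODEL_NAME_MAP.

_MODEL_NAME_MAP = {
    "person-detection-retail-0013": "retail",  # illustrative alias
}

def config_key(model_name: str) -> str:
    return _MODEL_NAME_MAP.get(model_name, model_name.replace("-", "_"))

print(config_key("person-detection-retail-0013"))  # alias hit
print(config_key("text-detection-0004"))           # fallback: underscores
```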