The Loss Prevention Pipeline System is an open-source reference implementation for building and deploying video analytics pipelines for retail loss prevention use cases. It leverages Intel® hardware and software, GStreamer, and OpenVINO™ to enable scalable, real-time object detection and classification at the edge.
- Ubuntu 24.04 or newer (Linux recommended)
- Docker
- Make (`sudo apt install make`)
- Intel hardware (CPU, iGPU, dGPU, NPU)
- Intel drivers (see Intel GPU drivers)
- Sufficient disk space for models, videos, and results
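A quick, hedged way to sanity-check these prerequisites on the host (adjust for your distribution; the device paths below assume standard Intel GPU drivers):

```bash
# Verify the basic tooling is installed
docker --version
make --version

# Intel GPU render nodes (e.g. renderD128) should appear once the GPU drivers are installed
ls -l /dev/dri/
```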
Clone the repository with the following command:

```bash
git clone -b <release-or-tag> --single-branch https://github.com/intel-retail/loss-prevention
```

Replace `<release-or-tag>` with the version you want to clone (for example, v4.0.0):

```bash
git clone -b v4.0.0 --single-branch https://github.com/intel-retail/loss-prevention
```
By default, the application runs by pulling the pre-built images. If you want to build the images locally and then run the application, set the flag `REGISTRY=false`.

Usage: `make <command> REGISTRY=false` (applicable to all commands, such as `benchmark` and `benchmark-stream-density`).

Example: `make run-lp REGISTRY=false` (on the first run, this will take some time to download videos, models, and Docker images and to build the images locally).
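For instance, the flag can be combined with the benchmarking targets mentioned above; a brief sketch using only targets named in this guide:

```bash
# Build the images locally before running or benchmarking
make run-lp REGISTRY=false
make benchmark REGISTRY=false
make benchmark-stream-density REGISTRY=false
```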
1.1 Download the models using download_models/downloadModels.sh:

```bash
make download-models
```

1.2 Update the GitHub submodules:

```bash
make update-submodules
```

1.3 Download the sample videos used by the performance tools:

```bash
make download-sample-videos
```

1.4 Run the LP application:

```bash
make run-render-mode
```
- Run the Loss Prevention application with a single command:

  ```bash
  make run-lp
  ```
- Running the Loss Prevention application with ENV variables:

  ```bash
  CAMERA_STREAM=camera_to_workload_full.json WORKLOAD_DIST=workload_to_pipeline_cpu.json make run-lp
  ```

  - `CAMERA_STREAM=camera_to_workload_full.json`: runs all 6 workloads.
  - `WORKLOAD_DIST=workload_to_pipeline_cpu.json`: all workloads run on the CPU.
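  The same variables can point at any pair of configuration files in `configs/`; a hedged sketch assuming the default `camera_to_workload.json` mapping and the CPU-only pipeline mapping shown above:

  ```bash
  # Run the default camera-to-workload mapping with CPU-only pipelines
  CAMERA_STREAM=camera_to_workload.json WORKLOAD_DIST=workload_to_pipeline_cpu.json make run-lp
  ```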
- Running the Loss Prevention application with `REGISTRY=false` (build the images locally):

  Follow these steps:

  ```bash
  make download-models REGISTRY=false
  make update-submodules REGISTRY=false
  make download-sample-videos
  make run-render-mode REGISTRY=false
  ```

  The series of commands above can also be executed with a single command:

  ```bash
  make run-lp REGISTRY=false
  ```
For a comprehensive and advanced guide, refer to the Loss Prevention Documentation Guide.
To stop the Loss Prevention application:

```bash
make down-lp
```

By default, the configuration is set to use the CPU. If you want to benchmark the application on the GPU or NPU, update the `device` value in `workload_to_pipeline.json`.
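As a hedged sketch (not a documented workflow), one way to switch every pipeline stage in that file from CPU to GPU before benchmarking, assuming the `device` fields follow the format shown in the configuration examples below:

```bash
# Replace every CPU device assignment with GPU (use "NPU" instead of "GPU" to target the NPU)
sed -i 's/"device": "CPU"/"device": "GPU"/g' configs/workload_to_pipeline.json
```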
```bash
make benchmark
make consolidate-metrics
```
```bash
cat benchmark/metrics.csv
```

Since the GStreamer pipeline is generated dynamically from the provided configuration (the camera_to_workload and workload_to_pipeline JSON files), the generated `src/pipelines/pipeline.sh` file is updated every time the user runs `make run-lp` or `make benchmark`. This ensures that the pipeline always reflects the latest configuration changes.
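To see what was generated for the current configuration, the file can be inspected after a run, for example:

```bash
# Print the dynamically generated GStreamer pipeline script
cat src/pipelines/pipeline.sh
```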
- `make validate-all-configs` — Validate all configuration files
- `make clean-images` — Remove dangling Docker images
- `make clean-containers` — Remove stopped containers
- `make clean-all` — Remove all unused Docker resources
The application is highly configurable via JSON files in the configs/ directory:
- `camera_to_workload.json`: Maps each camera to one or more workloads. To add or remove a camera, edit the `lane_config.cameras` array in this file. Each camera entry can specify its video source, region of interest, and assigned workloads.

  Example:

  ```json
  {
    "lane_config": {
      "cameras": [
        {
          "camera_id": "cam1",
          "fileSrc": "sample-media/video1.mp4",
          "workloads": ["items_in_basket", "multi_product_identification"],
          "region_of_interest": {"x": 100, "y": 100, "x2": 800, "y2": 600}
        },
        ...
      ]
    }
  }
  ```
- `workload_to_pipeline.json`: Maps each workload name to a pipeline definition (a sequence of GStreamer elements and models). To add or update a workload, edit the `workload_pipeline_map` in this file.

  Example:

  ```json
  {
    "workload_pipeline_map": {
      "items_in_basket": [
        {"type": "gvadetect", "model": "yolo11n", "precision": "INT8", "device": "CPU"},
        {"type": "gvaclassify", "model": "efficientnet-v2-b0", "precision": "INT8", "device": "CPU"}
      ],
      ...
    }
  }
  ```
To try a new camera or workload:
- Edit `configs/camera_to_workload.json` to add your camera and assign workloads.
- Edit `configs/workload_to_pipeline.json` to define or update the pipeline for your workload.
- (Optional) Place your video files in the appropriate directory and update the `fileSrc` path.
- Re-run the pipeline as described above (see the combined sketch below).
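A minimal sketch of that workflow, assuming your video is placed under `sample-media/` alongside the bundled samples (the file name `my-camera.mp4` is hypothetical):

```bash
# Copy in a new video for the camera (hypothetical file name)
cp my-camera.mp4 sample-media/

# After editing configs/camera_to_workload.json and configs/workload_to_pipeline.json as described above,
# validate the configuration files and re-run the application
make validate-all-configs
make run-lp
```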
- `configs/` — Configuration files (camera/workload mapping, pipeline mapping)
- `docker/` — Dockerfiles for downloader and pipeline containers
- `docs/` — Documentation (HLD, LLD, system design)
- `download-scripts/` — Scripts for downloading models and videos
- `src/` — Main source code and pipeline runner scripts
- `Makefile` — Build automation and workflow commands