Samples are simple applications that demonstrate how to use Intel® DL Streamer. The samples are available in the /opt/intel/dlstreamer/samples directory.
Samples are separated into several categories:
- gst_launch command-line samples (samples construct a GStreamer pipeline via the gst-launch-1.0 command-line utility)
- Face Detection And Classification Sample - constructs an object detection and classification pipeline with the gvadetect and gvaclassify elements to detect faces and estimate age, gender, emotions, and landmark points
- Audio Event Detection Sample - constructs an audio event detection pipeline with the gvaaudiodetect element and uses the gvametaconvert and gvametapublish elements to convert audio event metadata with inference results into JSON format and print it to standard output
- Audio Transcription Sample - performs audio transcription using an OpenVINO™ GenAI model (Whisper) with the gvaaudiotranscribe element
- Vehicle and Pedestrian Tracking Sample - demonstrates object tracking via gvatrack element
- Human Pose Estimation Sample - demonstrates human pose estimation with full-frame inference via gvaclassify element
- Metadata Publishing Sample - demonstrates how gvametaconvert and gvametapublish elements are used for converting metadata with inference results into JSON format and publishing to file or Kafka/MQTT message bus
- gvapython face_detection_and_classification Sample - demonstrates pipeline customization with gvapython element and application provided Python script for inference post-processing
- gvapython save frames with ROI Sample - demonstrates gvapython element for saving video frames with detected objects to disk
- Action Recognition Sample - demonstrates action recognition via video_inference bin element
- Instance Segmentation Sample - demonstrates instance segmentation via the object_detect and object_classify bin elements
- Detection with Yolo - demonstrates how to use publicly available Yolo models for object detection and classification
- Deployment of Geti™ models - demonstrates how to deploy models trained with Intel® Geti™ Platform for object detection, anomaly detection and classification tasks
- Multi-camera deployments - demonstrates how to handle video streams from multiple cameras with one instance of DL Streamer application
- gvaattachroi - demonstrates how to use gvaattachroi to define the regions on which the inference should be performed
- FPS Throttle - demonstrates how to use gvafpsthrottle element to throttle framerate independent of sink synchronization and without frame duplication or dropping
- LVM embeddings - demonstrates generation of image embeddings using the CLIP large vision model
- License Plate Recognition Sample - demonstrates the use of the Yolo detector together with the optical character recognition model
- Vision Language Model Sample - demonstrates how to use the gvagenai element with MiniCPM-V for video summarization
- Radar Signal Process Sample - demonstrates how to use the g3dradarprocess element for millimeter-wave radar signal processing with point cloud detection, clustering, and tracking
- LiDAR Parse Sample - demonstrates a LiDAR parsing pipeline with the g3dlidarparse element
- RealSense Camera Sample - demonstrates how to capture a video stream from a 3D RealSense™ Depth Camera using DL Streamer's gvarealsense element
- Custom Post-Processing Library Sample - Detection - demonstrates how to create custom post-processing library for YOLOv11 tensor outputs conversion to detection metadata using GStreamer Analytics framework
- Custom Post-Processing Library Sample - Classification - demonstrates how to create custom post-processing library for emotion classification model outputs conversion to classification metadata using GStreamer Analytics framework
- C++ samples
- Draw Face Attributes C++ Sample - constructs a pipeline and sets a C callback to access frame metadata and visualize inference results
- Python samples
- Hello DL Streamer Sample - constructs an object detection pipeline, adds logic to analyze metadata and count objects, and visualizes the results along with an object count summary in a local window
- Draw Face Attributes Python Sample - constructs a pipeline and sets a Python callback to access frame metadata and visualize inference results
- Open Close Valve Sample - constructs a pipeline with two sinks; one of them contains a GStreamer valve element that is opened/closed by a callback based on the object detection result.
- ONVIF Camera Discovery Sample - demonstrates automatic discovery of ONVIF-compatible cameras on the network and launches corresponding DL Streamer pipelines for video analysis.
- Benchmark
- Benchmark Sample - measures overall performance of single-channel or multi-channel video analytics pipelines
- Concurrent use of DL Streamer and DeepStream
- Concurrent use Sample - runs pipelines on DL Streamer and/or DeepStream
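To give a flavor of what the gst_launch category assembles, the sketch below builds a minimal detection pipeline command. The model path, input file, and element properties are illustrative assumptions, not taken from a specific sample; the command is echoed rather than executed so the sketch can be inspected without GStreamer or a model installed.

```shell
#!/bin/sh
# Illustrative sketch of a gst_launch-style detection pipeline.
# MODEL and INPUT are placeholder assumptions; the real samples
# resolve model paths via their setup scripts.
INPUT=${1:-input.mp4}
MODEL=/path/to/detection-model.xml

# Assemble the gst-launch-1.0 command line as a string and print it.
CMD="gst-launch-1.0 filesrc location=${INPUT} ! decodebin3 ! \
gvadetect model=${MODEL} device=CPU ! gvawatermark ! \
videoconvert ! autovideosink sync=false"
echo "$CMD"
```

Replacing `echo` with `eval` would run the pipeline, assuming GStreamer and DL Streamer are installed and the model path is valid.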
Samples with C/C++ code provide a build_and_run.sh shell script that builds the application via CMake before execution.
Other samples (without C/C++ code) provide a .sh script that constructs and executes a gst-launch or Python command line.
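A build_and_run.sh script for a C/C++ sample typically configures with CMake, builds, and then runs the produced binary. The sketch below shows that sequence in dry-run form: each step is printed instead of executed, so it runs without CMake or the sample sources present. The directory layout and binary name are assumptions for illustration.

```shell
#!/bin/sh
# Dry-run sketch of a typical build_and_run.sh: configure, build, run.
SAMPLE_DIR=$(pwd)
BUILD_DIR=${SAMPLE_DIR}/build

# Print each step instead of executing it.
run() { echo "+ $*"; }

run mkdir -p "$BUILD_DIR"
run cmake -S "$SAMPLE_DIR" -B "$BUILD_DIR"
run cmake --build "$BUILD_DIR"
run "$BUILD_DIR/sample_app" "$@"   # binary name is a placeholder
```

Dropping the `run` wrapper turns the sketch into an actual build script, assuming CMake and a compiler are available.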
DL Streamer samples use pre-trained models from the OpenVINO™ Toolkit Open Model Zoo.
Before running the samples, run the download_omz_models.sh script once to download all models required by the samples. The script is located in the samples top-level folder.
NOTE: To install all necessary requirements for the download_omz_models.sh script, run these commands:
python3 -m pip install --upgrade pip
python3 -m pip install openvino-dev[onnx]
NOTE: To install all available frameworks, run this command:
python3 -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
The first command-line parameter in DL Streamer samples specifies the input video and supports:
- local video file
- web camera device (e.g. /dev/video0)
- RTSP camera (URL starting with rtsp://) or other streaming source (e.g. a URL starting with http://)
If the command-line parameter is not specified, most samples stream an example video from a predefined HTTPS link by default and therefore require an internet connection.
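Sample scripts typically pick a GStreamer source element based on the shape of the input argument. The sketch below shows one way that dispatch can look; the element choices (v4l2src, urisourcebin, filesrc) are typical GStreamer elements for these input kinds, assumed rather than copied from a particular sample.

```shell
#!/bin/sh
# Choose a GStreamer source element from the input argument.
INPUT=${1:-https://example.com/video.mp4}

case "$INPUT" in
  /dev/video*)        SOURCE="v4l2src device=${INPUT}" ;;    # web camera device
  rtsp://*)           SOURCE="urisourcebin uri=${INPUT}" ;;  # RTSP camera
  http://*|https://*) SOURCE="urisourcebin uri=${INPUT}" ;;  # other streaming source
  *)                  SOURCE="filesrc location=${INPUT}" ;;  # local video file
esac
echo "$SOURCE"
```

The resulting `$SOURCE` fragment would then be prepended to the rest of the gst-launch pipeline description.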
NOTE: Most samples set the property sync=false on the video sink element to disable real-time synchronization and run the pipeline as fast as possible. Change it to sync=true to run the pipeline at real-time speed.
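The effect of the sink's sync property can be sketched as follows; the pipeline string is illustrative and is echoed rather than executed, so the sketch runs without GStreamer installed.

```shell
#!/bin/sh
# SYNC=false renders frames as fast as the pipeline produces them;
# SYNC=true paces rendering to the stream's timestamps (real-time speed).
SYNC=${SYNC:-false}
PIPELINE="gst-launch-1.0 filesrc location=input.mp4 ! decodebin3 ! \
videoconvert ! autovideosink sync=${SYNC}"
echo "$PIPELINE"
```

Run with `SYNC=true ./script.sh` to produce the real-time variant of the pipeline.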
In order to run samples on a remote machine over SSH with X Forwarding, first force usage of ximagesink as the video sink:
source ./force_ximagesink.sh