Conventions extracted from existing DLStreamer Python sample applications.
Every Python file begins with the Intel copyright and MIT license:

```python
# ==============================================================================
# Copyright (C) 2026 Intel Corporation
#
# SPDX-License-Identifier: MIT
# ==============================================================================
```

GObject Introspection imports follow this exact pattern (order matters):
```python
import gi

gi.require_version("Gst", "1.0")
gi.require_version("GstAnalytics", "1.0")  # only if reading analytics metadata
gi.require_version("GstBase", "1.0")       # only in custom elements
gi.require_version("GstPbutils", "1.0")    # only if using Discoverer

from gi.repository import GLib, Gst, GstAnalytics  # pylint: disable=no-name-in-module, wrong-import-position
```

The `gi.require_version()` calls MUST appear before any `from gi.repository import`.
Call `Gst.init(None)` exactly once, before creating any pipeline or element.
Simple apps (1–2 args): use `sys.argv` directly.

```python
if len(sys.argv) != 3:
    sys.stderr.write(f"usage: {sys.argv[0]} <VIDEO_FILE> <MODEL_FILE>\n")
    sys.exit(1)
```

Complex apps (3+ args): use `argparse`.
```python
def parse_args():
    parser = argparse.ArgumentParser(description="DLStreamer Sample")
    parser.add_argument("--video-path", help="Path to local video")
    parser.add_argument("--video-url", help="URL to download video")
    parser.add_argument("--device", default="GPU")
    return parser.parse_args()
```

Custom Python elements follow these conventions:

- File goes in `plugins/python/<element_name>.py`
- Class name: PascalCase (e.g., `FrameSelection`)
- Element factory name: lowercase with `_py` suffix (e.g., `gvaframeselection_py`)
- Must end with `GObject.type_register(ClassName)` and `__gstelementfactory__ = (...)`
- Must call `Gst.init_python()` after imports
- Properties use the `@GObject.Property` decorator
- Transform elements subclass `GstBase.BaseTransform` and implement `do_transform_ip`
- Bin/Sink elements subclass `Gst.Bin` and use `Gst.GhostPad`
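The class-name/factory-name pairing can be sketched as a tiny helper; `factory_name` and the `gva` prefix default are illustrative (the actual samples hard-code both names):

```python
def factory_name(class_name: str, prefix: str = "gva") -> str:
    """Derive the lowercase, _py-suffixed factory name from a PascalCase class name."""
    return prefix + class_name.lower() + "_py"
```

For example, `factory_name("FrameSelection")` yields `"gvaframeselection_py"`.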
The main app must add the plugins directory to `GST_PLUGIN_PATH`, disable the forked
plugin scanner, and verify that the Python plugin loader is available:
```python
plugins_dir = str(Path(__file__).resolve().parent / "plugins")
if plugins_dir not in os.environ.get("GST_PLUGIN_PATH", ""):
    os.environ["GST_PLUGIN_PATH"] = f"{os.environ.get('GST_PLUGIN_PATH', '')}:{plugins_dir}"

# Prevent GStreamer from forking gst-plugin-scanner (a C subprocess that cannot
# resolve Python symbols). Scanning in-process lets libgstpython.so find the
# Python runtime that is already loaded.
os.environ.setdefault("GST_REGISTRY_FORK", "no")

Gst.init(None)
reg = Gst.Registry.get()
if not reg.find_plugin("python"):
    raise RuntimeError(
        "GStreamer 'python' plugin not found. "
        "Ensure GST_PLUGIN_PATH includes the path to libgstpython.so. "
        "If the error persists: rm ~/.cache/gstreamer-1.0/registry.x86_64.bin"
    )
```

Error handling conventions:

- Pipeline parse errors: catch `GLib.Error`
- Model export failures: check subprocess return codes
- Missing files: validate paths before pipeline construction
- Pipeline runtime errors: handle in the event loop via `Gst.MessageType.ERROR`
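The "validate paths before pipeline construction" rule can be sketched as a small pure-Python helper; the name `require_files`, the exit code, and the message format are illustrative, not taken from the samples:

```python
import sys
from pathlib import Path

def require_files(*paths):
    """Fail fast with a readable message instead of a mid-pipeline error."""
    missing = [str(p) for p in paths if not Path(p).exists()]
    if missing:
        sys.stderr.write("Missing files: " + ", ".join(missing) + "\n")
        sys.exit(1)
```

Call it right after argument parsing, e.g. `require_files(args.video_path)`.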
The standard pattern for reading detection metadata from a buffer:
```python
rmeta = GstAnalytics.buffer_get_analytics_relation_meta(buffer)
if rmeta:
    for mtd in rmeta:
        if isinstance(mtd, GstAnalytics.ODMtd):
            label = GLib.quark_to_string(mtd.get_obj_type())
            _, x, y, w, h, confidence = mtd.get_location()
            _, conf = mtd.get_confidence_lvl()
```

Writing overlay metadata:

```python
rmeta.add_od_mtd(GLib.quark_from_string("label text"), x, y, w, h, confidence)
```

Reading classification metadata (from `gvagenai`):
```python
for mtd in rmeta:
    if isinstance(mtd, GstAnalytics.ClsMtd):
        quark = mtd.get_quark(0)
        level = mtd.get_level(0)
```

Reading tracking metadata:
```python
for mtd in rmeta:
    if isinstance(mtd, GstAnalytics.TrackingMtd):
        success, tracking_id, _, _, _ = mtd.get_info()
```

In GStreamer ≥ 1.26, `buffer.copy()` returns a shallow copy whose data pointer is read-only.
Use `buffer.copy_deep()` when you need to modify buffer timestamps or data:
```python
# WRONG — raises NotWritableMiniObject in GStreamer ≥ 1.26
rec_buffer = buffer.copy()
rec_buffer.pts = new_pts  # ❌ immutable

# CORRECT — deep copy creates a fully writable buffer
rec_buffer = buffer.copy_deep()
rec_buffer.pts = new_pts  # ✓ writable
```

Check for GPU/NPU availability before constructing the pipeline. Use the fallback chain NPU → GPU → CPU so the app works on any Intel system:
```python
def check_device(requested, label):
    """Check device availability with fallback chain: NPU → GPU → CPU."""
    if requested == "NPU" and not os.path.exists("/dev/accel/accel0"):
        print(f"Warning: NPU not available for {label}, falling back to GPU")
        requested = "GPU"
    if requested == "GPU" and not os.path.exists("/dev/dri/renderD128"):
        print(f"Warning: GPU not available for {label}, falling back to CPU")
        requested = "CPU"
    return requested
```
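The same fallback logic can also be expressed as a data-driven chain; a minimal sketch, reusing the device-node paths from `check_device` (the names `DEVICE_NODES`, `FALLBACK`, and `resolve_device` are illustrative):

```python
import os

# Device node that must exist for each accelerator (same checks as check_device).
DEVICE_NODES = {"NPU": "/dev/accel/accel0", "GPU": "/dev/dri/renderD128"}
FALLBACK = {"NPU": "GPU", "GPU": "CPU"}  # CPU is the terminal fallback

def resolve_device(requested):
    """Walk the NPU → GPU → CPU chain until an available device is found."""
    while requested in DEVICE_NODES and not os.path.exists(DEVICE_NODES[requested]):
        requested = FALLBACK[requested]
    return requested
```

Keeping the chain in data makes it easy to extend when a new accelerator tier is added.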