Releases: open-edge-platform/edge-ai-libraries
Edge AI Libraries 2025.2
As the third official release of the platform, 2025.2 focuses on expanding and solidifying its range of applications. It enables full validation and support for Intel® Core™ Ultra Series 2 processors, using the NPU, iGPU, and CPU to accelerate vision AI model training and fine-tuning. It also adds broad camera support for multi-camera media streams at higher frame rates, plus new sample applications that use transformer-based AI for speech recognition and real-time scene intelligence.
Both the Robotics AI and Education AI suites are now fully available. The former offers a range of design ideas and solutions for creating various AI-driven machines, while the latter targets the Smart Classroom use case with its first sample application, improving both the teaching and the learning experience.
For full information on Open Edge Platform Release 2025.2, see the What's New article.
Edge AI Libraries v1.2.0
Release Overview
Edge AI Libraries v1.2.0 includes libraries, microservices, and tools for edge application development, with sample applications across multiple industries.
Key Components
DL Streamer
Adds support for custom post-processing, latency modes, visual embeddings, INT8 quantization, Windows 11, and the Edge Microvisor Toolkit. Includes new model support (e.g., Clip-ViT, miniCPM2.6, YOLOv8 license plate detector) and new GStreamer elements (e.g., gstgenai, gvarealsense). Further details can be found in the DL Streamer release notes.
Microservices
- DL Streamer Pipeline Server
- Model Registry
- Data Ingestion Service (supports PDF, DOCX, TXT)
- Time Series Analytics
DL Streamer Pipeline Server (v3.1.0)
- New Features:
- Support for Ubuntu 22.04 and Ubuntu 24.04 based Docker images.
- Separate optimized and extended runtime Docker images.
- InfluxDB publisher for storing metadata.
- OPCUA configuration now available via REST API.
- WebRTC bitrate is configurable.
- ROS2 publisher for sending metadata (with or without encoded frames).
- VA-API pipelines enabled for RTSP and WebRTC streaming.
- Real-time log monitoring via OpenTelemetry.
- Fixes:
- Removed confidential info and deprecated tools (e.g., unused model downloader, gRPC interface).
- Fixed synchronization issues with appsink and publisher configurations.
- WebRTC GPU inferencing now gracefully falls back to CPU if vah264enc is missing.
- Updates:
- DL Streamer upgraded to v2025.1.2.
- Model Registry interface now uses environment variables instead of config.json.
- Documentation improvements: cross-stream batching, latency tracing, and pipeline management tutorials.
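As a hedged illustration of how these REST-configurable features could be exercised, the sketch below builds a launch request for a pipeline instance. The pipeline name, version, endpoint port, and exact payload schema (including the WebRTC `bitrate` field) are assumptions; check them against the Pipeline Server documentation for your deployment.

```python
import json
from urllib import request

# Hypothetical server address and pipeline name/version -- adjust to your setup.
PIPELINE_SERVER = "http://localhost:8080"
pipeline, version = "object_detection", "person_vehicle_bike"

# Request body: an RTSP source and a WebRTC frame destination with an explicit
# bitrate (kbps), reflecting the newly configurable WebRTC bitrate. Field names
# are illustrative and should be verified against the REST API reference.
payload = {
    "source": {"uri": "rtsp://camera.local:554/stream1", "type": "uri"},
    "destination": {
        "frame": {"type": "webrtc", "peer-id": "demo", "bitrate": 2048}
    },
}

def start_pipeline() -> str:
    """POST the payload to start a pipeline instance (requires a running server)."""
    req = request.Request(
        f"{PIPELINE_SERVER}/pipelines/{pipeline}/{version}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        # The server responds with the id of the launched pipeline instance.
        return resp.read().decode()

print(json.dumps(payload, indent=2))
```

The same REST surface can then be used to query instance status or stop the pipeline; only `start_pipeline` needs a live server, so the payload itself can be validated offline.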
Time Series Analytics Microservice (v1.0.0)
- Deployment Options:
- Docker Compose on a single node.
- Helm on a single-node Kubernetes cluster.
- Features:
- Bring Your Own Data & UDFs: Supports Python-based analytics logic using Kapacitor UDF standards.
- Seamless Integration: Automatically stores processed results in InfluxDB.
- Model Registry Support: Dynamically fetches UDF scripts, ML models, and TICKscripts.
- Versatile Use Cases: Suitable for anomaly detection, alerting, and advanced time series analytics in industrial, IoT, and enterprise environments.
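The "Bring Your Own UDFs" feature accepts arbitrary Python analytics logic. As a rough sketch, independent of Kapacitor's actual UDF agent protocol (which a real deployment would use to exchange data points with the service), here is the kind of rolling-statistics anomaly detector such a UDF might wrap:

```python
from collections import deque

class ZScoreDetector:
    """Rolling z-score anomaly detector: a simplified example of the Python
    analytics logic the microservice can host as a user-defined function.
    Real UDFs would receive points via Kapacitor's agent protocol instead
    of plain method calls."""

    def __init__(self, window: int = 30, threshold: float = 3.5):
        self.values = deque(maxlen=window)  # rolling window of recent readings
        self.threshold = threshold

    def update(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to the rolling window."""
        anomalous = False
        if len(self.values) >= 2:
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = var ** 0.5
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.values.append(value)
        return anomalous

# Steady sensor readings around 20.0, then a sudden spike to 120.0.
detector = ZScoreDetector(window=10, threshold=3.5)
readings = [20.0, 20.1, 19.9, 20.0, 20.2, 19.8, 20.0, 20.1, 19.9, 20.0, 120.0]
flags = [detector.update(r) for r in readings]
print(flags)  # only the final spike is flagged
```

In the microservice, the flagged result would then be written to InfluxDB automatically per the "Seamless Integration" feature above.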
Geti Platform – Release Notes Summary
v2.12.1
- Fixed handling of projects registered in v2.11.0 after upgrading to v2.12.0.
v2.12.0
- Major Features:
- Gold Support for Model Fine-Tuning on Intel® Arc™ GPUs (recommended: 16GB VRAM).
- Streamlined Training Interface with advanced configuration options.
- Configurable Data Augmentation for classification tasks.
- Configurable Model Input Size for fine-tuning.
- Improved Media Upload Reliability with corruption detection and repair.
- Key Point Detection Dataset Support in Datumaro format.
- REST API Changes:
- New endpoints for project and training configuration.
- Deprecated legacy configuration endpoints.
- Updated supported algorithms response.
- New query parameter for model export control.
- Removed deprecated endpoints for model downloads and sample scripts.
- Model Deprecations:
- Detection: ATSS-ResNeXt101, RTDetr-R18/R101, RTMDet-Tiny
- Rotated Detection: MaskRCNN-ResNet50-V1
- Classification: EfficientNet-V2-L, MobileNet-V3-small
- Instance Segmentation: MaskRCNN-ResNet50-V1
- Semantic Segmentation: LiteHRNet-X
Sample Applications
- Chat Q&A Core: Foundational RAG pipeline
- Chat Q&A Modular: Microservices-based RAG pipeline
Tools
- Visual Pipeline and Platform Evaluation Tool: Adds live output and new pre-trained models
- SceneScape: Enhances spatial intelligence development with volumetric ROIs, improved tracking, and DL Streamer integration
- Geti™: Adds support for training computer vision models on Intel® Arc™ Graphics (B580 and A770), as well as fine-tuning of new transformer architectures and a keypoint-detection model for pose estimation. Users can now also install Geti in Windows Subsystem for Linux (WSL) environments. Detailed release notes are available here.
Known Issues and Limitations
- DL Streamer: https://github.com/open-edge-platform/edge-ai-libraries/blob/main/libraries/dl-streamer/RELEASE_NOTES.md#known-issues
- SceneScape: https://github.com/open-edge-platform/scenescape/blob/main/docs/user-guide/release-notes.md#known-issues
- Visual Pipeline Tool: Metrics displayed only for the last GPU in multi-GPU systems
Breaking Changes
- None reported for this release.
Edge AI Libraries v1.0.0
Edge AI Libraries v1.0.0 (Initial Release)
Release Overview
Edge AI Libraries v1.0.0 hosts a collection of libraries, microservices, and tools for edge application development. The project also includes sample applications that showcase generic AI use cases.
Key Components
| Component | Category | Get Started | Developers Docs |
|---|---|---|---|
| Deep Learning Streamer | Library | Link | API Reference |
| Deep Learning Streamer Pipeline Server | Microservice | Link | API Reference |
| Document Ingestion | Microservice | Link | API Reference |
| Model Registry | Microservice | Link | API Reference |
| Object Store | Microservice | Link | Usage |
| Visual Pipeline and Performance Evaluation Tool | Tool | Link | Build instructions |
| Chat Question and Answer | Sample Application | Link | Build instructions |
| Chat Question and Answer Core | Sample Application | Link | Build instructions |
Highlighted Features
Libraries include:
- Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework: an open-source streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines in the cloud or at the edge.
Microservices include:
- Deep Learning Streamer Pipeline Server: a containerized microservice, built on top of GStreamer, for developing and deploying video analytics pipelines.
- Model Registry: provides capabilities to manage the lifecycle of AI models.
- Object Store Microservice: a MinIO-based object store for building generative AI pipelines.
- Data Ingestion Microservice: loads and parses popular document types (PDF, DOCX, TXT) and creates embeddings from them.
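To illustrate the kind of preprocessing an ingestion step performs before embedding, here is a minimal, hypothetical chunking helper. The real microservice's parsers, chunk sizes, and embedding models are part of its own configuration; `chunk_text` is not part of its API.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split extracted document text into overlapping fixed-size chunks,
    the typical step before each chunk is embedded and stored for RAG.
    The overlap preserves context that would otherwise be cut at chunk
    boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start : start + chunk_size])
        start += chunk_size - overlap  # advance by the non-overlapping stride
    return chunks

# A 500-character document split into 200-char chunks with 50-char overlap.
doc = "x" * 500
pieces = chunk_text(doc, chunk_size=200, overlap=50)
print(len(pieces))
```

Each resulting chunk would then be passed to an embedding model and the vectors stored alongside the source metadata for retrieval.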
Sample applications include:
- Chat Question-and-Answer Core: a foundational Retrieval-Augmented Generation (RAG) pipeline that lets users ask questions and receive answers, including answers grounded in their own private data corpus.
- Chat Question-and-Answer: a modular, microservices-based implementation of the same pipeline, with each constituent element of the RAG pipeline bundled as an independent microservice.
Tools include:
- Visual Pipeline and Platform Evaluation Tool: The Visual Pipeline and Platform Evaluation Tool simplifies hardware selection for AI workloads by allowing you to configure workload parameters, benchmark performance, and analyze key metrics such as throughput, CPU, and GPU usage. With its intuitive interface, the tool provides actionable insights to help you optimize hardware selection and performance.
Known Issues
None
Breaking Changes
None — this is the initial release.