Go Bigger Faster: Performance Advances and KPIs Tied to Release 2025.2 #1368
stevenhoenisch started this conversation in General.
The latest release of Edge Manageability Framework demonstrates performance advances for edge AI workloads by publishing KPIs, including expanded support for performance assessments of reference applications like pallet defect detection, image-based video search, chat Q&A, and smart parking.
These performance indicators help you, as system builders and DevOps engineers, assess deployment scenarios on Intel processors so that you can take advantage of the latest hardware capabilities. In addition, the 2025.2 release delivers alpha support for Intel® Core™ Ultra Series 3 processors (formerly code-named Panther Lake), including their integrated GPU and NPU.
Other improvements can tune Edge Manageability Framework to your own performance needs and measurements. For the first time, the 2025.2 release lets you selectively omit layers of the stack to reduce resource requirements. If that isn't enough, the release also lets you customize the operating system's kernel command-line parameters.
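As a purely hypothetical illustration of what these two capabilities could look like in a deployment configuration (the keys and values below are invented for this sketch and are not the actual EMF configuration schema):

```yaml
# Hypothetical illustration only; these keys are NOT the real EMF schema.
orchestrator:
  layers:
    observability: disabled         # selectively omit a stack layer to reduce resource use
    cluster-orchestration: enabled
edgeNode:
  osProfile:
    kernelCmdline: "hugepagesz=1G hugepages=4 isolcpus=2-5"   # custom kernel command-line parameters
```

Consult the release documentation for the actual configuration format and supported options.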
These new capabilities and the performance metrics of Edge Manageability Framework help demonstrate how the platform helps solve problems in orchestrating edge infrastructure, applications, and workloads.
Problems Inherent in Edge Orchestration
One challenge is remote onboarding and provisioning of edge nodes at scale. Another challenge is cluster provisioning at scale.
For more information, see the presentation titled Edge Orchestration: Challenges and Solutions on our Edge AI Resources wiki page on GitHub.
Key Performance Indicators
Here's a quick selection of results from our KPI testing for the 2025.2 release of Edge Manageability Framework.
For the cloud-based Edge Orchestrator:
- Edge node onboarding and provisioning with Ubuntu was about 11 minutes on an Intel® Xeon® processor.
- Edge node onboarding and provisioning with Edge Microvisor Toolkit was about 8 minutes on an Intel Xeon processor and about 5 minutes on an Intel® Core™ processor.
- Edge cluster creation took about 1 minute and 30 seconds.
The on-premises numbers were, for the most part, faster still:
- Edge node onboarding and provisioning with Ubuntu was about 12 minutes on an Intel Xeon processor.
- Edge node onboarding and provisioning with Edge Microvisor Toolkit was 7 minutes on an Intel Xeon processor and about 4 minutes on an Intel Core processor.
- Edge cluster creation was the same on premises, at about 1 minute and 30 seconds.
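The onboarding-and-provisioning figures above can be lined up side by side in a few lines of Python. This is just a quick comparison sketch using the round numbers quoted in this post, not part of the KPI tooling:

```python
# Approximate onboarding-and-provisioning times quoted above,
# in minutes ("EMT" = Edge Microvisor Toolkit).
cloud   = {"Ubuntu on Xeon": 11, "EMT on Xeon": 8, "EMT on Core": 5}
on_prem = {"Ubuntu on Xeon": 12, "EMT on Xeon": 7, "EMT on Core": 4}

for profile, cloud_min in cloud.items():
    saved = cloud_min - on_prem[profile]   # positive: on-prem is faster
    print(f"{profile}: cloud {cloud_min} min, on-prem {on_prem[profile]} min, "
          f"on-prem delta {saved:+d} min")
```

As the comparison shows, the Edge Microvisor Toolkit profiles are about a minute faster on premises, while the Ubuntu profile is about a minute faster in the cloud.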
The Edge AI application deployment times were significantly faster than our KPI threshold of less than 10 minutes in examples using Ubuntu as the operating system.
Continuous Delivery for Consistent Quality
As Edge Manageability Framework evolves to support new hardware variants, OS profiles, secure deployment modes, and orchestrator capabilities, the need for consistent, automated validation expands. Ensuring that each code change behaves reliably across this expanding surface area requires a structured continuous delivery system for testing, and part of that testing needs to focus on exercising and analyzing performance.
To address this, we built a unified, multi‑stage continuous delivery pipeline that validates Edge Manageability Framework from pull request to a full hardware-backed system. The pipeline provides fast feedback, enforces quality gates, and tracks and reports KPIs for onboarding, provisioning, and other operations.
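At its simplest, a KPI quality gate of the kind such a pipeline might enforce is a threshold check over measured results. The sketch below is illustrative only; the KPI names and threshold values are hypothetical, not the EMF pipeline's actual configuration (the sub-10-minute application deployment threshold is the one cited above):

```python
# Hypothetical KPI quality gate; names and thresholds are illustrative.
KPI_THRESHOLDS_MIN = {
    "edge_node_onboarding": 15,   # minutes; illustrative threshold
    "cluster_creation": 5,        # minutes; illustrative threshold
    "app_deployment": 10,         # minutes; the <10 min KPI cited above
}

def gate(measured: dict) -> list:
    """Return the KPIs that exceed their threshold (empty list = gate passes)."""
    return [kpi for kpi, minutes in measured.items()
            if minutes > KPI_THRESHOLDS_MIN[kpi]]

results = {"edge_node_onboarding": 8, "cluster_creation": 1.5, "app_deployment": 4}
failures = gate(results)
print("gate passed" if not failures else f"gate failed: {failures}")
```

A real pipeline would source the measurements from test runs and fail the build when the returned list is non-empty.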
To get the full story on the continuous delivery pipeline and how it helps maintain quality, performance, and consistency, see Hardening EMF with Continuous Delivery: Our Multi‑Stage Pipeline.
Tools for Doing Some of Your Own Validation
You can use our Edge Workloads and Benchmarks to analyze the performance of video applications. Edge Workloads and Benchmarks are performance-optimized pipelines that use the GStreamer multimedia framework and the Deep Learning Streamer (DL Streamer) to validate media and edge AI video workloads. The pipelines measure end-to-end throughput in frames per second (fps), pipeline stream density in fps, package power in watts, and workload efficiency in fps per package watt.
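The workload-efficiency metric reduces to simple arithmetic: throughput divided by package power. A minimal sketch, with illustrative sample numbers rather than published results:

```python
# Workload efficiency as described above:
# efficiency = end-to-end throughput (fps) / package power (watts).
def workload_efficiency(throughput_fps: float, package_watts: float) -> float:
    if package_watts <= 0:
        raise ValueError("package power must be positive")
    return throughput_fps / package_watts

# Illustrative sample numbers, not published results:
print(workload_efficiency(120.0, 30.0))  # 4.0 fps per package watt
```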
Fast and Fluent Platforms for Edge AI
All this adds up to a fast and fluent edge AI framework that can cost-effectively deliver right-size performance for edge AI infrastructure and applications running on Intel silicon, performance that will be extended with the availability of Intel® Core™ Ultra Series 3 processors. For more information about the performance of those processors, see Panther Lake by the Numbers.