Open Edge Platform Optimized for Running Edge AI on Panther Lake #10
stevenhoenisch started this conversation in General
Open Edge Platform, including nearly all of its infrastructure, libraries, suites, microservices, and applications, has been tested, tuned, and optimized for performance on Panther Lake, an AI PC platform built on the advanced semiconductor process known as 18A. With Open Edge Platform now validated on Intel® Core™ Ultra series 3 processors, code-named Panther Lake, its AI suites, libraries, tools, and frameworks deliver a unified silicon, platform, and software stack with cost-effective performance for AI workloads at scale.
This matters. Here's why:
The Intel Robotics AI software suite lets you apply your own advanced AI capabilities to rapidly develop cost-effective robots that use Panther Lake for both control and AI-based perception.
Intel 18A is the first 2-nanometer-class node developed and manufactured in the United States, delivering up to 15% better performance per watt and 30% improved chip density compared to Intel 3 -- cost-effective performance that you can tap and benchmark for your edge AI workloads. See Panther Lake by the Numbers.
Panther Lake delivers the KPIs that matter for edge designs: compute performance and integrated AI acceleration, along with edge capabilities and reliability, all within existing form factors, environments, and power envelopes.
You can use Open Edge Platform to benchmark the performance of your own edge AI workloads on Intel® Core™ Ultra series 3 processors.
Edge AI Suites, Edge AI Libraries, Edge Manageability Framework, Edge Microvisor Toolkit, and OS Image Composer are now fully functional and optimized for performance on Panther Lake. The validated suites and libraries cover key areas such as the Manufacturing AI Suite, the Metro AI Suite, and video search and summarization with generative AI.
🎢 And you can use Open Edge Platform to validate performance for yourself with, for example, the loitering detection application that's part of the Metro AI Suite.
The loitering detection app uses AI algorithms to monitor and analyze real-time video feeds in an area like the back of a retail store. With pre-trained deep learning models, the app processes video streams in real time and analyzes them to detect loitering, presenting the results with insightful data visualization. The architecture facilitates seamless integration and operation of the components involved in AI-driven video analytics, and the prebuilt scripts and configuration files simplify configuration and operation.
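To make the detection logic concrete, here is a minimal, illustrative sketch of a dwell-time loitering heuristic: an object is flagged once it has stayed continuously inside a watched zone longer than a threshold. This is not the Metro AI Suite's implementation; the class name, threshold, and `update` signature are assumptions for illustration, and in a real deployment the per-frame detections and track IDs would come from the pre-trained deep learning models.

```python
from dataclasses import dataclass, field

LOITER_SECONDS = 30.0  # assumed dwell-time threshold, not the suite's default


@dataclass
class DwellTracker:
    """Hypothetical loitering heuristic: flag a tracked object whose
    continuous dwell time inside a watched zone exceeds a threshold."""

    first_seen: dict = field(default_factory=dict)  # track_id -> entry time (s)

    def update(self, track_id: str, in_zone: bool, timestamp: float) -> bool:
        """Feed one observation; return True if the object is loitering."""
        if not in_zone:
            # Object left the zone, so its dwell clock resets.
            self.first_seen.pop(track_id, None)
            return False
        start = self.first_seen.setdefault(track_id, timestamp)
        return (timestamp - start) >= LOITER_SECONDS


tracker = DwellTracker()
tracker.update("person-1", True, 0.0)            # enters the zone
print(tracker.update("person-1", True, 31.0))    # True: dwelled > 30 s
```

The per-track reset on zone exit is the key design choice: only continuous presence counts, so a shopper who briefly passes through the area is never flagged.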
Once you set up the loitering detection app, you can use the benchmarking scripts provided with it to run performance benchmarks for vision AI applications. The scripts determine the maximum number of concurrent video streams a system can process (stream density) while maintaining a target performance level. The core of the benchmarking process is the benchmark_start.sh script, located in the metro-vision-ai-app-recipe directory. It automates starting video streams, monitoring their performance in frames per second (FPS), and calculating key performance indicators (KPIs) to find the maximum sustainable stream density. For complete instructions, see benchmarking performance with the loitering detection app in the Metro AI Suite, and find out how Panther Lake satisfies your own numbers for cost-effective edge AI performance.
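The stream-density search the script performs can be sketched in a few lines: add streams one at a time, measure per-stream FPS, and keep the largest count that still meets the target. This is a conceptual sketch only, not the benchmark_start.sh implementation; the function names and the fixed-capacity FPS model are assumptions, and a real run would measure FPS from the live pipeline.

```python
def measure_fps(num_streams: int, capacity: float = 120.0) -> float:
    """Stand-in for a real measurement: assume the pipeline's total
    throughput (capacity, frames/sec) is shared evenly across streams."""
    return capacity / num_streams


def max_stream_density(target_fps: float, measure=measure_fps, limit: int = 64) -> int:
    """Increase concurrent streams until per-stream FPS drops below the
    target; return the largest count that still meets it (stream density)."""
    best = 0
    for n in range(1, limit + 1):
        if measure(n) >= target_fps:
            best = n
        else:
            break  # adding more streams only lowers per-stream FPS
    return best


print(max_stream_density(target_fps=30.0))  # capacity 120 fps -> 4 streams
```

The monotonic assumption (more streams never raises per-stream FPS) is what makes the early `break` valid; the real script's KPI calculation rests on the same idea.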