Robotics AI Suite Races Forward on Panther Lake and Bartlett Lake
As edge workloads rapidly evolve to include AI models and robotics capabilities for sensing conditions, interpreting them, and acting on that data, edge applications and their underlying processors must remain predictable and performant at scale.
Integrated with Panther Lake and Bartlett Lake processors, the Robotics AI Suite and its frameworks, toolkits, and libraries let you select the right building blocks to compose a unified stack. That stack delivers the compute performance and integrated AI acceleration to fulfill your robotics use cases, integrate with existing workflows, and fit within existing form factors, environments, and power envelopes.
The suite's reference applications, for instance, show how you can apply capabilities like perception, locomotion, manipulation, and imitation learning. Vision Language Models are optimized to accelerate AI inference on video and text inputs for context, and streaming media analytics pipelines gather inputs from sensors and multiple cameras for spatial intelligence. Advanced AI algorithms provide object detection and motion and task planning.
Workflows and Capabilities for Reference Pipelines
More specifically, release 2026.0 of the Robotics AI Suite contains the following collections, which group workflows and capabilities for its robot reference pipelines:
Autonomous Mobile Robots (AMRs) to navigate, wander, and operate independently in dynamic environments like warehouses and factories, for example to recognize text on boxes and route them to the right place.
Humanoid Imitation Learning sample pipeline to perform interactive or assistive tasks with increasing efficiency while avoiding physical obstacles.
Stationary Robot Vision and Control to enable fixed-position robots to use vision systems for tasks like inspection, assembly, or quality control.
Each of these collections provides the following resources and capabilities, from software development kits (SDKs) to microservices, all curated to help independent software vendors (ISVs), system integrators, and robotics developers build better, smarter robotics solutions faster:
Libraries for core robotics workloads and control recipes, such as a Real-Time Linux OS and an EtherCAT master stack.
Integration with ROS 2, supported sensor profiles, and benchmarking tools.
Hardware acceleration on Intel® CPUs, integrated GPUs, and NPUs for faster inference.
OpenVINO™-optimized models for computer vision, large language models (LLMs), and vision-language-action (VLA) models, aspects of which are shown in the LLM Robotics Demo.
The OpenVINO™ toolkit, for example, empowers you to optimize and scale AI workloads across CPUs, GPUs, and NPUs to maximize performance, portability, and predictability. OpenVINO supports more than 900 models, including both traditional computer vision models and the newer generative AI models propelling VLMs and VLAs into robotics applications.
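To make that portability concrete, here is a minimal sketch of compiling one model for different targets with the OpenVINO Python API. The "model.xml" path is a placeholder for any OpenVINO IR model you have converted, device availability depends on your hardware, and the sketch assumes a static input shape.

```python
# Minimal OpenVINO sketch: load a model once, then retarget it by changing
# the device string. "model.xml" is a placeholder for any converted IR model.
import numpy as np
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")          # placeholder IR model path
compiled = core.compile_model(model, "AUTO")  # or "CPU", "GPU", "NPU"

# Run one inference; assumes the model has a static input shape.
input_tensor = np.zeros(tuple(compiled.input(0).shape), dtype=np.float32)
result = compiled(input_tensor)[compiled.output(0)]
print("Output shape:", result.shape)
```

Switching from "AUTO" to an explicit device string is all it takes to move the same model between the CPU, integrated GPU, and NPU on a single SoC.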
In addition, software from the Edge AI Libraries complements OpenVINO and the rest of the robotics software stack: Geti™ software for model training and data annotation, Intel® SceneScape for spatial intelligence and cross-camera tracking, DL Streamer for video analytics pipelines, and Anomalib for manufacturing anomaly detection.
Plan and Plot Your Robotics Path Faster
The Robotics AI Suite also includes plugin-ready accelerated libraries for critical robotics workloads:
FastMapping: A ROS project to construct and maintain a volumetric map from a moving RGB-D camera.
ITS Planner, the Intelligent Sampling and Two-Way Search (ITS) path planner, is a global path planner module for ROS 2 Navigation. The plugin is designed for efficient path planning using either Probabilistic Road Map (PRM) or Deterministic Road Map (DRM) approaches.
ADBSCAN, for Adaptive Density-Based Spatial Clustering of Applications with Noise, is an advanced unsupervised clustering algorithm that groups high-dimensional points by their distribution density. It improves on the classic DBSCAN algorithm by adapting the clustering parameters to range, making it especially suitable for processing LiDAR data. ADBSCAN is an Intel-patented algorithm that improves object detection range by 20-30% on average.
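ADBSCAN itself is Intel-patented, but the adaptive-parameter idea behind it can be illustrated with standard tools: because LiDAR returns grow sparser with distance, a fixed DBSCAN neighborhood radius over-segments far objects, so the radius should widen with range. The sketch below runs scikit-learn's DBSCAN per range band with a growing eps; the band edges and eps values are illustrative assumptions, not Intel's implementation.

```python
# Illustrative range-adaptive density clustering (NOT Intel's ADBSCAN):
# cluster LiDAR points in range bands, widening eps with distance because
# returns thin out far from the sensor. All thresholds are made-up examples.
import numpy as np
from sklearn.cluster import DBSCAN

def adaptive_cluster(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) array of x, y, z LiDAR returns. Returns cluster labels."""
    ranges = np.linalg.norm(points[:, :2], axis=1)
    labels = np.full(len(points), -1)
    next_label = 0
    # (range_min, range_max, eps): eps grows with distance from the sensor.
    for rmin, rmax, eps in [(0, 10, 0.3), (10, 25, 0.6), (25, 60, 1.2)]:
        mask = (ranges >= rmin) & (ranges < rmax)
        if mask.sum() < 5:
            continue
        band = DBSCAN(eps=eps, min_samples=5).fit_predict(points[mask])
        band[band >= 0] += next_label  # keep labels unique across bands
        labels[mask] = band
        next_label = labels.max() + 1
    return labels
```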
All these capabilities come together to form an optimized platform for evaluating, adapting, and scaling robotics solutions in a way that minimizes risk while helping you to codify your own performance and efficiency requirements into a repeatable formula and map those requirements to CPUs, GPUs, and NPUs.
Running a Real-Time Control System and AI Perception on a Single SoC
The advanced capabilities of the Robotics AI Suite from Intel let you apply your own advanced AI capabilities to rapidly develop cost-effective yet performant robotics on Panther Lake or Bartlett Lake, with a single processor handling both main parts of a robotics system, as the sketch after this list illustrates:
The real-time control system that moves joints and ensures safe, precise timing.
The vision and AI-based perception that processes information about the robot's physical environment and makes decisions about how to move around in it.
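A minimal sketch of that split follows, assuming a 1 kHz control loop and a roughly 30 Hz perception loop that share a target value. The rates, names, and shared-state scheme are all illustrative; a production system would run the control loop on isolated cores under a real-time kernel rather than in plain Python threads.

```python
# Illustrative split of the two robotics workloads on one SoC: a fast,
# fixed-rate control loop and a slower perception loop that feeds it targets.
# Rates and the shared-state scheme are illustrative assumptions.
import threading
import time

latest_target = [0.0]  # perception writes, control reads
lock = threading.Lock()

def control_loop(period_s: float = 0.001):
    """1 kHz control loop: read the latest perception target, command joints."""
    next_tick = time.monotonic()
    while True:
        with lock:
            target = latest_target[0]
        # ... compute and send joint commands toward `target` here ...
        next_tick += period_s
        time.sleep(max(0.0, next_tick - time.monotonic()))

def perception_loop(period_s: float = 0.033):
    """~30 Hz perception loop: run inference, publish a new target."""
    while True:
        # ... run an OpenVINO-compiled vision model on the latest frame ...
        with lock:
            latest_target[0] += 0.0  # placeholder target update
        time.sleep(period_s)

threading.Thread(target=control_loop, daemon=True).start()
threading.Thread(target=perception_loop, daemon=True).start()
time.sleep(1.0)  # let the loops run briefly in this sketch
```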
The latest processors from Intel can typically handle both the real-time control capability and the AI-based perception capability on one processor, making robotics systems simpler, faster, and cheaper to build. Other advances from Intel, such as Time Coordinated Computing (TCC), further optimize performance by prioritizing access to cache, memory, and networking resources to support high availability for real-time robotics systems across distributed edge deployments.
The Stationary Robot Vision and Control collection in the Robotics AI Suite includes capabilities that deliver this efficiency gain, such as the Intel Labs Histodepth Pointcloud Segmentation algorithm in a Virtual Fence application. Running on ROS, the application uses depth information from an Intel RealSense camera to create dynamic and static scene segmentation maps for live robotic virtual fencing and safety bounding. The result? A drop-in approach to virtual fencing that requires no training or learning before deployment.
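For illustration only (this is not the Histodepth algorithm), the following sketch shows the basic shape of a depth-based virtual fence with the pyrealsense2 API: count depth pixels that fall inside a protected distance band and signal a stop when the count crosses a threshold. The zone bounds and pixel threshold are assumptions, and the sketch requires a connected RealSense device.

```python
# Minimal virtual-fence sketch (NOT the Histodepth Pointcloud Segmentation
# algorithm): flag an intrusion when enough depth pixels fall inside a
# protected distance band. Thresholds are illustrative assumptions.
import numpy as np
import pyrealsense2 as rs

FENCE_NEAR_M, FENCE_FAR_M = 0.3, 1.0  # protected zone: 0.3 m to 1.0 m
PIXEL_THRESHOLD = 2000                # in-zone pixel count that triggers a stop

pipeline = rs.pipeline()
pipeline.start()  # default config enables the depth stream
try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Convert raw depth units to meters; zeros mean "no reading".
        img = np.asanyarray(depth.get_data()) * depth.get_units()
        in_zone = np.count_nonzero((img > FENCE_NEAR_M) & (img < FENCE_FAR_M))
        if in_zone > PIXEL_THRESHOLD:
            print("Fence breached: issue a stop command")  # hook e-stop here
finally:
    pipeline.stop()
```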
Indeed, as robotics systems evolve with Vision Language Models (VLMs) and Vision Language Action Models (VLAs), AI-powered robotics can meld visual perception with contextual reasoning. Such models can analyze and react to not only objects but entire scenes. The models can process what's taking place in a video stream, for instance, with enough contextual awareness to generate control actions for robots in unpredictable real-world environments such as the industrial edge, and with the low latency that robotics applications require, sometimes including sub-millisecond response times for control loops.
The Dynamic Vision Use Case in the Robotics AI Suite is a case in point: it demonstrates how a robot actively tracks a group of objects in real time as they move unpredictably through three-dimensional space, then picks them up and accurately places them where they belong.
Optimizing Robotics Performance and Cost with Intel® Core™ Processors
The diverse combinations of Intel® Core™ Series 2 processors and Intel® Core™ Ultra Series 3 processors make it possible to optimize performance for real-world use cases that demand cost-effective precision and performance, with form factors to match various robotics deployments.
The Intel® Core™ Series 2 processor with P-cores for the edge strengthens the deterministic, real-time foundation that industrial edge deployments depend on. With up to 12 P-cores, up to 1.5x higher multi-threaded performance over the prior generation, and 10-year availability with long-term OS support, this is the platform that keeps factories and automated systems running with consistency.
Intel® Time Coordinated Computing (TCC) and Time Sensitive Networking (TSN) technologies deliver deterministic execution that is essential for industrial control, robotics, and automation. Intel® Core™ Series 2 processors deliver up to 2.5x more deterministic scheduling behavior and up to 3.8x better predictable performance under load versus competitive alternatives.
When you need precision and AI acceleration, the Intel® Core™ Ultra Series 3 processor for the edge brings the same real-time and deterministic capabilities with integrated AI acceleration for almost 180 TOPS in a single SoC. This CPU-NPU-GPU combination provides power efficiency for inferencing and high-throughput performance for AI and video analytics to make VLM and VLA workloads cost effective at the edge.
Benchmarking Tools to Evaluate Performance
But with robotics, especially at the edge, workloads beyond vision models and vision processing carry their own requirements, and those combined demands make flexibility and interoperability key factors. Different use cases, environments, data types, model architectures, existing systems, and integration requirements place complex demands on putting an AI-driven robotics system into production. For each unique workload, you must be able to minimize risk and complexity while maximizing scale and streamlining operations. That's where the benchmarking tools of the Robotics AI Suite come into play:
VTune™ Profiler is a performance analysis tool for applications and systems. It helps you analyze and optimize system performance and configuration, it can profile code running on a CPU, GPU, or FPGA, and it handles both single-threaded and multi-threaded applications.
The ROS 2 KPI Monitoring tool lets you monitor, analyze, and visualize the following Key Performance Indicators in ROS 2 systems: node latencies, CPU and memory usage, message flow, and thread-level resource distribution. A minimal latency-measurement sketch follows this list.
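As a minimal illustration of one such KPI (this is not the KPI tool itself), the rclpy node below measures per-message latency as the difference between receive time and the sender's header stamp. The /scan topic and LaserScan message type are assumptions chosen for the example.

```python
# Minimal ROS 2 latency sketch: receive time minus header stamp per message.
# The topic name "/scan" and LaserScan type are illustrative assumptions.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from sensor_msgs.msg import LaserScan

class LatencyMonitor(Node):
    def __init__(self):
        super().__init__("latency_monitor")
        self.sub = self.create_subscription(LaserScan, "/scan", self.on_msg, 10)

    def on_msg(self, msg: LaserScan):
        # Latency = node's current time minus the publisher's header stamp.
        latency = self.get_clock().now() - Time.from_msg(msg.header.stamp)
        self.get_logger().info(f"/scan latency: {latency.nanoseconds / 1e6:.2f} ms")

def main():
    rclpy.init()
    rclpy.spin(LatencyMonitor())

if __name__ == "__main__":
    main()
```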
Partnering to Accelerate Physical AI with Intel
The Robotics AI Suite forms part of an open ecosystem from Intel that strives to cultivate broad robotics partnerships with ODMs, OEMs, and ISVs to support robotics on x86. Global hardware and software partner integrations provide alternatives for deploying robotics solutions that deliver long-term affordability and scalability, and a legacy of expertise to build robotics on.
Partner contributions to the Robotics AI Suite include integrations for multi-camera and sensor inputs that enable precise perception and object detection. Software extends functionality for the ROS 2 framework to adapt locomotion. Hardware builders partnering with Intel integrate best-known configurations for AI systems qualified on Intel silicon to power the next generation of robotics.
Find Out More
The Robotics AI Suite is updated with quarterly releases on GitHub to give you a foundational toolkit with a predictable, performant, and scalable path for developing robotics solutions. Reusing tested code for core robotics functions such as perception or motion planning shortens development cycles, and the suite's open solutions ease integration and minimize complexity and risk. To find out more, see the following resources:
Release Notes: What’s New in the 2026.0 release of the Robotics AI Suite.
User guide, tutorials, and documentation for the 2026.0 release.
The Robotics AI Suite on Builders.Intel.com and on GitHub.
Notices & Disclaimers
Performance varies by use, configuration, and other factors. Learn more on the Performance Index site. See https://edc.intel.com/content/www/us/en/products/performance/benchmarks/. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.