# Model Tutorials

The OpenVINO™ toolkit supports most TensorFlow and PyTorch models. The following table lists deep-learning models commonly used in Embodied Intelligence solutions, along with information on how to run them on Intel® platforms:

| Algorithm | Description | Link |
|---------------|----------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| YOLOv8 | CNN-based object detection | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/yolov8-optimization |
| YOLOv12 | CNN-based object detection | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/yolov12-optimization |
| MobileNetV2 | CNN-based object detection | https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v2-1.0-224 |
| SAM | Transformer-based segmentation | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/segment-anything |
| SAM2          | Extends SAM to video segmentation and object tracking with cross-attention to memory | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/sam2-image-segmentation |
| FastSAM       | Lightweight substitute for SAM | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/fast-segment-anything |
| MobileSAM     | Lightweight substitute for SAM (same model architecture as SAM; see the OpenVINO SAM tutorial for model export and application) | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/segment-anything |
| U-NET | CNN-based segmentation and diffusion model | https://community.intel.com/t5/Blogs/Products-and-Solutions/Healthcare/Optimizing-Brain-Tumor-Segmentation-BTS-U-Net-model-using-Intel/post/1399037?wapkw=U-Net |
| DETR | Transformer-based object detection | https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/detr-resnet50 |
| GroundingDino | Transformer-based object detection | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/grounded-segment-anything |
| CLIP | Transformer-based image classification | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/clip-zero-shot-image-classification |
| Qwen2.5VL | Multimodal large language model | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/qwen2.5-vl |
| Whisper | Automatic speech recognition | https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/whisper-asr-genai |
| FunASR        | Automatic speech recognition | See the FunASR Setup section (funasr-setup) in the LLM Robotics sample pipeline |


> **Attention:**
When following these tutorials for model conversion, ensure that the OpenVINO toolkit version used for model conversion is the same as the runtime version used for inference. Otherwise, unexpected errors may occur, especially if the model is converted using a newer version and the runtime is an older version. See details in the [Troubleshooting](../troubleshooting.rst) section.

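As a quick sanity check before deployment, the two versions can be compared. The snippet below is a minimal sketch, not part of the toolkit; the version strings are example values that you would obtain from `pip show openvino` (or `python3 -c "import openvino as ov; print(ov.get_version())"`) in the conversion and inference environments respectively.

```bash
# Example values; read the real ones from each environment as noted above.
CONVERT_VER="2024.3.0"
RUNTIME_VER="2024.3.0"

# Compare the major.minor components only.
if [ "${CONVERT_VER%.*}" = "${RUNTIME_VER%.*}" ]; then
    echo "versions compatible"
else
    echo "version mismatch: re-convert the model with OpenVINO ${RUNTIME_VER}"
fi
```
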
The following pages provide information on models for imitation learning, grasp generation, simultaneous localization and mapping (SLAM), and bird's-eye view (BEV) perception:

> **Note:**
Before using these models, read the [AI Content Disclaimer](../../legal.rst).

:::{toctree}
:maxdepth: 1

model_tutorials/model_act
model_tutorials/model_cns
model_tutorials/model_dp
model_tutorials/model_idp3
model_tutorials/model_superpoint
model_tutorials/model_lightglue
model_tutorials/model_fastbev
model_tutorials/model_depthanythingv2
model_tutorials/model_rdt
:::

robotics-ai-suite/docs/embodied/fragment_packages-jammy.rst

Packages
########

.. list-table::
:widths: 20 40 50
:header-rows: 1

* - Component Group
- Package
- Description
* - :ref:`Linux Board Support Package (BSP) <linuxbsp>`
- | linux-intel-rt-experimental
| linux-intel-experimental
- Linux LTS real-time kernel (preempt-rt) optimized for Intel® platforms (version 6.12) and generic kernel (version 6.12.8)
* - `Linux Runtime Optimization <https://eci.intel.com/docs/3.3/appendix.html#eci-kernel-boot-optimizations>`__
- customizations-grub
- Linux environment for Edge Controls for Industrial (ECI) and Intel-customized GRUB
* - `Linux firmware <https://eci.intel.com/docs/3.3/development/tutorials/enable-graphics.html>`__
- linux-firmware
- Linux firmware with Ultra iGPU firmware
| ighethercat-dpdk-examples
| ecat-enablekit
| ecat-enablekit-dpdk
- Optimized open-source IgH EtherCAT Master Stack for kernel space or user space
* - `Motion Control Gateway <https://eci.intel.com/docs/3.3/development/tutorials/enable-ros2-motion-ctrl-gw.html>`__
- | rt-data-agent
| ros-humble-agvm
| ros-humble-jaka-moveit-py
| ros-humble-run-jaka-moveit
| ros-humble-run-jaka-plc
- The Industrial Motion-Control ROS2 Gateway is the communication bridge between the DDS and RSTP wire-protocol ROS2 implementation, and the Motion Control (MC) IEC-61131-3 standard Intel implementation.
* - :doc:`VSLAM: ORB-SLAM3 <sample_pipelines/ORB_VSLAM>`
- | libpangolin
| liborb-slam3
| liborb-slam3-dev
| orb-slam3
- Visual SLAM demo pipeline based on ORB-SLAM3. See :doc:`VSLAM: ORB-SLAM3 <sample_pipelines/ORB_VSLAM>` for installation and launch tutorials.
* - `RealSense Camera <https://wiki.ros.org/RealSense>`__
- | librealsense2
| librealsense2-dev
| ros-humble-realsense2-camera
| ros-humble-realsense2-camera-msgs
| ros-humble-realsense2-description
- RealSense camera driver and tools.
* - :doc:`Imitation Learning - ACT <sample_pipelines/imitation_learning_act>`
- | act-ov
- Action Chunking with Transformers (ACT), a method that trains a generative model to understand and predict action sequences.
* - :doc:`Robotics Diffusion Transformer (RDT) <sample_pipelines/robotics_diffusion_transformer>`
- | rdt-ov
- Robotics Diffusion Transformer (RDT), the largest bimanual manipulation foundation model with strong generalizability.

# Install Intel® RealSense™ SDK

Intel® RealSense™ SDK 2.0 is a cross-platform library for Intel® RealSense™ depth cameras.
The SDK allows depth and color streaming, and provides intrinsic and extrinsic calibration information.
The library also offers synthetic streams (point cloud, depth aligned to color and vice versa), and built-in
support for recording and playback of streaming sessions.

Intel® RealSense™ SDK 2.0 supports Robot Operating System (ROS) and ROS 2, allowing you to access commonly used robotic functionality with ease.

ROS is a set of open-source software libraries and tools that help you build robot applications. For more information, see https://www.ros.org/.

## Installation

1. Register the server's public key:

```bash
sudo mkdir -p /etc/apt/keyrings
curl -sSf https://librealsense.intel.com/Debian/librealsense.pgp | sudo tee /etc/apt/keyrings/librealsense.pgp > /dev/null
```

2. Ensure that apt HTTPS support is installed:

```bash
sudo apt-get install apt-transport-https
```

3. Add the server to the list of repositories:

```bash
echo "deb [signed-by=/etc/apt/keyrings/librealsense.pgp] https://librealsense.intel.com/Debian/apt-repo `lsb_release -cs` main" | \
sudo tee /etc/apt/sources.list.d/librealsense.list
```

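To double-check the entry before writing it, you can preview the repository line. This is an optional sketch, not part of the official steps; it assumes Ubuntu 22.04 (codename `jammy`), which is what `lsb_release -cs` would print there.

```bash
# Preview the repository line before writing it to sources.list.d.
CODENAME="jammy"   # normally taken from: lsb_release -cs
REPO_LINE="deb [signed-by=/etc/apt/keyrings/librealsense.pgp] https://librealsense.intel.com/Debian/apt-repo ${CODENAME} main"
echo "${REPO_LINE}"
```
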
4. Update your apt repository caches after setting up the repositories:

```bash
sudo apt update
```

5. Install the RealSense drivers and libraries:

```bash
sudo apt install librealsense2-dkms
sudo apt install librealsense2=2.55.1-0~realsense.12474
```

> **Attention:**
The command above pins the `librealsense2` package to a specific version; therefore, you need to install the dependent packages, for example `librealsense2-utils`, `librealsense2-dev`, and `librealsense2-gl`, at the same version.

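Because the version is pinned, the dependent packages must carry the same version string. A minimal sketch is shown below; the loop only prints the commands (drop the `echo` to execute them), and the version string is the example from the step above — adjust it to the version you actually installed.

```bash
# Print apt commands that pin the dependent packages to the same
# version as librealsense2 (printed only; drop 'echo' to run them).
VER="2.55.1-0~realsense.12474"
for pkg in librealsense2-utils librealsense2-dev librealsense2-gl; do
    echo "sudo apt install ${pkg}=${VER}"
done
```
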
6. (Optional) Install the ROS wrappers for Intel RealSense depth cameras:

::::{tab-set}
:::{tab-item} **Jazzy**
:sync: tab1

```bash
sudo apt install ros-jazzy-realsense2-camera
```

:::
:::{tab-item} **Humble**
:sync: tab2

```bash
sudo apt install ros-humble-realsense2-camera
```

:::
::::

7. (Optional) Install other tools or packages for Intel RealSense depth cameras:

See [the installation guide](https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md)
to install librealsense packages and other tools from the Intel® RealSense™ depth camera sources.

## Troubleshooting

### Errors for ``librealsense2-dkms`` installation

If errors occur during the `librealsense2-dkms` package installation, you have the following options to fix them:

- Install the librealsense SDK by using the original Linux drivers.

  Errors during installation of the `librealsense2-dkms` package are probably caused by a mismatched kernel version. You can try the following workaround:

```bash

sudo rm -rf /var/lib/dpkg/info/librealsense2-dkms*
sudo apt install librealsense2-dkms
```

- If the option above doesn't work, build and install from the source code.

  Follow the steps in [the installation documentation](https://github.com/IntelRealSense/librealsense/blob/development/doc/installation.md) to download the librealsense source code and build it.

### Errors for unmet dependencies

If you encounter unmet dependencies during the installation of ROS wrappers for Intel RealSense depth cameras, for example:

```shell

The following packages have unmet dependencies:
ros-humble-librealsense2-tools : Depends: ros-humble-librealsense2 (= 2.55.1-1eci9) but 2.55.1-1jammy.20241125.233100 is to be installed
E: Unable to correct problems, you have held broken packages.
```

This issue is probably caused by mismatched versions of the ROS wrapper and the librealsense2 package.
You can fix it by pinning the dependent package to the matching version. For the example given above:

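The required version can be read directly from the error message itself. The sketch below uses the example error from this section; the `sed` pattern is an assumption about the apt message format, not an official tool.

```bash
# Extract the required version from the apt dependency error line.
ERR='ros-humble-librealsense2-tools : Depends: ros-humble-librealsense2 (= 2.55.1-1eci9) but 2.55.1-1jammy.20241125.233100 is to be installed'
REQ=$(printf '%s\n' "$ERR" | sed -n 's/.*(= \([^)]*\)).*/\1/p')
echo "pin dependent packages to: ${REQ}"
```
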
::::{tab-set}
:::{tab-item} **Jazzy**
:sync: tab1

```bash
sudo apt install ros-jazzy-librealsense2=2.55.1-1eci9
sudo apt install ros-jazzy-librealsense2-tools=2.55.1-1eci9
```

:::
:::{tab-item} **Humble**
:sync: tab2

```bash
sudo apt install ros-humble-librealsense2=2.55.1-1eci9
sudo apt install ros-humble-librealsense2-tools=2.55.1-1eci9
```

:::
::::