Changes from 10 commits
1 change: 0 additions & 1 deletion robotics-ai-suite/docs/rvc

This file was deleted.

@@ -15,7 +15,7 @@ RVC Framework is composed by

High level design:

-.. image:: images/html/RVC.png
+.. image:: ../images/html/RVC.png

.. toctree::
:maxdepth: 1
@@ -6,7 +6,7 @@ Control

.. _High Level Design:

-.. image:: images/html/RVCControl.png
+.. image:: ../../images/html/RVCControl.png
:alt: High Level Design

The above :ref:`High Level Design <High Level Design>` diagram shows the communication between
@@ -12,7 +12,7 @@ This approach uses a standard camera RGB stream together with the camera calibrat
With Computer Vision methods, the camera stream is scanned for given objects (by sample image).
A detection is in the form of a 2D bounding box with an additional angle defining the rotation of the object.
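As an illustration only (not part of the RVC codebase), such a rotated 2D bounding box can be expanded into its four corner points with a small rotation-matrix sketch:

```python
import math

def rotated_bbox_corners(cx, cy, w, h, angle_rad):
    """Return the 4 corners of a rotated 2D bounding box.

    (cx, cy) is the box center, (w, h) its size, and angle_rad the
    counter-clockwise rotation of the box around its center.
    """
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        # Rotate each corner offset around the center, then translate.
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners
```

With `angle_rad = 0` this reduces to an ordinary axis-aligned bounding box.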

-.. image:: ../../../images/html/rotatedBB.png
+.. image:: ./../../../../images/html/rotatedBB.png


.. note::
@@ -31,7 +31,7 @@ Pose Projection
Additionally, this component is capable of projecting a finding from the flat RGB input image into a 3D object pose.
Therefore, the algorithm projects the object found in the 2D image onto a plane at the given distance and assumes that the object from the image is located on this plane.

-.. image:: ../../../images/html/PoseProjection.png
+.. image:: ../../../../../images/html/PoseProjection.png

The above illustration shows the setup and how the camera, the plane, and the object relate to each other.
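A minimal sketch of this kind of plane projection, assuming a standard pinhole camera model with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`) and a plane parallel to the image plane — not the actual RVC implementation:

```python
def project_to_plane(u, v, fx, fy, cx, cy, plane_z):
    """Back-project a pixel (u, v) onto a plane at depth plane_z.

    fx, fy are the focal lengths in pixels and (cx, cy) is the
    principal point; the plane is assumed parallel to the image plane,
    so the resulting 3D point shares the plane's depth.
    """
    x = (u - cx) * plane_z / fx
    y = (v - cy) * plane_z / fy
    return (x, y, plane_z)
```

For example, a detection at the principal point maps to a 3D point straight ahead of the camera at the plane's distance.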

@@ -40,7 +40,7 @@ This will be the input for the :ref:`Control<rvc_control>` components to define

Annotated image
^^^^^^^^^^^^^^^
-.. image:: ../../../images/html/2.5DAnnotatedImage.png
+.. image:: ../../../../../images/html/2.5DAnnotatedImage.png

The ``rotated_object_detection`` node can produce an annotated image as output. This is useful to visually inspect the detections.

@@ -24,7 +24,7 @@ The components of this container are:

.. _vision_container_high_level_diagram:

-.. image:: images/html/RVCVisionHighLevel.png
+.. image:: ../../../images/html/RVCVisionHighLevel.png
:alt: Vision container high level diagram


@@ -4,7 +4,7 @@
Stationary Robot Vision & Control
######################################

-.. image:: images/html/robotic-arm-graphic.png
+.. image:: ../images/html/robotic-arm-graphic.png

Robotics Pick and Place in Industrial Fields
============================================
@@ -54,7 +54,7 @@ Robot Vision and Control aims at tackling the problematics and offers a flexible
Robot Vision and Control is a robotic software framework aimed at tackling pick-and-place
and track-and-place industrial problems.

-.. image:: images/html/RobotBackground.png
+.. image:: ../images/html/RobotBackground.png


Stationary Robot Vision & Control Resources
@@ -150,13 +150,13 @@ the two, here is the conversion:



-.. image:: images/html/convertWaypoint.png
+.. image:: ../../../images/html/convertWaypoint.png
:alt: UR External Control

1. Ensure that the drop-down ``Feature`` is set to ``base``
2. Ensure that the TCP offset takes into account how far away the gripper picking position is (in this case, the closed fingertips of our gripper are 17.5 cm from the end effector of the UR5e)

-.. image:: images/html/TCPOffset.png
+.. image:: ../../../images/html/TCPOffset.png
:alt: UR External Control


@@ -119,7 +119,7 @@ Configure the URCaps for the robot and the Robotiq 2F-85 URCap. For details, ref

After installing `external_control.urcap`, the screen shown in the following figure will be displayed.

-.. image:: images/html/URExternalControl.png
+.. image:: ../../../images/html/URExternalControl.png
:alt: UR External Control

.. note::
@@ -142,7 +142,7 @@ Install these URCaps on the UR5e robot teach pendant using a USB drive.

Restart the robot. Select **Program Robot** on the Welcome screen. Go to the **Installation** tab. Select **Gripper** listed under **URCaps**.

-.. image:: images/html/URRobotiqGripper.png
+.. image:: ../../../images/html/URRobotiqGripper.png
:alt: UR Robotiq Gripper urcap


@@ -199,7 +199,7 @@ Create Program

To use the new URCaps and enable communication with the Intel® architecture RVC controller, create a new program on the teach pendant and insert the **External Control** program node into the program tree.

-.. image:: images/html//URCreateProgram.png
+.. image:: ../../../images/html//URCreateProgram.png
:alt: Create Program

.. note::
@@ -237,5 +237,5 @@ correctly homed before initiating any automated behavior.
| Wrist 3 | 0° |
+----------+---------------+

-.. image:: images/html/sethomeposition.png
+.. image:: ../../../images/html/sethomeposition.png
:alt: setting home position
@@ -49,18 +49,18 @@ Here is the step-by-step procedure:
- Create the stl file to 3D print the object via `FreeCAD <https://www.freecad.org/>`_ or similar
- Import the stl file via `Blender <https://www.blender.org/>`_

-.. image:: images/html/importSTL.png
+.. image:: ../../../images/html/importSTL.png
:alt: Import STL Blender menu


- Edit the object so that its metrics match the RealSense camera metrics: units are in meters, and the center of the object is at the Blender origin, parallel to the axes where applicable. In short, perform scaling, rotation, and translation operations so that the dimensions match the RealSense camera and the rototranslation from the Blender origin matches the desired outcome. For example, in the following image, the imported STL has been scaled down so the side of the cube is 5 cm (0.05 m), and translated down the Z axis by 0.025 m, so the center of the cube is at 0,0,0. No rotation was needed, as the cube was already parallel to the absolute reference system.

-.. image:: images/html/editObject.png
+.. image:: ../../../images/html/editObject.png
:alt: Transform object by scaling, rotating and translating with Blender

- Export the object in WaveFront format (.obj) as shown in the picture

-.. image:: images/html/exportToObj.png
+.. image:: ../../../images/html/exportToObj.png
:alt: Blender menu to export selected object to WaveFront format


@@ -69,7 +69,7 @@ Here is the step-by-step procedure:
Note: Important consideration: The RVC Pose Detector will align this object PCD file to the input cloud from the RealSense camera. This means calculating how much every point of the object PCD is translated and rotated on top of the RealSense pointcloud relative to the original file location. For this to be meaningful, the object barycenter should be at the origin, simulating the center of the optical camera (where all the optical and depth information are translated to). In this way, the algorithm will determine how far away and how rotated the object is relative to the camera's optical lens. If the object is not centered at 0,0,0, this calculation would be wrong. See the following picture:


-.. image:: images/html/CenteredObject.png
+.. image:: ../../../images/html/CenteredObject.png
:alt: Vertices of a 0,0,0 centered object
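The centering requirement above can be checked programmatically. Here is a small sketch (pure Python, hypothetical helpers, not part of RVC) that computes the barycenter of a list of vertices and verifies it is near the origin:

```python
def barycenter(vertices):
    """Average of a list of (x, y, z) vertex coordinates."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def is_centered(vertices, tol=1e-6):
    """True if the object's barycenter lies within tol of the origin."""
    return all(abs(c) <= tol for c in barycenter(vertices))
```

Running such a check on the exported vertices before generating the PCD file catches the mis-centering case described above early.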


Expand All @@ -91,7 +91,7 @@ Verify that the PCD file has enough points using the pcl_viewer tool which comes
As shown in the following image:


-.. image:: images/html/pcl_viewer.png
+.. image:: ../../../images/html/pcl_viewer.png
:alt: PCD visualizer

rvc_use_case_binaries package creation
@@ -7,7 +7,7 @@ Rviz2 Plugin

An rviz2 plugin has been implemented to give full control of the use case in the same HMI:

-.. image:: images/html/RvizDynamicUseCase1.png
+.. image:: ../../../images/html/RvizDynamicUseCase1.png
:alt: RViz2 Control Panel Custom plugin

- Enable/Disable motion button
@@ -23,7 +23,7 @@ and orientation in space with a 2.5D algorithm and the robot picks it up accordi

The only configuration needed on the robot is to put the teach pendant in ``remote control`` mode, as shown in the following picture

-.. image:: images/html/setremotecontrol.png
+.. image:: ../../images/html/setremotecontrol.png
:alt: Setting pendant to Remote control


44 changes: 0 additions & 44 deletions robotics-ai-suite/robot-vision-control/docs/Makefile

This file was deleted.

77 changes: 1 addition & 76 deletions robotics-ai-suite/robot-vision-control/docs/README.md
@@ -1,78 +1,3 @@
# Robotic Vision & Control [RVC] Documentation

This directory contains the **Robotic Vision & Control [RVC]** system documentation, which is built from source using [Sphinx](https://www.sphinx-doc.org/). The following instructions will guide you through setting up the environment, installing dependencies, and building the HTML documentation.

---

## 1. Install System Dependencies

Before setting up the Python environment, ensure that essential system packages are installed. These packages include:

* `python3-pip` – for installing Python packages
* `graphviz` – for rendering diagrams in Sphinx
* `libenchant-2-dev` – required by spelling check extensions


```bash
sudo apt update
sudo apt install python3-pip
sudo apt install graphviz libenchant-2-dev
```

---

## 2. Set Up a Python Virtual Environment

Though not necessary, it is recommended to use a virtual environment to keep dependencies isolated.

```bash
# Navigate to the documentation folder
export DOCS_DIR=<path to edge-ai-suites folder>
cd $DOCS_DIR/edge-ai-suites/robotics-ai-suite/robot-vision-control/docs

# Create a new virtual environment
python3 -m venv venv_robotics-ai-suite-docs

# Activate the virtual environment
source venv_robotics-ai-suite-docs/bin/activate
```

---

## 3. Upgrade pip, setuptools, and wheel

Inside the virtual environment, upgrade core Python packaging tools. This ensures compatibility with modern packages.

```bash
pip install --upgrade pip setuptools wheel
```

---

## 4. Install Python Dependencies

With the virtual environment active, install all Python dependencies required to build the documentation:

```bash
pip install -r requirements.txt
```

If you later return in a new shell session, reactivate the virtual environment before building:

```bash
source venv_robotics-ai-suite-docs/bin/activate
```
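A quick optional sanity check (assuming a POSIX shell; the venv name matches the one created above) that the virtual environment is the active interpreter before building:

```shell
# Both should point inside venv_robotics-ai-suite-docs when the venv is active.
which python3
python3 -c 'import sys; print(sys.prefix)'
```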

---

## 5. Build HTML Documentation

Once dependencies are installed and the virtual environment is active, generate the HTML version of the documentation:

```bash
make html
```

The output will be available in the `build/html` folder inside your `docs` directory. You can open the `index.html` file in a browser to view the documentation.

---
The documentation files are at https://github.com/open-edge-platform/edge-ai-suites/tree/main/robotics-ai-suite/docs/rvc.
10 changes: 0 additions & 10 deletions robotics-ai-suite/robot-vision-control/docs/requirements.txt

This file was deleted.

This file was deleted.

This file was deleted.
