Dev external occupancy map generation #23

Open · wants to merge 29 commits into base: main

Commits (29)
ade8ec8
add parquet export example
jaybdub Mar 17, 2025
c421cd8
ensure replay uses originally recorded velocities
jaybdub Mar 18, 2025
ea95492
add world linear/angular velocity to robot state
jaybdub Mar 18, 2025
6fbed37
add parquet export example
jaybdub Mar 18, 2025
5b4ecf8
load parquet info
jaybdub Mar 18, 2025
b50f84a
fix segmentation overwrite issue
jaybdub Mar 25, 2025
2966cd4
update changelog and readme
jaybdub Mar 31, 2025
5842398
add occupancy map to cfg to support override
jaybdub Mar 31, 2025
7b200a9
add manual occ bounds to UI
jaybdub Mar 31, 2025
35c8251
move robot after occ map build
jaybdub Mar 31, 2025
ff83733
added custom method for getting world pose, with major speedup
jaybdub Apr 2, 2025
baa0d1b
make robot use get_world_pose() method
jaybdub Apr 2, 2025
c09cdae
fix type annotations
jaybdub Apr 2, 2025
8694182
add compression for path to reduce number of points for speed up
jaybdub Apr 2, 2025
3ffd751
use get_world_pose in get/set pose 2d methods to speed up
jaybdub Apr 2, 2025
cdfeb65
reset on goal reach for path planning
jaybdub Apr 2, 2025
16b76b7
add venv to gitignore
jaybdub Apr 3, 2025
4318aee
modify build to open stage, add more efficient replay directory. TODO:…
jaybdub Apr 3, 2025
f9a9acc
fix double use of count variable
jaybdub Apr 3, 2025
95ba0e5
make parquet conversion work with directory
jaybdub Apr 4, 2025
3c0071d
use manual occupancy map path instead
jaybdub Apr 8, 2025
500d0a8
do not save ground plane in scenario
jaybdub Apr 8, 2025
febf0de
add update state to reset
jaybdub Apr 8, 2025
65f5b4f
move scenario after robot
jaybdub Apr 8, 2025
1a670a5
add docs for manual occ map gen
jaybdub Apr 8, 2025
35e1d77
add occupancy map generation instructions
jaybdub Apr 8, 2025
40a39f8
add visualization
jaybdub Apr 8, 2025
1e57a1a
fix steps in readme
jaybdub Apr 10, 2025
9c01359
update changelog
jaybdub Apr 10, 2025
3 changes: 2 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
@@ -10,4 +10,5 @@ _*/
/.vs

/app
data
data
/.venv
7 changes: 7 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,13 @@

# main

- Added instructions for manually building Occupancy Map with Isaac Sim Occupancy Map tool
- Modified extension to load pre-defined occupancy map rather than building on the fly
- Users are now required to build the Occupancy Map before data collection.
- Added prim_get_world_transform to get world and local pose to address performance bottleneck with Isaac Sim method
- Added example for parquet conversion (to support X-Mobility training)
- Added robot linear and angular velocity to state (to support X-Mobility training)
- Fixed a bug where replay rendering did not include segmentation info
- Added support for surface normals image in replay rendering
- Added support for instance ID segmentation rendering
- Added camera world pose to state
112 changes: 102 additions & 10 deletions README.md
@@ -163,30 +163,120 @@ Below details a typical workflow for collecting data with MobilityGen.
./scripts/launch_sim.sh
```

### Step 2 - Build a scenario
### Step 2 - Load a stage
> Review comment (Collaborator): "scene" seems more commonly used than "stage"?


This assumes you see the MobilityGen extension window.
To get started, we'll open an example warehouse stage.

1. Under Scene USD URL / Path copy and paste the following
1. Select ``File`` -> ``Open``

2. Enter the following URL under ``File name``

```
http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/4.2/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd
```
3. Click ``Open File``

2. Under the ``Scenario`` dropdown select ``KeyboardTeleoperationScenario`` to start
> If you see a prompt ``Opening a Read Only File`` appear you can click ``Open Original File``

After a few seconds, you should see the stage appear.

### Step 3 - Create an occupancy map

Next, we need to build an occupancy map for the environment.

To do this, we'll use the Occupancy Map tool provided with Isaac Sim.

1. Select ``Tools`` -> ``Robotics`` -> ``Occupancy Map`` to open the Occupancy Map extension

> You may need to also click the ``Occupancy Map`` tab in the bottom window pane to see the extension window.

2. In the ``Occupancy Map`` window set ``Origin`` to

- ``X``: ``2.0``
- ``Y``: ``0.0``
- ``Z``: ``0.0``

3. In the ``Occupancy Map`` window set ``Upper Bound`` to

- ``X``: ``10.0``
- ``Y``: ``20.0``
- ``Z``: ``2.0`` (We'll assume the robot can move under 2 meter overpasses.)

4. In the ``Occupancy Map`` window set ``Lower Bound`` to

- ``X``: ``-14.0``
- ``Y``: ``-18.0``
- ``Z``: ``0.1`` (We'll assume the robot can move over bumps below 10 cm.)

5. Click ``Calculate`` to generate the Occupancy Map

6. Click ``Visualize Image`` to view the Occupancy Map

7. In the ``Visualization`` window, under ``Rotate Image`` select ``180``

8. In the ``Visualization`` window, under ``Coordinate Type`` select ``ROS Occupancy Map Parameters File YAML``

9. Click ``Regenerate Image``

10. Copy the generated YAML text to your clipboard

11. In a text editor of your choice, create a new file named ``~/MobilityGenData/maps/warehouse_multiple_shelves/map.yaml``

> Note: ``~`` corresponds to your user's home directory. By default,
> we'll keep our data in ``~/MobilityGenData``

12. Paste the YAML text copied from the ``Visualization`` window into the created file.

13. Edit the line ``image: warehouse_multiple_shelves.png`` to read ``image: map.png``

14. Save the file.

15. Back in the ``Visualization`` window, click ``Save Image``

16. In the file explorer, open the folder ``~/MobilityGenData/maps/warehouse_multiple_shelves``

17. Under file name, enter ``map.png``

18. Click ``Save``

That's it! You should now have a folder ``~/MobilityGenData/maps/warehouse_multiple_shelves/`` containing two files, ``map.yaml`` and ``map.png``.

> Note: For more details on generating occupancy maps, see the documentation [here](https://docs.omniverse.nvidia.com/isaacsim/latest/features/ext_omni_isaac_occupancy_map.html).
> However, please note: to work with MobilityGen, you must use the rotation and coordinate type settings detailed above.
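The ``map.yaml`` written above follows the standard ROS map-server format: a flat mapping with an ``image`` path, a ``resolution`` in meters per pixel, and an ``origin``. As a rough, hedged sketch of what the file contains and how a downstream tool might read it back (the numeric values below are hypothetical and depend on the bounds you entered; real code should use PyYAML rather than this minimal hand parser):

```python
import ast

# Hypothetical contents of the map.yaml produced above; the actual
# numbers depend on the bounds entered in the Occupancy Map tool.
EXAMPLE_MAP_YAML = """\
image: map.png
resolution: 0.05
origin: [-14.0, -18.0, 0.0]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
"""


def parse_map_metadata(text: str) -> dict:
    """Parse the flat ``key: value`` fields of a ROS-style map YAML.

    A real implementation should use PyYAML (yaml.safe_load); this
    sketch avoids the dependency since the format here is a flat mapping.
    """
    meta = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        try:
            # Numbers and lists parse as Python literals...
            meta[key.strip()] = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            # ...while strings like the image path are kept verbatim.
            meta[key.strip()] = value
    return meta


meta = parse_map_metadata(EXAMPLE_MAP_YAML)
print(meta["image"], meta["resolution"], meta["origin"])
```

The ``image`` field is why step 13 above matters: it must name the PNG saved next to the YAML, which is why we edit it to ``map.png``.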

### Step 4 - Build a scenario

Now that we have a ROS format Occupancy Map of our environment, we're ready to use MobilityGen!

Perform the following steps in the ``MobilityGen`` extension window to build a new scenario.

1. Under ``Stage`` paste the following, corresponding to the environment USD we used in [Step 3](#step-3---create-an-occupancy-map)

```
http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/4.2/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd
```

2. Under ``Occupancy Map`` enter the following corresponding to the Occupancy Map we created in [Step 3](#step-3---create-an-occupancy-map)

```
~/MobilityGenData/maps/warehouse_multiple_shelves/map.yaml
```

3. Under the ``Robot`` dropdown select ``H1Robot``

4. Under the ``Scenario`` dropdown select ``KeyboardTeleoperationScenario`` to start

5. Click ``Build``

After a few seconds, you should see the scene and occupancy map appear.

### Step 3 - Initialize / reset the scenario
### Step 5 - Initialize / reset the scenario

1. Click the ``Reset`` function to randomly initialize the scenario. Do this until the robot spawns inside the warehouse.
1. Click the ``Reset`` function to randomly initialize the scenario. Do this until the robot spawns in a desirable location.


### Step 4 - Test drive the robot
### Step 6 - Test drive the robot

Before you start recording, try moving the robot around to get a feel for it.

@@ -197,7 +287,7 @@ To move the robot, use the following keys
- ``S`` - Move Backwards
- ``D`` - Turn right

### Step 5 - Start recording!
### Step 7 - Start recording!

Once you're comfortable, you can record a log.

@@ -209,7 +299,7 @@

The data is recorded to ``~/MobilityGenData/recordings`` by default.

### Step 6 - Render data
### Step 8 - Render data

If you've gotten this far, you've recorded a trajectory, but it doesn't include the rendered sensor data.

@@ -234,7 +324,7 @@ Rendering the sensor data is done offline. To do this call the following

That's it! Now the data with renderings should be stored in ``~/MobilityGenData/replays``.

### Step 7 - Visualize the Data
### Step 9 - Visualize the Data

We provide a few examples in the [examples](./examples) folder for working with the data.

@@ -399,6 +489,8 @@ The state_dict has the following schema
"robot.action": np.ndarray, # [2] - Linear, angular command velocity
"robot.position": np.ndarray, # [3] - XYZ
"robot.orientation": np.ndarray, # [4] - Quaternion
"robot.linear_velocity": np.ndarray, # [3] - Linear velocity in the world frame (as retrieved by robot.get_linear_velocity() in Isaac Sim)
"robot.angular_velocity": np.ndarray, # [3] - Angular velocity in the world frame (as retrieved by robot.get_angular_velocity() in Isaac Sim)
"robot.joint_positions": np.ndarray, # [J] - Joint positions
"robot.joint_velocities": np.ndarray, # [J] - Joint velocities
"robot.front_camera.left.rgb_image": np.ndarray, # [HxWx3], np.uint8 - RGB image
111 changes: 111 additions & 0 deletions examples/05_convert_to_parquet.py
@@ -0,0 +1,111 @@
import argparse
> Review comment (Collaborator): Why is there a "05" in the file title?
>
> Reply (Author): This is given because this script is currently listed as an example (number 5). The purpose of the numbering is to provide developers with a rough progression to follow. I find this pattern helpful when I see it in other projects, like NVISII.
>
> Maybe we should instead place this under "scripts", which is unordered, and meant more for direct usage, not modification.

import pandas
from reader import Reader
import numpy as np
import tqdm
import PIL.Image
import io

import os
import glob


def numpy_array_to_flattened_columns(key: str, value: np.ndarray):
columns = {
f"{key}": value.flatten()
}
# add shape if ndim > 1
if value.ndim > 1:
columns[f"{key}.shape"] = tuple(value.shape)
return columns


def numpy_array_to_jpg_columns(key: str, value: np.ndarray):
image = PIL.Image.fromarray(value)
buffer = io.BytesIO()
image.save(buffer, format="JPEG")
columns = {
key: buffer.getvalue()
}
return columns


if "MOBILITY_GEN_DATA" in os.environ:
DATA_DIR = os.environ['MOBILITY_GEN_DATA']
else:
DATA_DIR = os.path.expanduser("~/MobilityGenData")

if __name__ == "__main__":

parser = argparse.ArgumentParser()
parser.add_argument("--input_dir", type=str, default=None)
parser.add_argument("--output_dir", type=str, default=None)


args = parser.parse_args()

if args.input_dir is None:
args.input_dir = os.path.join(DATA_DIR, "replays")

if args.output_dir is None:
args.output_dir = os.path.join(DATA_DIR, "parquet")

if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)

input_recordings = glob.glob(os.path.join(args.input_dir, "*"))

processed_count = 0

for input_recording_path in input_recordings:
processed_count += 1
print(f"Processing {processed_count} / {len(input_recordings)}")

recording_name = os.path.basename(input_recording_path)
output_path = os.path.join(args.output_dir, recording_name + ".pqt")

reader = Reader(recording_path=input_recording_path)

index = 0


output: pandas.DataFrame = None

for index in tqdm.tqdm(range(len(reader))):

data_dict = {}

# Common data (saved as raw arrays)
state_common = reader.read_state_dict_common(index=index)
state_common.update(reader.read_state_dict_depth(index=index))
state_common.update(reader.read_state_dict_segmentation(index=index))
# state_common.update(reader.read_state_dict_depth(index=index))
# TODO: handle normals

for k, v in state_common.items():
if isinstance(v, np.ndarray):
columns = numpy_array_to_flattened_columns(k, v)
else:
columns = {k: v}
data_dict.update(columns)

# RGB data (saved as jpg)
state_rgb = reader.read_state_dict_rgb(index=index)
for k, v in state_rgb.items():
if isinstance(v, np.ndarray):
columns = numpy_array_to_jpg_columns(k, v)
else:
columns = {k: v}
data_dict.update(columns)


# use first frame to initialize
if output is None:
output = pandas.DataFrame(columns=data_dict.keys())

output.loc[index] = data_dict


output.to_parquet(output_path, engine="pyarrow")
19 changes: 19 additions & 0 deletions examples/06_load_parquet.py
@@ -0,0 +1,19 @@
import argparse
import pandas
import matplotlib.pyplot as plt
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument("parquet_path")
args = parser.parse_args()

data = pandas.read_parquet(args.parquet_path, engine="pyarrow")


print(data.columns)
vel = np.stack(data['robot.linear_velocity'].to_numpy())


plt.plot(vel[:, 0], 'r-', label='vx')
plt.plot(vel[:, 1], 'b-', label='vy')
plt.legend()
plt.show()
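The conversion script above stores non-image arrays flattened, with a companion ``*.shape`` column recording the original dimensions, so a downstream loader has to restore the shape itself. A minimal sketch of that round trip, using a synthetic single-row DataFrame as a stand-in for a real recording (the column names follow the schema above; the array contents are made up):

```python
import numpy as np
import pandas

# Synthetic stand-ins for columns produced by 05_convert_to_parquet.py:
# a flattened depth image plus its companion ".shape" column.
depth = np.arange(12, dtype=np.float32).reshape(3, 4)
row = {
    "robot.front_camera.left.depth_image": depth.flatten(),
    "robot.front_camera.left.depth_image.shape": (3, 4),
}
frame = pandas.DataFrame([row])


def unflatten(frame: pandas.DataFrame, key: str, index: int = 0) -> np.ndarray:
    """Restore a flattened array column to its original shape."""
    flat = np.asarray(frame.loc[index, key])
    shape = tuple(frame.loc[index, key + ".shape"])
    return flat.reshape(shape)


restored = unflatten(frame, "robot.front_camera.left.depth_image")
print(restored.shape)  # -> (3, 4)
```

The same helper applies to any flattened field (segmentation images, ``target_path``) as long as its ``.shape`` companion column is present.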
28 changes: 28 additions & 0 deletions examples/PARQUET_DATA_FORMAT.md
@@ -0,0 +1,28 @@
# Parquet Data Format

Below is a description of the fields in a typical MobilityGen recording, common to all scenarios.

| Field | Type | Shape | Description |
|-------|------|-------|-------------|
| robot.action | array | 2 | The command linear, angular velocity in the robot frame. |
| robot.position | array | 3 | The xyz position of the robot in the world frame. |
| robot.orientation | array | 4 | The quaternion of the robot in the world frame. |
| robot.joint_positions| array | N | The joint positions of the robot. |
| robot.joint_velocities | array | N | The joint velocities of the robot. |
| robot.linear_velocity | array | 3 | The linear velocity of the robot in the world frame. (Retrieved by robot.get_linear_velocity() in Isaac Sim) |
| robot.angular_velocity | array | 3 | The angular velocity of the robot in the world frame. (Retrieved by robot.get_angular_velocity() in Isaac Sim) |
| robot.front_camera.left.segmentation_info | dict | | The segmentation info dictionary as retrieved by the Isaac Sim replicator annotator. |
| robot.front_camera.left.segmentation_image | array | (HxW) flattened | The segmentation image as retrieved by the Isaac Sim replicator annotator. Flattened |
| robot.front_camera.left.segmentation_image.shape | tuple | 2 | The segmentation image shape. |
| robot.front_camera.left.rgb_image | bytes | | The RGB camera image compressed to JPG. |
| robot.front_camera.left.depth_image | array | (HxW) | The depth image (in meters) flattened into an array. |
| robot.front_camera.left.depth_image.shape | tuple | 2 | The shape of the depth image. |

> Note: there are additional fields with similar semantics for the other cameras, which we have excluded for brevity.

Below are fields specific to the path following scenario

| Field | Type | Shape | Description |
|-------|------|-------|-------------|
| target_path | array | (Nx2) flattened | The target path generated by the path planner in world coordinates. This is updated whenever the path planner is called, which occurs when the robot reaches a goal (or at the beginning of a new recording) |
| target_path.shape | tuple | 2 | The shape of the target path array. |
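Since the table lists ``rgb_image`` as JPEG-compressed bytes, reading it back requires a decode step rather than a reshape. A hedged sketch of the round trip (assuming Pillow is available, as in the conversion script; the 4x6 image below is a synthetic stand-in for a real camera frame):

```python
import io

import numpy as np
import PIL.Image

# Synthetic stand-in for the "robot.front_camera.left.rgb_image"
# column: encode a small image to JPEG bytes the way
# numpy_array_to_jpg_columns does, then decode it back.
rgb = np.zeros((4, 6, 3), dtype=np.uint8)
buffer = io.BytesIO()
PIL.Image.fromarray(rgb).save(buffer, format="JPEG")
jpg_bytes = buffer.getvalue()  # what the parquet column stores

# Decoding: wrap the bytes in a file-like object and reopen with PIL.
decoded = np.asarray(PIL.Image.open(io.BytesIO(jpg_bytes)))
print(decoded.shape)  # -> (4, 6, 3)
```

Note that JPEG is lossy, so ``decoded`` is only approximately equal to the original pixels; fields that must survive exactly (depth, segmentation) are the ones stored as flattened raw arrays instead.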