Transform3D, Points3D, GeoPoints, Pinhole, EncodedImage, Scalars
This example demonstrates how to visualize and work with MCAP files in Rerun. From mcap.dev:
MCAP (pronounced "em-cap") is an open source container file format for multimodal log data. It supports multiple channels of timestamped pre-serialized data, and is ideal for use in pub/sub or robotics applications.
MCAP is the default bag format in ROS 2 and is rapidly gaining adoption. You can read more about Rerun's MCAP support here.
In this guide, you will learn:
- How to load MCAP files directly into the Rerun viewer.
- How to convert MCAP files into native Rerun data files (RRD).
- How to convert older ROS bags (ROS 1 and ROS 2 SQLite3) into MCAP.
- How to read and deserialize MCAP/RRD data in Python for programmatic processing and advanced visualization in Rerun.
We will use a dataset from the JKK Research Center containing LiDAR, images, GPS, and IMU data. The workflow involves converting the original ROS 1 bag → MCAP → RRD, and then using Python to log the RRD data with specific Rerun components for optimal visualization.
Below you will find a collection of useful Rerun resources for this example:
- Blog post introducing MCAP support
- MCAP
- Logging API
- Dataframe
- Blueprints
The table below shows how some ROS message types map to Rerun archetypes. In an RRD file, the fields of a Rerun archetype may be spread across multiple columns: a `Points3D:positions` column, for example, corresponds to the `positions` field of the `Points3D` archetype. A short sketch after the table shows how to select such a column.
| ROS message | Rerun archetype | Rerun fields |
|---|---|---|
| geometry_msgs/msg/PoseStamped | InstancePoses3D | translations, quaternions |
| geometry_msgs/msg/TransformStamped | Transform3D | child_frame, parent_frame, quaternion, translation |
| frame_id | CoordinateFrame | frame |
| sensor_msgs/msg/CameraInfo | Pinhole | child_frame, image_from_camera, parent_frame, resolution |
| sensor_msgs/msg/CompressedImage | EncodedImage | blob |
| sensor_msgs/msg/Imu | Scalars | scalars |
| sensor_msgs/msg/NavSatFix | GeoPoints | positions |
| sensor_msgs/msg/PointCloud2 | Points3D | ambient, intensity, positions, range, reflectivity, ring, t |
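For instance, here is a minimal sketch (using the same dataframe API as the later sections of this guide, with a hypothetical RRD path and entity) of selecting such a column:

```python
import pyarrow as pa
import rerun as rr

# Hypothetical file and entity, for illustration only.
path_to_rrd = 'recording.rrd'
entity = '/lidar/points'

with rr.server.Server(datasets={'dataset': [path_to_rrd]}) as server:
    dataset = server.client().get_dataset('dataset')
    # Column names follow the `<entity>:<Archetype>:<field>` pattern from the table above.
    positions_col = f'{entity}:Points3D:positions'
    df = dataset.filter_contents([entity]).reader(index='ros2_timestamp')
    positions = pa.table(df.select(positions_col))[positions_col].to_pylist()
```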
Rerun supports visualizing MCAP files in several ways:
You can drag-and-drop MCAP files directly into the Rerun viewer or use the File > Open menu option.
Load the file directly when starting the Rerun viewer:
```bash
rerun recording.mcap
```

You can also load an MCAP file from code. For example, in Python you initialize Rerun and load the file by path:

```python
import rerun as rr

rr.init("rerun_example_mcap/load_mcap", spawn=True)
rr.log_file_from_path("recording.mcap")
```

If you have older ROS 1 bags or ROS 2 SQLite3 (`.db3`) bags, you can convert them to MCAP using available libraries.
Convert ROS 2 SQLite3 (`.db3`) bags using the Rosbag2 `ros2 bag convert` command:

```bash
ros2 bag convert -i ros2_bag -o out.yaml
```

where `out.yaml` may contain something like:

```yaml
output_bags:
- uri: ros2_mcap_bag
  storage_id: mcap
  all_topics: true
  all_services: true
```

Rosbags is a pure Python solution for converting, reading, and writing ROS 1 and ROS 2 bags without requiring a full ROS installation. To install it:
```bash
pip install rosbags
```

Using the CLI, you can convert a ROS 1 or ROS 2 SQLite3 (`.db3`) bag to MCAP by running:

```bash
# For a ROS 1 bag
rosbags-convert --dst-storage mcap --src input.bag --dst output

# For a ROS 2 SQLite3 bag
rosbags-convert --dst-storage mcap --src input --dst output
```

You can run:

```bash
rosbags-convert --help
```

to see all the options.
The MCAP command line tool also does not require a ROS installation:
```bash
# For a ROS 1 bag
mcap convert input.bag output.mcap

# For a ROS 2 SQLite3 bag
mcap convert input.db3 output.mcap
```

Converting an MCAP file to RRD enables more Rerun capabilities and provides faster loading. Use the `rerun mcap convert` command:
```bash
rerun mcap convert input.mcap -o output.rrd
```

You can also convert specific layers of the MCAP file:

```bash
# Use only specific layers
rerun mcap convert input.mcap -l stats -o output.rrd

# Use multiple layers for different perspectives
rerun mcap convert input.mcap -l ros2msg -l raw -l recording_info -o output.rrd
```

Read more about MCAP layers here.

You can see the options for the `rerun mcap convert` command by running:

```bash
rerun mcap convert --help
```

For advanced processing and visualization with Rerun, we recommend converting the MCAP file to an RRD file first.
During the conversion from MCAP to RRD, Rerun automatically interprets common ROS messages and converts them into native Rerun types. This allows you to use Rerun's data-loaders to easily retrieve structures like Points3D and EncodedImage instead of processing raw binary blobs.
Tip: You can use the `rerun rrd stats [PATH_TO_INPUT_RRDS]` command to see what entities and components are in an RRD file.
For details on accessing data from the original bag format, you can refer to the documentation for Rosbag2, Rosbags, or the MCAP library itself.
This example demonstrates the full ROS 1 bag → MCAP → RRD → Rerun visualization workflow.
First you will need to clone this repo:
```bash
git clone https://github.com/rerun-io/mcap_example.git
cd mcap_example
```

We assume you are running all commands from within the mcap_example repository folder and using the Pixi package manager for environment setup.
Download the ROS 1 bag file (leaf-2022-03-18-gyor.bag) from the JKK Research Center:
```bash
wget https://laesze-my.sharepoint.com/:u:/g/personal/herno_o365_sze_hu/EVlk6YgDtj9BrzIE8djt-rwBZ47q9NwcbgxU_zOuBji9IA?download=1 -O leaf-2022-03-18-gyor.bag
```

This dataset includes camera images, LiDAR point clouds, poses, IMU, and GPS data.
Use the Rosbags `rosbags-convert` tool within the Pixi environment to convert the ROS 1 bag to MCAP:

```bash
pixi run rosbags-convert --dst-storage mcap --src leaf-2022-03-18-gyor.bag --dst leaf-2022-03-18-gyor
```

NOTE: `pixi run` in the above command means that we are running `rosbags-convert` inside the Pixi environment where the Rosbags library is installed.
This creates a folder leaf-2022-03-18-gyor containing the leaf-2022-03-18-gyor.mcap file. You can now visualize it:
```bash
pixi run rerun leaf-2022-03-18-gyor/leaf-2022-03-18-gyor.mcap
```

or drag the file into Rerun.
Next, convert the MCAP file into a native Rerun file (RRD):
```bash
pixi run rerun mcap convert leaf-2022-03-18-gyor/leaf-2022-03-18-gyor.mcap -o leaf-2022-03-18-gyor.rrd
```

Try opening the RRD file directly:

```bash
pixi run rerun leaf-2022-03-18-gyor.rrd
```

or drag the file into Rerun.
While Rerun can present data directly, programmatic access is necessary for complex processing or to apply a custom visualization blueprint. We will access the converted RRD file using Rerun's dataframe and data-loading capabilities.
Key Concept: Rerun converts common ROS messages (like `sensor_msgs/msg/NavSatFix` or `sensor_msgs/msg/PointCloud2`) into native Rerun types (`GeoPoints` or `Points3D`) during the MCAP → RRD conversion. This is why we read the RRD file in the following steps.
The main logic resides in the log_dataset function (in mcap_example/__main__.py), which loads the RRD archive and processes different sensor messages:
```python
with rr.server.Server(datasets={'dataset': [path_to_rrd]}) as server:
    dataset = server.client().get_dataset('dataset')
    log_gps(dataset)
    log_imu(dataset)
    # … and so on for all entities
```

The GPS coordinates are logged as GeoPoints. In the original bag file, the GPS data was stored as the ROS message `sensor_msgs/msg/NavSatFix`. Rerun has native support for this ROS message, meaning that when the bag file was converted to an RRD file, the messages were converted to Rerun's `GeoPoints` type. Also, since this message contains a `Header`, the `ros2_timestamp` timeline is populated with data from this header. This makes it easy to log the points:
```python
entity = '/gps/duro/fix'
positions_col = f'{entity}:GeoPoints:positions'
timeline = 'ros2_timestamp'
df = dataset.filter_contents([entity]).reader(index=timeline)
timestamps = rr.TimeColumn('time', timestamp=pa.table(
    df.select(timeline))[timeline].to_numpy())
positions = pa.table(df.select(col(positions_col)[0]))[0].to_pylist()
rr.send_columns(
    'world/ego_fix_rot/ego_vehicle/gps',
    indexes=[timestamps],
    columns=rr.GeoPoints.columns(
        positions=positions,
        radii=[rr.Radius.ui_points(10.0)] * len(positions),
    ),
)
```

First we specify the entity we want, in this case the ROS topic `/gps/duro/fix`. We only want the `GeoPoints:positions` component, so we specify that (i.e., `f'{entity}:GeoPoints:positions'`). You could also read the raw MCAP components, but that is more involved.
Next, we specify the `ros2_timestamp` timeline. This timeline was populated, during the MCAP → RRD conversion, from the ROS timestamp in the `Header` of the `sensor_msgs/msg/NavSatFix` messages.

Since everything has already been converted to the format we want, we can simply log the data using the `send_columns` function.
The IMU data is logged as Scalars. As with the GPS data, Rerun has native support for the IMU ROS message type, `sensor_msgs/msg/Imu`, which makes this data easy to log as well:
```python
entity = '/gps/duro/imu'
scalars_col = f'{entity}:Scalars:scalars'
timeline = 'ros2_timestamp'
df = dataset.filter_contents([entity]).reader(index=timeline)
timestamps = rr.TimeColumn('time', timestamp=pa.table(
    df.select(timeline))[timeline].to_numpy())
data = np.vstack(pa.table(df.select(scalars_col))[0].to_numpy())
angular_velocity = data[:, :3]
linear_acceleration = data[:, 3:6]
rr.send_columns(
    'world/ego_fix_rot/ego_vehicle/imu/angular_velocity',
    indexes=[timestamps],
    columns=rr.Scalars.columns(
        scalars=angular_velocity,
    ),
)
rr.send_columns(
    'world/ego_fix_rot/ego_vehicle/imu/linear_acceleration',
    indexes=[timestamps],
    columns=rr.Scalars.columns(
        scalars=linear_acceleration,
    ),
)
```

The code is pretty much the same as for the GPS data.
The speed data is also logged as Scalars. However, this time it is not as straightforward, because the speed is recorded as a `std_msgs/msg/Float32`, which is not converted to a Rerun type. Furthermore, the ROS message does not contain a `Header`, so the `ros2_timestamp` timeline is not populated either. The message is simple, though, in that it only contains a single `Float32` data field. To log the speed data we do:
```python
entity = '/vehicle_speed_kmph'
float32_msg_col = f'{entity}:std_msgs.msg.Float32:message'
timeline = 'message_publish_time'
df = dataset.filter_contents([entity]).reader(index=timeline)
timestamps = rr.TimeColumn('time', timestamp=pa.table(
    df.select(timeline))[timeline].to_numpy())
data = pa.table(df.select(col(float32_msg_col)[0]))[0].to_pylist()
speeds = [msg['data'] for msg in data]
rr.send_columns(
    'world/ego_fix_rot/ego_vehicle/speed',
    indexes=[timestamps],
    columns=rr.Scalars.columns(
        scalars=speeds,
    ),
)
```

As you can see, we read the raw ROS `std_msgs/msg/Float32` message component and select the `message_publish_time` timeline instead of `ros2_timestamp`. (You might instead want to use the `message_log_time` timeline.) We also need to extract the `Float32` value from each message, which we do with the list comprehension `[msg['data'] for msg in data]`.
The pose data is logged as Transform3D. It comes in the form of `geometry_msgs/msg/PoseStamped` messages, a message type that Rerun converts to `InstancePoses3D`. Below you can see how to extract the data from this message type:
```python
entity = '/current_pose'
quaternions_col = f'{entity}:InstancePoses3D:quaternions'
translations_col = f'{entity}:InstancePoses3D:translations'
timeline = 'ros2_timestamp'
df = dataset.filter_contents([entity]).reader(index=timeline)
timestamps = rr.TimeColumn('time', timestamp=pa.table(
    df.select(timeline))[timeline].to_numpy())
table = pa.table(df.select(col(quaternions_col)[0], col(translations_col)[0]))
quaternions = table[0].to_pylist()
translations = table[1].to_pylist()
rr.send_columns(
    'world/ego_fix_rot',
    indexes=[timestamps],
    columns=rr.Transform3D.columns(
        translation=translations,
    ),
)
rr.send_columns(
    'world/ego_fix_rot/ego_vehicle',
    indexes=[timestamps],
    columns=rr.Transform3D.columns(
        quaternion=quaternions,
    ),
)
```

We separate the translation from the rotation to enable a fixed-orientation top-down view in the Rerun viewer, as you will see later. We also subtract the initial position from all positions, so that the run starts at the origin.
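The snippet above omits the origin shift; a minimal sketch of that step, applied to the `translations` list before sending, could look like this:

```python
import numpy as np

# Shift all translations so the first pose lies at the origin.
translations = np.asarray(translations)
translations = (translations - translations[0]).tolist()
```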
Pinhole cameras and sensor poses are initialized to offer a 3D view and a camera perspective. This is achieved using the Pinhole and Transform3D archetypes. The camera intrinsics are stored as a `sensor_msgs/msg/CameraInfo` message in ROS, which Rerun automatically converts into the `Pinhole` type, so we can query for that. As the camera intrinsics do not change, we only read the first message and log the information statically.
```python
rr.log(
    'world/ego_fix_rot/ego_vehicle/camera',
    rr.Transform3D(
        translation=[0, 0, 1],
        relation=rr.TransformRelation.ParentFromChild,
    ),
    static=True,
)
entity = '/zed_node/left/camera_info'
image_from_camera_col = f'{entity}:Pinhole:image_from_camera'
resolution_col = f'{entity}:Pinhole:resolution'
timeline = 'ros2_timestamp'
df = dataset.filter_contents([entity]).reader(index=timeline)
table = pa.table(df.select(col(image_from_camera_col)[0], col(resolution_col)[0]))
image_from_camera = table[0][0].as_py()
resolution = table[1][0].as_py()
rr.log(
    'world/ego_fix_rot/ego_vehicle/camera/image',
    rr.Pinhole(
        image_from_camera=image_from_camera,
        resolution=resolution,
        camera_xyz=rr.components.ViewCoordinates.FLU,
        image_plane_distance=1.5,
    ),
    static=True,
)
```

For simplicity, we place the camera 1 meter above the center of the ego vehicle. As an exercise, you can try reading the data from the ROS `/tf` topic to apply the correct transformations.
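As a starting point for that exercise, and assuming the `/tf` messages were converted to the `Transform3D` fields listed in the table above (the actual entity paths and column names depend on the conversion, so verify them with the `rrd_info` script described later), a sketch could look like this:

```python
# Hypothetical entity and columns; check the real ones with `pixi run rrd_info`.
entity = '/tf'
translation_col = f'{entity}:Transform3D:translation'
quaternion_col = f'{entity}:Transform3D:quaternion'
df = dataset.filter_contents([entity]).reader(index='ros2_timestamp')
table = pa.table(df.select(translation_col, quaternion_col))
translations = table[0].to_pylist()
quaternions = table[1].to_pylist()
# From here, log the transform for the frame you care about,
# following the same pattern as in the pose section above.
```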
Camera data is logged as encoded images using EncodedImage. Rerun supports both `sensor_msgs/msg/Image` and `sensor_msgs/msg/CompressedImage` (the one encountered in this dataset), making it easy to read images from RRD files into Python.
```python
entity = '/zed_node/left/image_rect_color/compressed'
blob_col = f'{entity}:EncodedImage:blob'
timeline = 'ros2_timestamp'
df = dataset.filter_contents([entity]).reader(index=timeline)
timestamps = rr.TimeColumn('time', timestamp=pa.table(
    df.select(timeline))[timeline].to_numpy())
images = pa.table(df.select(blob_col))[blob_col].to_numpy()
images = np.concatenate(images).tolist()
rr.send_columns(
    'world/ego_fix_rot/ego_vehicle/camera/image',
    indexes=[timestamps],
    columns=rr.EncodedImage.columns(
        blob=images,
    ),
)
```

If you want to process the images, for example for segmentation or classification, you can easily convert them into PIL or OpenCV images and go from there.
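For instance, here is a minimal sketch (assuming OpenCV is installed; it is not used elsewhere in this example) that decodes one of those blobs into an OpenCV image, mirroring how the PIL example at the end of this guide reads a blob:

```python
import cv2
import numpy as np

# Take the first encoded blob (JPEG/PNG bytes) and decode it with OpenCV.
blob = pa.table(df.select(blob_col))[blob_col][0]
buf = np.array(blob, np.uint8)
bgr = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # HxWx3 BGR array, or None if decoding fails
```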
The `sensor_msgs/msg/PointCloud2` message is also supported and converted to the `Points3D` archetype. Here, we iterate through batches and calculate the minimum and maximum height to color-code the points by their Z-coordinate:
```python
entity = '/left_os1/os1_cloud_node/points'
positions_col = f'{entity}:Points3D:positions'
timeline = 'ros2_timestamp'
df = dataset.filter_contents([entity]).reader(index=timeline)
for stream in df.select(timeline, positions_col).repartition(10).execute_stream_partitioned():
    for batch in stream:
        # Convert the batch to a pyarrow table (named to avoid shadowing the `pa` import).
        table = batch.to_pyarrow()
        for i in range(table.num_rows):
            positions = np.array(table[positions_col][i].as_py())
            min_z = np.min(positions[:, 2])
            max_z = np.max(positions[:, 2])
            colors = (positions[:, 2] - min_z) / (max_z - min_z)
            colors = mpl.colormaps['turbo'](colors)[:, :3]
            rr.set_time('time', timestamp=table[timeline][i])
            rr.log(
                'world/ego_fix_rot/ego_vehicle/points',
                rr.Points3D(
                    positions=positions,
                    colors=colors,
                    radii=rr.Radius.ui_points(2.0),
                ),
            )
```

The default blueprint is configured to provide an optimal view of the data:
- 3D View: Set with `origin='/world/ego_fix_rot/ego_vehicle'` to follow the vehicle from a third-person perspective.
- Top Down 3D View: Set with `origin='/world/ego_fix_rot'` (the entity path with only the translation) to provide a fixed, map-aligned bird's-eye view, regardless of the vehicle's yaw.
- Time Series Views: For plotting IMU and speed.
- Map View: For visualizing GPS.
```python
rrb.Blueprint(
    rrb.Horizontal(
        rrb.Vertical(
            rrb.Spatial3DView(
                name='3D View',
                origin='/world/ego_fix_rot/ego_vehicle',
                contents=["+ /**"],
                defaults=[
                    rr.Pinhole.from_fields(
                        image_plane_distance=1.0,
                        color=[0, 0, 0, 0],
                        line_width=0.0,
                    ),
                ],
                eye_controls=rrb.EyeControls3D(
                    position=(-5, 0, 2.5),
                    look_target=(0.0, 0.0, 2),
                    eye_up=(0.0, 0.0, 1.0),
                    spin_speed=0.0,
                    kind=rrb.Eye3DKind.FirstPerson,
                    speed=20.0,
                ),
            ),
            rrb.Spatial3DView(
                name='Top Down 3D View',
                origin='/world/ego_fix_rot',
                contents=["+ /**"],
                defaults=[
                    rr.Pinhole.from_fields(
                        image_plane_distance=1.0,
                        color=[0, 0, 0, 0],
                        line_width=0.0,
                    ),
                ],
                eye_controls=rrb.EyeControls3D(
                    position=(0, 0, 60),
                    look_target=(-0.18, 0.93, -0.07),
                    eye_up=(0.0, 0.0, 1.0),
                    spin_speed=0.0,
                    kind=rrb.Eye3DKind.FirstPerson,
                    speed=20.0,
                ),
            ),
        ),
        rrb.Vertical(
            rrb.TextDocumentView(
                name='Description',
                contents='description',
            ),
            rrb.Horizontal(
                rrb.TimeSeriesView(
                    name='Angular Velocity View',
                    contents='world/ego_fix_rot/ego_vehicle/imu/angular_velocity',
                    axis_x=rrb.archetypes.TimeAxis(
                        view_range=rr.TimeRange(
                            start=rr.TimeRangeBoundary.cursor_relative(seconds=-3),
                            end=rr.TimeRangeBoundary.cursor_relative(seconds=3),
                        )
                    ),
                ),
                rrb.TimeSeriesView(
                    name='Linear Acceleration View',
                    contents='world/ego_fix_rot/ego_vehicle/imu/linear_acceleration',
                    axis_x=rrb.archetypes.TimeAxis(
                        view_range=rr.TimeRange(
                            start=rr.TimeRangeBoundary.cursor_relative(seconds=-3),
                            end=rr.TimeRangeBoundary.cursor_relative(seconds=3),
                        )
                    ),
                ),
            ),
            rrb.Horizontal(
                rrb.MapView(
                    name='Map View',
                    zoom=18,
                ),
                rrb.TimeSeriesView(
                    name='Speed View',
                    contents='world/ego_fix_rot/ego_vehicle/speed',
                ),
            ),
            row_shares=[0.2, 0.4, 0.4],
        ),
    ),
    rrb.TimePanel(
        state=rrb.components.PanelState.Collapsed,
        play_state=rrb.components.PlayState.Playing,
        loop_mode=rrb.components.LoopMode.All,
    ),
    collapse_panels=True,
)
```

To run this example, make sure you have the Pixi package manager installed:
```bash
pixi run example
```

You can type:

```bash
pixi run example -h
```

to see all available commands. For example, if you placed the RRD file in a different location, you will want to provide the `--root-dir` option, and if you renamed the file, you will want to provide the `--dataset-file` option.
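For instance (with a hypothetical directory and file name):

```bash
pixi run example --root-dir /path/to/data --dataset-file my-recording.rrd
```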
Here are some tips for working with an RRD file. The video goes over these in more detail.
You can see the entities and components you are interested in by using the stats tool:

```bash
rerun rrd stats [PATH_TO_INPUT_RRDS]
```

If you are unsure which components are available for an entity, this repository provides a script, `mcap_example/rrd_info.py`. To run it:

```bash
pixi run rrd_info [PATH_TO_INPUT_RRDS ...]
```

So for the RRD file we created in this example, you would run:

```bash
pixi run rrd_info leaf-2022-03-18-gyor.rrd
```

and you will see output in the form of:
```
/ENTITY_1:
├─ COMPONENT_1: [FIELDS...]
...
└─ COMPONENT_N: [FIELDS...]
...
/ENTITY_M:
├─ COMPONENT_1: [FIELDS...]
...
└─ COMPONENT_L: [FIELDS...]
```
Here is an example of how you can open the Rerun `EncodedImage:blob` component as a PIL image. This is useful if you want to process the images in some fashion, for example for segmentation or classification.
```python
import rerun as rr
from datafusion import col
import pyarrow as pa
from PIL import Image
import numpy as np
import io

path_to_rrd = 'leaf-2022-03-18-gyor.rrd'

with rr.server.Server(datasets={'dataset': [path_to_rrd]}) as server:
    dataset = server.client().get_dataset('dataset')
    entity = '/zed_node/left/image_rect_color/compressed'
    blob_col = f'{entity}:EncodedImage:blob'
    timeline = 'ros2_timestamp'
    df = dataset.filter_contents([entity]).reader(index=timeline)
    # Read the first encoded image blob and decode it with PIL.
    image = pa.table(df.select(blob_col))[blob_col][0]
    image_data = np.array(image, np.uint8)
    image = Image.open(io.BytesIO(image_data))
    image.show()
```

The core of this example is demonstrating how Rerun can efficiently handle complex robotics data logs. Rerun can interact with MCAP files carrying various payloads through its layered data-loader design. For selected types, e.g., some ROS 2 messages, Rerun already performs an automatic semantic conversion to archetypes that can be visualized. For message types that are not directly supported, or when more in-depth customization is required, the conversion to RRD allows you to select layers and transform the data with custom code.
The log_dataset function showcases Rerun's flexibility by mixing automatically converted data with manually processed data.
For supported messages like sensor_msgs/msg/NavSatFix (GPS) and sensor_msgs/msg/PointCloud2 (LiDAR), you query for the Rerun archetype components directly. This simplifies the Python code significantly, turning a complex deserialization and conversion task into a simple dataframe selection.
For messages not automatically converted (like std_msgs/msg/Float32), you query the raw ROS message payload and apply custom logic:
- Speed (Float32): Since there is no `Header`, the `ros2_timestamp` timeline is unavailable. We must fall back to the `message_publish_time` (or `message_log_time`) timeline and manually unwrap the float value from the message structure (`msg['data']`).
- LiDAR Coloring: Even though the point cloud is automatically converted to `Points3D`, the example adds custom logic to calculate the minimum and maximum Z-height across each point cloud and uses a color map (like `turbo`) to color the points based on their elevation. This improves the clarity of the 3D visualization.
```python
# Example: coloring based on Z height
import matplotlib as mpl
import numpy as np

# `points` is an (N, 3) array of XYZ positions.
min_z = np.min(points[:, 2])
max_z = np.max(points[:, 2])
colors = (points[:, 2] - min_z) / (max_z - min_z)  # normalize Z to [0, 1]
colors = mpl.colormaps['turbo'](colors)[:, :3]     # map to RGB via the turbo colormap
```