This repository contains the implementation of a LiDAR-Visual-Inertial SLAM system designed to run on the NVIDIA Jetson Orin NX. The system fuses LiDAR, visual, and inertial measurements in a tightly coupled optimization framework for real-time odometry and mapping. The implementation builds on existing open-source code; please refer to the acknowledgements section.
The project is implemented in ROS 2 Humble, uses CUDA acceleration for image processing, and ships with a Docker image to streamline development.
- Tightly Coupled Sensor Fusion: Leveraging the strengths of LiDAR, visual, and inertial sensors to enhance state estimation.
- Enhanced Feature Tracking: Combining visual and LiDAR data to reliably track features even in low-texture environments.
- Efficient Initialization: Rapid initialization achieved by integrating visual Structure-from-Motion (SfM) with IMU preintegration.
- Global Consistency: Employing pose graph optimization to manage drift through frequent loop closure detection.
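To make the pose-graph idea concrete, here is a toy 1-D example (illustrative only, not code from this system): odometry edges accumulate drift, and a single loop-closure edge lets a least-squares solve redistribute the error across the whole trajectory. All measurement values below are made up.

```python
import numpy as np

# Toy 1-D pose graph: four poses, odometry edges of +1.0 between
# consecutive poses, plus one loop-closure edge measuring x3 - x0.
odometry = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
loop_closures = [(0, 3, 2.7)]  # loop closure disagrees with summed odometry (3.0)

n = 4
rows, rhs = [], []
for i, j, meas in odometry + loop_closures:
    r = np.zeros(n)
    r[j], r[i] = 1.0, -1.0     # residual: (x_j - x_i) - meas
    rows.append(r)
    rhs.append(meas)

# Anchor the first pose at 0 to remove the gauge freedom
r = np.zeros(n)
r[0] = 1.0
rows.append(r)
rhs.append(0.0)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
# The 0.3 of drift is spread evenly over the three odometry edges,
# so the optimized trajectory ends near 2.775 instead of 3.0.
```

In the real system the poses are 6-DoF and the solve is nonlinear, but the principle is the same: loop-closure constraints pull the whole trajectory toward global consistency rather than correcting only the latest pose.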
Refer to the Quick-Start Guide for step-by-step instructions on building and running the system using Docker.
The system consists of two parallel subsystems, a Visual-Inertial Subsystem (VIS) and a LiDAR-Inertial Subsystem (LIS), that exchange information to enhance state estimation.

Visual-Inertial Subsystem (VIS):
- Extracts and tracks image features from the camera.
- Performs IMU preintegration to refine motion estimation.
- Uses LiDAR-derived depth information to improve accuracy.
- Maintains a sliding window optimization for local consistency.
- Contributes to loop closure detection using visual feature matching.
LiDAR-Inertial Subsystem (LIS):
- Performs IMU preintegration for an initial motion estimate.
- Uses IMU preintegration to deskew LiDAR scans.
- Performs scan matching against a global map to compute odometry.
- Maintains a factor graph optimization to refine trajectory estimates.
- Detects loop closures based on LiDAR feature matching.
- Uses VIS odometry estimates to improve initial pose estimation.
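The deskewing step above can be sketched as follows. This is a simplified illustration assuming a constant angular velocity over the sweep (the real system uses full IMU preintegration, and the exact sign convention depends on the frame definitions); the function names are hypothetical.

```python
import numpy as np

def rodrigues(axis_angle):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def deskew(points, t_points, omega, t_end):
    """Rotate each LiDAR point into the scan-end frame, assuming a
    constant angular velocity `omega` (rad/s, from the IMU) over the sweep."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, t_points)):
        # Rotation accumulated between this point's capture time and scan end
        R = rodrigues(omega * (t_end - t))
        out[i] = R @ p
    return out

# A point captured at the start of a 1 s sweep while spinning at
# pi/2 rad/s about z is compensated by a 90-degree rotation.
pts = np.array([[1.0, 0.0, 0.0]])
corrected = deskew(pts, np.array([0.0]),
                   omega=np.array([0.0, 0.0, np.pi / 2]), t_end=1.0)
```

Because every point of a spinning LiDAR is captured at a slightly different time, compensating the motion within one sweep is what makes subsequent scan matching against the map well-posed.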
The two subsystems operate independently but share key data:
- VIS → LIS:
  - Sends visual odometry estimates to LIS as an initial guess for scan matching.
  - Contributes to loop closure detection, helping LIS correct drift.
- LIS → VIS:
  - Provides LiDAR depth information to VIS, improving feature triangulation.
  - Sends pose corrections when loop closures refine the global trajectory.
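The LiDAR-to-camera depth sharing can be illustrated with a minimal pinhole-projection sketch. These are hypothetical helper functions, not the repository's API; the real pipeline also handles extrinsic calibration, occlusion, and depth interpolation.

```python
import numpy as np

def project_points(points_cam, fx, fy, cx, cy):
    """Pinhole projection of LiDAR points already expressed in the camera frame."""
    z = points_cam[:, 2]
    valid = z > 0.1                       # keep only points in front of the camera
    u = fx * points_cam[valid, 0] / z[valid] + cx
    v = fy * points_cam[valid, 1] / z[valid] + cy
    return np.stack([u, v], axis=1), z[valid]

def feature_depth(feature_uv, proj_uv, proj_z, max_px=5.0):
    """Depth for one tracked feature: nearest projected LiDAR point
    within `max_px` pixels, or None if no point lands close enough."""
    d = np.linalg.norm(proj_uv - feature_uv, axis=1)
    i = int(np.argmin(d))
    return proj_z[i] if d[i] <= max_px else None

# One LiDAR point 5 m straight ahead projects to the principal point,
# so a feature tracked one pixel away inherits its depth directly.
lidar_cam = np.array([[0.0, 0.0, 5.0]])
uv, z = project_points(lidar_cam, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
d = feature_depth(np.array([321.0, 240.0]), uv, z)
```

Assigning metric depth from LiDAR avoids the slow, noisy triangulation-from-parallax step, which is what makes VIS usable in low-texture or low-parallax motion.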
This tightly coupled architecture ensures robust tracking, reducing drift and improving localization accuracy, even in feature-poor environments.
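How a pose prior helps scan matching can be sketched with a single point-to-point ICP iteration seeded by an initial guess; in this system that guess would come from VIS odometry. This is a brute-force toy with hypothetical names, not the actual scan-matching code.

```python
import numpy as np

def icp_once(src, dst, R0, t0):
    """One point-to-point ICP iteration, seeded with an initial guess (R0, t0).
    Returns the refined rotation and translation mapping src onto dst."""
    src_w = src @ R0.T + t0
    # Brute-force nearest-neighbour association
    idx = np.argmin(((src_w[:, None] - dst[None]) ** 2).sum(-1), axis=1)
    matched = dst[idx]
    # Closed-form rigid alignment (Kabsch algorithm)
    mu_s, mu_d = src_w.mean(0), matched.mean(0)
    H = (src_w - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R @ R0, R @ t0 + t

# With a good seed, the correspondences are correct and one
# iteration recovers the 0.1 m offset between the two clouds.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dst = src + np.array([0.1, 0.0, 0.0])
R, t = icp_once(src, dst, np.eye(3), np.zeros(3))
```

A poor seed would produce wrong nearest-neighbour matches and a wrong alignment, which is exactly why feeding VIS odometry into LIS scan matching improves robustness.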
The system has been developed and tested with the following hardware:
- LiDAR: Livox MID360
- Camera: Arducam IMX219
- IMU: Integrated Livox 6-axis IMU
- Processing Unit: NVIDIA Jetson Orin NX
- Carrier Board: Seeedstudio A603
While it is possible to run the system on different hardware, modifications may be required.