Description
Dear Dr. Nubert,
First, thank you for your valuable and comprehensive work on Holistic Fusion (HF). I'm very interested in the framework and would like to integrate it into my application.
I noticed that both the paper and the code implementation primarily focus on LiDAR-based SLAM. My use case, however, relies heavily on visual SLAM (e.g., monocular/stereo cameras). Could you kindly clarify the following:
1. Compatibility: Does HF natively support visual SLAM measurements (e.g., feature-based or direct methods)? If so, are there existing interfaces or examples for integrating visual odometry (e.g., ORB-SLAM, VINS-Fusion)?
2. Modifications Needed: Would adapting HF for vision-centric systems require significant changes to the factor graph formulation or the sensor abstraction layer? (A rough sketch of what I have in mind follows below.)
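To make point 2 concrete, here is a minimal sketch of how I picture feeding VO relative-pose measurements into a GTSAM-style factor graph, which I understand HF uses under the hood. All names, noise values, and poses below are my own placeholders for illustration, not HF's actual API:

```cpp
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using gtsam::symbol_shorthand::X;  // X(k): body pose at keyframe k

int main() {
  gtsam::NonlinearFactorGraph graph;
  gtsam::Values initial;

  // Anchor the first pose with a tight prior (placeholder sigmas).
  auto priorNoise = gtsam::noiseModel::Isotropic::Sigma(6, 1e-3);
  graph.emplace_shared<gtsam::PriorFactor<gtsam::Pose3>>(
      X(0), gtsam::Pose3(), priorNoise);
  initial.insert(X(0), gtsam::Pose3());

  // Relative pose reported by a VO front end (e.g., ORB-SLAM or
  // VINS-Fusion) between keyframes 0 and 1; values are placeholders.
  gtsam::Pose3 voDelta(gtsam::Rot3::RzRyRx(0.0, 0.0, 0.05),
                       gtsam::Point3(0.10, 0.0, 0.0));

  // Stand-in diagonal noise for the VO covariance (rot [rad], trans [m]).
  auto voNoise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector6() << 0.01, 0.01, 0.01, 0.05, 0.05, 0.05).finished());

  // Each VO measurement becomes one between factor on consecutive pose
  // keys, structurally identical to a LiDAR-odometry between factor.
  graph.emplace_shared<gtsam::BetweenFactor<gtsam::Pose3>>(
      X(0), X(1), voDelta, voNoise);
  initial.insert(X(1), voDelta);

  auto result = gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.print("optimized poses:\n");
  return 0;
}
```

If VO deltas can enter the graph this way, my question 2 essentially reduces to whether HF's sensor abstraction layer already exposes a hook for such relative-pose measurements, or whether a new measurement handler would need to be written.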
Your insights would greatly help me evaluate HF’s applicability to my project. The framework’s flexibility for multi-sensor fusion is impressive, and I’d love to contribute to its vision-oriented extensions if feasible.
Thank you for your time! Looking forward to your response.