A target-less, automatic toolbox for LiDAR-camera extrinsic calibration that supports a wide range of sensor models.
Direct Visual LiDAR Calibration is a toolbox for extrinsic calibration between LiDAR and camera sensors. It automatically aligns 3D point clouds with 2D images using environmental features instead of calibration targets. The method supports various LiDAR and camera models and can calibrate from as little as a single LiDAR-camera data pair.
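The output of such a calibration is a rigid transform (rotation and translation) that maps LiDAR points into the camera frame, after which they can be projected onto the image. The sketch below illustrates that downstream use with a pinhole camera model; the intrinsics and extrinsics are made-up illustrative values, and none of the function names come from the toolbox's own API.

```python
def transform_point(R, t, p):
    """Apply a 3x3 rotation matrix R (nested lists, row-major) and a
    translation vector t to a 3D point p (the LiDAR-to-camera extrinsics)."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def project_pinhole(p_cam, fx, fy, cx, cy):
    """Project a 3D point in the camera frame to pixel coordinates
    using the pinhole model; returns None for points behind the camera."""
    x, y, z = p_cam
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# Hypothetical extrinsics: identity rotation, LiDAR offset 10 cm along the optical axis.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.1]

p_lidar = [0.5, -0.2, 2.0]                       # a point measured by the LiDAR
p_cam = transform_point(R, t, p_lidar)           # same point in the camera frame
uv = project_pinhole(p_cam, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Once the extrinsics are estimated, the same projection lets you color point clouds from images or fuse LiDAR depth into vision pipelines.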
Robotics engineers, autonomous vehicle developers, and researchers working on sensor fusion, 3D perception, or multi-modal sensor systems.
It eliminates the need for calibration targets and manual intervention while supporting diverse sensor configurations, making it more practical and accessible than traditional calibration methods.
A toolbox for target-less LiDAR-camera calibration [ROS1/ROS2]
Handles various LiDAR types (spinning and non-repetitive scan) and camera models (pinhole, fisheye, omnidirectional), making it versatile for different sensor setups.
Uses environmental features instead of specialized targets, eliminating manual setup and making it practical for field deployments where targets are impractical.
Can calibrate with a single LiDAR-camera pair, reducing data collection effort, with optional multi-pair support for improved accuracy.
Operates without an initial guess, automating calibration and reducing manual intervention, as highlighted in the README.
Requires installation of ROS, PCL, OpenCV, GTSAM, Ceres, and other libraries, which can be challenging and time-consuming for users without prior experience.
Designed specifically for ROS, limiting use to ROS-based projects and lacking a standalone version for broader integration.
Pixel-level direct registration is computationally intensive, since every candidate pose requires re-projecting and comparing large numbers of points, which can slow processing with high-resolution data or in real-time scenarios.
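To make the cost concrete, here is a toy pixel-level registration cost in the spirit of direct methods: each LiDAR point is projected into the image and its intensity is compared with the pixel it lands on. This is a deliberately simplified illustration, not the toolbox's actual objective function, and the `project` callback is a hypothetical stand-in for a full extrinsics-plus-camera-model projection.

```python
def direct_registration_cost(points, lidar_intensities, image, project):
    """Toy direct-alignment cost: squared difference between each LiDAR
    point's intensity and the image intensity at its projected pixel.
    Evaluating this for every candidate pose means re-projecting all
    points, which is why direct methods scale with point count."""
    cost, used = 0.0, 0
    for p, li in zip(points, lidar_intensities):
        uv = project(p)
        if uv is None:
            continue  # point not visible from the camera
        u, v = int(round(uv[0])), int(round(uv[1]))
        if 0 <= v < len(image) and 0 <= u < len(image[0]):
            cost += (image[v][u] - li) ** 2
            used += 1
    return cost / used if used else float("inf")

# Tiny synthetic example: a 2x2 grayscale image and a trivial projection
# that reads pixel coordinates straight from the point (for illustration only).
image = [[0.5, 0.2],
         [0.1, 0.9]]
points = [(0, 0, 1.0), (1, 1, 1.0)]
intensities = [0.5, 0.9]
cost = direct_registration_cost(points, intensities, image,
                                project=lambda p: (p[0], p[1]))
```

A pose that aligns LiDAR intensities with image intensities drives this cost toward zero; an optimizer searches over the extrinsic parameters to minimize it.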