A ROS package for extrinsic calibration between LiDAR and camera sensors using 3D-3D point correspondences.
lidar_camera_calibration is a ROS package that computes the extrinsic calibration (rigid-body transformation) between a LiDAR sensor and a camera. It solves the critical problem of aligning 3D point cloud data with 2D image frames, which is necessary for accurate sensor fusion in robotics, autonomous vehicles, and 3D perception systems. The method uses 3D-3D point correspondences obtained from ArUco marker boards placed in the environment.
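The core estimation step, recovering a rigid-body transform from matched 3D-3D point pairs, is commonly solved in closed form with the Kabsch/SVD method. A minimal sketch of that technique (not the package's actual implementation; the function name and interface are illustrative):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ≈ R @ src + t,
    from matched 3D-3D point correspondences (Kabsch method via SVD)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Center both point sets on their centroids.
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_src).T @ (dst - mu_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_dst - R @ mu_src
    return R, t
```

With noise-free correspondences this recovers the exact transform; with real marker detections, the residual after alignment is a useful sanity check on calibration quality.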
Robotics engineers and researchers working on autonomous systems, sensor fusion, or perception stacks who need precise alignment between LiDAR and camera data. It's particularly useful for teams deploying multi-sensor platforms in real-world environments.
Developers choose this package because it provides a proven, practical calibration method with support for popular LiDAR hardware (Hesai, Velodyne) and camera types. Its iterative averaging and point cloud fusion capabilities offer robustness and verification tools not always found in basic calibration solutions.
ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"
Works with Hesai and Velodyne LiDARs, as well as both monocular and stereo cameras.
Runs multiple calibration iterations (controlled by MAX_ITERS in config_file.txt) and averages the rotation and translation estimates, improving robustness to noisy correspondences.
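One standard way to average per-iteration estimates, sketched below, is an arithmetic mean for translations and a chordal mean for rotations (mean matrix projected back onto SO(3) via SVD). The package's exact averaging scheme may differ; this function and its signature are illustrative:

```python
import numpy as np

def average_transforms(rotations, translations):
    """Average per-iteration (R, t) estimates: arithmetic mean for t,
    chordal mean for R (the mean matrix projected onto SO(3) via SVD)."""
    M = np.mean(rotations, axis=0)             # mean matrix, generally not a rotation
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))         # keep det(R) = +1
    R_avg = U @ np.diag([1.0, 1.0, d]) @ Vt    # nearest rotation to M
    t_avg = np.mean(translations, axis=0)
    return R_avg, t_avg
```

Averaging raw rotation matrices element-wise does not generally yield a rotation, which is why the SVD projection step is needed.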
Includes tools to fuse point clouds from multiple extrinsically calibrated cameras, offering an intuitive visual check of calibration accuracy (demonstrated in the project's fusion videos).
Allows filtering LiDAR points by spatial bounds and intensity thresholds in config_file.txt, making it easier to isolate and mark board edges during calibration.
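The effect of those spatial and intensity thresholds is a pass-through filter over the cloud. A sketch of the idea (parameter names and layout are illustrative, not the config_file.txt format):

```python
import numpy as np

def filter_cloud(points, intensity, bounds, min_intensity):
    """Keep points inside the axis-aligned box
    bounds = ((xmin, xmax), (ymin, ymax), (zmin, zmax))
    whose intensity is at least min_intensity."""
    points = np.asarray(points, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    mask = intensity >= min_intensity
    for axis, (lo, hi) in enumerate(bounds):
        mask &= (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]
```

Tight bounds around the marker boards plus an intensity floor leave mostly board points in view, which is what makes the edges easy to mark interactively.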
Requires manual placement of ArUco marker boards and interactive marking of line segments in point clouds, making it labor-intensive and unsuitable for automated workflows.
Currently supports only Hesai and Velodyne LiDARs; integration with other manufacturers is listed as a future feature in the Contributing section, restricting hardware compatibility for now.
Demands precise setup of multiple configuration files (e.g., config_file.txt, marker_coordinates.txt) with specific parameters, which can be error-prone and time-consuming for new users.