A direct, sparse visual odometry library that estimates camera motion from monocular image sequences in real time.
DSO (Direct Sparse Odometry) is a monocular visual odometry library that estimates camera motion from image sequences in real time. It uses a direct, sparse optimization approach, minimizing photometric error over selected pixels to track camera pose without relying on feature matching. The method is designed for high accuracy given proper geometric and photometric calibration.
Researchers and developers in robotics, computer vision, and augmented reality who need real-time camera tracking from monocular video. It's particularly suited for those working on visual SLAM, drone navigation, or 3D reconstruction systems.
DSO offers high-precision odometry through direct sparse optimization, which is more accurate than feature-based methods under good calibration. Its open-source implementation provides a robust foundation for custom visual odometry pipelines, with extensible I/O wrappers for integration into various systems.
Direct Sparse Odometry
Minimizes error directly on pixel intensities, avoiding feature matching pitfalls and providing higher accuracy under good calibration, as emphasized in the DSO paper.
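The core idea can be sketched as follows: back-project a reference pixel using its inverse depth, reproject it into the current frame, and compare intensities. This is a simplified illustration (a single residual with nearest-neighbour sampling, no Huber weighting or affine brightness terms), not DSO's actual implementation.

```python
import numpy as np

def photometric_residual(img_ref, img_cur, u, v, inv_depth, K, R, t):
    """Residual between a reference pixel and its reprojection in the
    current frame (simplified: no Huber norm, no brightness transfer)."""
    fx, fy, cx, cy = K
    # Back-project the reference pixel to a 3D point using its inverse depth.
    z = 1.0 / inv_depth
    p_ref = np.array([(u - cx) / fx * z, (v - cy) / fy * z, z])
    # Transform into the current camera frame and project.
    p_cur = R @ p_ref + t
    u2 = fx * p_cur[0] / p_cur[2] + cx
    v2 = fy * p_cur[1] / p_cur[2] + cy
    # Nearest-neighbour sampling (DSO itself interpolates bilinearly).
    return img_cur[int(round(v2)), int(round(u2))] - img_ref[v, u]
```

The full system sums squared residuals like this over all active points and jointly optimizes poses, inverse depths, and calibration parameters.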
Uses a sparse set of points for optimization, enabling real-time performance on standard hardware with preset modes for speed/accuracy trade-offs.
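Sparsity comes from selecting only well-constrained, high-gradient pixels spread evenly over the image. The sketch below picks at most one strong-gradient pixel per block; DSO's actual selector is more elaborate (region-adaptive thresholds across several pyramid levels), so treat this as a rough illustration.

```python
import numpy as np

def select_points(img, grad_thresh=7.0, block=8):
    """Pick at most one high-gradient pixel per block x block region,
    keeping the point set sparse and well spread over the image."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    h, w = img.shape
    points = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = mag[by:by + block, bx:bx + block]
            iy, ix = np.unravel_index(np.argmax(patch), patch.shape)
            if patch[iy, ix] > grad_thresh:
                points.append((by + iy, bx + ix))
    return points
```

Capping the number of points per region is what keeps the optimization problem small enough for real-time solving.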
Incorporates lens vignetting and non-linear response functions, improving robustness to lighting variations, as demonstrated with TUM monoVO datasets.
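Photometric calibration is undone per pixel before tracking: raw intensities are mapped through the inverse response function and the vignette attenuation is divided out. A minimal sketch (DSO reads these from calibration files; the names here are illustrative):

```python
import numpy as np

def photometrically_correct(raw, inv_response, vignette):
    """Map raw 8-bit intensities through the inverse response function
    (a 256-entry lookup table), then divide out the per-pixel vignette
    attenuation (values in (0, 1])."""
    irradiance = inv_response[raw]
    return irradiance / vignette
```

With an identity response curve and a uniform vignette, the correction is a no-op, which makes it easy to validate a calibration pipeline.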
Provides wrapper interfaces for custom image input, visualization, and data output, allowing easy integration into projects like ROS without modifying core logic.
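The integration pattern looks roughly like this: the core system publishes results through an abstract interface, and integrators subclass it with their own sinks (ROS topics, files, GUIs). This Python sketch only illustrates the pattern; DSO's actual wrappers are C++ classes, and the names below are hypothetical.

```python
class Output3DWrapperSketch:
    """Hypothetical stand-in for DSO's output interface: override the
    callbacks you care about; the core system calls them, never the
    other way round."""
    def publish_cam_pose(self, pose):      # hypothetical callback name
        pass

    def publish_keyframes(self, frames):   # hypothetical callback name
        pass

class PoseLogger(Output3DWrapperSketch):
    """Example integration: collect every published camera pose."""
    def __init__(self):
        self.poses = []

    def publish_cam_pose(self, pose):
        self.poses.append(pose)
```

Because the core only sees the abstract interface, custom outputs can be added without touching the odometry logic.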
Accuracy degrades sharply with poor geometric calibration; the README notes that a distortion of 1.5 pixels can reduce accuracy by a factor of 10, demanding meticulous setup.
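To get a rough sense of why small distortions matter: in two-view triangulation, depth is inversely proportional to parallax, so a fixed pixel error corrupts distant points disproportionately. A toy calculation (the numbers are illustrative, not from the DSO paper):

```python
def triangulated_depth(fx, baseline, disparity_px):
    """Stereo-style depth from parallax: z = f * b / d."""
    return fx * baseline / disparity_px

fx, baseline = 500.0, 0.2        # illustrative focal length (px) and baseline (m)
true_disp = 5.0                  # a distant point: 5 px of parallax
z_true = triangulated_depth(fx, baseline, true_disp)
z_bad = triangulated_depth(fx, baseline, true_disp - 1.5)  # 1.5 px distortion
# z_true = 20 m, z_bad ~ 28.6 m: roughly 43% depth error from 1.5 px.
```

DSO's joint photometric optimization amplifies this sensitivity, since every residual depends on accurate reprojection.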
The built-in initializer is sluggish and unreliable, requiring slow, careful camera motion during startup, which limits out-of-the-box real-world deployment.
Because DSO is pure visual odometry with no relocalization or loop closure, any tracking loss is permanent, making it unsuitable for long-term operation or dynamic environments.
Requires multiple libraries such as OpenCV, Pangolin, and SuiteSparse, plus optional dependencies for full functionality, which complicates setup and maintenance.