A benchmark dataset for long-range (up to 250m) dense depth estimation in autonomous driving, featuring 360° LiDAR ground truth.
DDAD (Dense Depth for Autonomous Driving) is a benchmark dataset for training and evaluating monocular depth estimation models in autonomous driving contexts. It provides synchronized camera videos and highly accurate, long-range LiDAR-based depth ground truth across a 360° field of view. The dataset addresses the challenge of obtaining reliable depth perception in diverse and complex urban environments.
Researchers and engineers working on computer vision for autonomous vehicles, specifically those developing or benchmarking monocular depth estimation, 3D perception, and sensor fusion models.
DDAD offers unique long-range (up to 250m) depth ground truth with high precision, combined with 360° multi-camera coverage, making it a more challenging and realistic benchmark than existing datasets. Its cross-continental urban diversity and association with a public depth estimation challenge provide a standardized platform for advancing state-of-the-art perception.
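Benchmarks like DDAD are typically scored with standard monocular-depth metrics (AbsRel, RMSE) computed only over pixels with valid LiDAR returns, capped at the evaluated range. A minimal sketch of that scoring, using a hypothetical helper (not part of DDAD's official tooling) and DDAD's 250 m range:

```python
import math

MAX_DEPTH = 250.0  # DDAD's long-range evaluation cap, in metres

def depth_metrics(pred, gt, max_depth=MAX_DEPTH):
    """Compute AbsRel and RMSE over pixels with valid ground truth.

    `pred` and `gt` are flat sequences of depths in metres; ground-truth
    values of 0 mark pixels without a LiDAR return and are skipped.
    """
    abs_rel, sq_err, n = 0.0, 0.0, 0
    for p, g in zip(pred, gt):
        if g <= 0 or g > max_depth:
            continue  # no LiDAR return, or beyond the evaluated range
        abs_rel += abs(p - g) / g
        sq_err += (p - g) ** 2
        n += 1
    if n == 0:
        raise ValueError("no valid ground-truth depths")
    return abs_rel / n, math.sqrt(sq_err / n)

# Toy usage: two valid pixels (10 m and 100 m), one pixel with no return (0).
abs_rel, rmse = depth_metrics([11.0, 90.0, 5.0], [10.0, 100.0, 0.0])
```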
Uses Luminar-H2 LiDARs with sub-centimeter precision at ranges up to 250 meters, providing ground truth reliable enough for autonomous vehicle planning.
Includes six synchronized cameras and LiDARs arranged for full panoramic coverage, enabling comprehensive scene analysis and sensor-fusion research.
Captures data from multiple cities in the U.S. and Japan under varied driving conditions, improving model robustness for real-world deployment.
Applies face and license plate blurring using state-of-the-art detectors, ensuring compliance with privacy standards without compromising data utility.
The dataset is 257 GB for train+val, which can be prohibitive for researchers with limited resources, and downloading requires significant time and infrastructure.
Loading and processing the data requires TRI's DGP codebase, adding tooling complexity and potential vendor lock-in.
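As a rough sketch of what that DGP dependency looks like in practice, loading goes through DGP's `SynchronizedSceneDataset`; the path and datum names below are placeholders, and the exact arguments should be checked against the DDAD README (no test is included since running this needs the `dgp` package and the downloaded dataset):

```python
# Illustrative sketch only: assumes `dgp` is installed and DDAD is downloaded.
from dgp.datasets import SynchronizedSceneDataset

dataset = SynchronizedSceneDataset(
    '/data/ddad/ddad.json',             # placeholder path to the scene dataset JSON
    datum_names=('lidar', 'CAMERA_01'), # one camera plus the LiDAR datum
    generate_depth_from_datum='lidar',  # project LiDAR returns into depth maps
    split='train',
)

sample = dataset[0]  # synchronized datums (image + depth) for one timestamp
```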
Licensed under CC BY-NC-SA, which restricts commercial use without additional permissions, limiting its applicability for industry projects.