A PyTorch framework for semantic segmentation of large 3D point clouds using superpoint graphs.
Superpoint Graphs (SPG) is a PyTorch-based framework for semantic segmentation of large 3D point clouds. It solves the problem of efficiently labeling each point in massive point clouds (e.g., from LiDAR scans) by first grouping points into superpoints based on geometric features, then using graph neural networks to classify these superpoints. This approach reduces computational complexity while capturing contextual relationships between regions.
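The two-stage idea — first group points into superpoints, then reason over a much smaller graph of superpoints — can be caricatured in a few lines. This is an illustrative sketch only: SPG's actual partition uses an L0-cut-pursuit geometric segmentation, not the coarse voxel grouping used here for brevity.

```python
import numpy as np

# Toy point cloud: 1000 random points in the unit cube (illustrative only).
rng = np.random.default_rng(0)
points = rng.random((1000, 3))

# Stage 1 (caricature): group points into "superpoints" via coarse voxels.
# SPG itself partitions on geometric homogeneity, not a fixed grid.
voxel = np.floor(points / 0.25).astype(int)
keys, sp_index = np.unique(voxel, axis=0, return_inverse=True)
n_sp = len(keys)

# Summarize each superpoint by a small descriptor (here: centroid and size),
# standing in for the per-superpoint embeddings a network would produce.
centroids = np.zeros((n_sp, 3))
sizes = np.zeros(n_sp, dtype=int)
for i in range(n_sp):
    mask = sp_index == i
    centroids[i] = points[mask].mean(axis=0)
    sizes[i] = mask.sum()

# Stage 2 (caricature): connect adjacent superpoints, yielding the graph
# on which a graph neural network would classify each node.
key_to_id = {tuple(k): i for i, k in enumerate(keys)}
edges = []
for k, i in key_to_id.items():
    for d in range(3):
        nb = list(k)
        nb[d] += 1
        j = key_to_id.get(tuple(nb))
        if j is not None:
            edges.append((i, j))

# The graph has far fewer nodes and edges than there are raw points,
# which is the source of SPG's efficiency.
print(n_sp, len(edges))
```

Classifying a few dozen superpoint nodes instead of a million raw points is what makes the approach tractable on large scans.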
Researchers and developers working on 3D computer vision, autonomous driving, robotics, or geospatial analysis who need to segment large-scale point cloud data like urban scenes or indoor environments.
SPG offers a structured graph-based approach that combines efficient geometric partitioning with deep learning, enabling accurate segmentation of massive point clouds where raw point-wise methods are computationally prohibitive. Its flexibility with datasets and optional PyTorch Geometric integration makes it adaptable for various applications.
Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs
Groups points into geometrically homogeneous superpoints to reduce complexity, enabling segmentation of massive urban LiDAR scans as demonstrated in the CVPR2018 paper.
Supports both handcrafted geometric features and deep metric learning for superpoint generation, allowing users to choose based on data characteristics and research goals.
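The handcrafted route typically relies on dimensionality descriptors derived from the eigenvalues of a neighborhood's covariance matrix (linearity, planarity, scattering, in the style of Demantké et al.). The sketch below shows the standard formulation; the exact definitions in the repository's partition code may differ slightly.

```python
import numpy as np

def dimensionality_features(neighborhood):
    """Compute linearity, planarity, and scattering for a point neighborhood.

    Classic handcrafted geometric descriptors from the sorted eigenvalues
    l1 >= l2 >= l3 of the neighborhood covariance matrix; by construction
    the three values sum to 1.
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    # eigvalsh returns ascending eigenvalues; reverse for descending order.
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    scattering = l3 / l1
    return linearity, planarity, scattering

# Example: a nearly 1D neighborhood (points along a line with tiny noise)
# should score high on linearity and low on planarity and scattering.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
line = np.stack([t, 2 * t, -t], axis=1) + 1e-3 * rng.standard_normal((50, 3))
lin, pla, sca = dimensionality_features(line)
print(lin, pla, sca)
```

Such descriptors drive the geometric partition when no learned embedding is used; the deep metric learning variant replaces them with features trained so that points in the same object embed close together.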
Optional PyTorch Geometric integration provides stable and fast graph convolutions, improving the reliability and performance of graph-based learning, as noted in the README.
Includes utilities to visualize input, ground truth, partitions, results, and superedge structures with configurable output types, aiding in debugging and analysis.
The repository is explicitly marked as no longer maintained, with authors recommending SuperPoint Transformer for better performance, reducing long-term viability.
Installation requires compiling C++ libraries, managing specific versions of Boost and Eigen, and handling git submodules, making setup an error-prone and time-consuming process.
Features for datasets like Semantic3D and ScanNet are listed as 'to come soon' or unavailable, limiting immediate use without custom adaptations.
The partition method is inherently stochastic, causing slight variations in results across runs even with pretrained weights, as acknowledged in the repository's disclaimer.