A fast Support Vector Machine (SVM) library that leverages GPUs and multi-core CPUs for high-performance machine learning.
ThunderSVM is a fast, open-source library for training and deploying Support Vector Machine (SVM) models. It accelerates SVM computations by exploiting the parallelism of GPUs and multi-core CPUs, addressing the performance limitations that often make SVMs impractical for large-scale datasets.
Data scientists, machine learning engineers, and researchers who need to train SVM models on large datasets and require significant speed improvements over traditional CPU-based libraries like LibSVM.
Developers choose ThunderSVM for its substantial performance gains, often an order of magnitude or more over CPU-based baselines, while it maintains compatibility with the widely used LibSVM interface, ensuring a smooth transition with minimal code changes.
ThunderSVM: A Fast SVM Library on GPUs and CPUs
Supports all LibSVM features and uses identical command-line options, allowing seamless migration from existing LibSVM workflows without code changes.
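As a sketch of this drop-in CLI compatibility: the binary names `thundersvm-train` and `thundersvm-predict` come from the project's README, while the data and model file names below are placeholders. The flags are the familiar LibSVM ones.

```shell
# Train a C-SVC (-s 0) with an RBF kernel (-t 2), using the same
# flags LibSVM's svm-train accepts: -c is the cost, -g the kernel gamma.
thundersvm-train -s 0 -t 2 -c 100 -g 0.5 train.libsvm model.bin

# Predict with the trained model, writing labels to out.txt,
# mirroring LibSVM's svm-predict invocation.
thundersvm-predict test.libsvm model.bin out.txt
```

An existing LibSVM script should work by swapping the binary names alone; the option grammar stays the same.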
Provides native Python, R, Matlab, and Ruby interfaces, enabling integration into data science environments beyond the Python ecosystem alone.
Exploits CUDA-enabled GPUs to drastically reduce SVM training times, making it practical for large datasets where traditional implementations are too slow.
Offers a familiar scikit-learn interface in Python, facilitating easy use within standard machine learning pipelines and reducing the learning curve.
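To illustrate what that scikit-learn-style workflow looks like, the sketch below uses `sklearn.svm.SVC` as a stand-in for the shared API; based on the README's claim, replacing the import with `from thundersvm import SVC` (class name assumed from the Python binding) would move training to the GPU while the rest of the pipeline stays unchanged.

```python
# Illustrative sketch of the scikit-learn-style interface.
# With ThunderSVM installed, the import below would become
#   from thundersvm import SVC
# (an assumption based on the README; the remaining calls are standard
# scikit-learn estimator API: fit/predict/score).
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Small synthetic binary-classification problem for demonstration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same hyperparameter names as the LibSVM CLI: C (cost) and gamma.
clf = SVC(C=100, kernel="rbf", gamma=0.5)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

Because the estimator interface matches, the classifier can also be dropped into standard scikit-learn pipelines and grid searches.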
Requires manual compilation with cmake and specific compilers (e.g., gcc 4.8+ or Visual C++), which can be error-prone compared to simple pip installations for pure Python libraries.
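The manual build is a typical out-of-source cmake workflow; the steps below are a sketch based on the project's README (the `USE_CUDA` option is taken from there, and platform-specific details may differ).

```shell
# Clone and build ThunderSVM from source (Linux-style commands).
git clone https://github.com/Xtra-Computing/thundersvm.git
cd thundersvm
mkdir build && cd build

# Configure; per the README, -DUSE_CUDA=OFF selects the CPU-only build.
cmake ..

# Compile with all available cores.
make -j"$(nproc)"
```

On Windows, cmake generates a Visual C++ solution instead of Makefiles, so the last step runs through the Visual Studio toolchain.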
Optimal performance hinges on CUDA-enabled NVIDIA GPUs; a CPU version exists, but its speedups are less dramatic, leaving users on other hardware or in GPU-less environments with a smaller advantage.
Prebuilt wheel files are listed for only certain CUDA versions and operating systems (e.g., CUDA 9.0 on Linux, CUDA 10.0 on Windows), forcing users with other setups to compile from source and increasing setup time.
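When a matching wheel does exist, installation is a one-liner; the filename below is a placeholder, so substitute the actual wheel for your CUDA version and OS from the project's distribution page.

```shell
# Placeholder wheel name -- replace with the file matching your
# CUDA version, Python version, and operating system.
pip install thundersvm-<version>-<python-tag>-<platform>.whl
```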