A fast, flexible C++ standalone library for machine learning with high-performance defaults and total internal modifiability.
Flashlight is a fast, flexible machine learning library from Facebook AI Research, written entirely in C++. It provides a standalone framework for building and experimenting with neural networks, featuring high-performance tensor computation via ArrayFire and a small, modifiable codebase. It addresses the need for an efficient, scalable, and extensible research tool in native C++.
Machine learning researchers and engineers who require high-performance, customizable C++ frameworks for experimenting with new algorithms and models, particularly in domains like speech recognition, computer vision, and language modeling.
Developers choose Flashlight for its total internal modifiability, small footprint, and high-performance defaults, enabling rapid iteration without sacrificing efficiency. Its native C++ design and integration with ArrayFire provide a powerful, unopinionated alternative to larger frameworks.
A C++ standalone library for machine learning
Exposes internal APIs for tensor computation, letting researchers customize low-level operations without framework constraints; the README highlights this modifiability as a core feature.
The core library is under 10 MB and 20k lines of C++, making it lightweight and easier to audit or modify compared to larger frameworks.
Uses ArrayFire for just-in-time kernel compilation, ensuring optimized tensor operations and efficient computation for research workloads.
Includes ready-to-use apps for speech recognition, image classification, and language modeling, accelerating research in these areas without starting from scratch.
Requires managing numerous dependencies like ArrayFire, CUDA, MKL, and MPI, with intricate CMake build options that can be error-prone and time-consuming.
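A rough sketch of what such a build typically looks like; this is a config fragment, and the CMake option names are assumptions that vary by Flashlight version, so check the project's README and CMake files for the exact flags on your checkout.

```shell
# Illustrative only: flag names below are assumed, not verified.
git clone https://github.com/flashlight/flashlight.git
cd flashlight && mkdir -p build && cd build

# Typical pattern: choose a backend and point CMake at ArrayFire,
# CUDA/MKL, and MPI installs if they are not on default search paths.
cmake .. -DCMAKE_BUILD_TYPE=Release -DFL_BACKEND=CUDA  # assumed flag
make -j"$(nproc)"
```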
As a niche framework, it lacks the extensive pre-trained models, tutorials, and third-party tools available in larger ecosystems like PyTorch or TensorFlow.
Heavily reliant on ArrayFire for tensor operations, which may restrict flexibility if future needs outgrow or conflict with ArrayFire's capabilities.