A deep learning framework for research, development, and production, with a flexible Python API and a C++ core.
Neural Network Libraries (NNabla) is a deep learning framework developed by Sony for research, development, and production use. It provides a flexible Python API built on a C++ core, supporting both static and dynamic computation graphs, and runs on various platforms from desktops to embedded devices. The framework aims to simplify model creation, training, and deployment while maintaining high performance and extensibility.
Researchers, engineers, and developers working on deep learning projects who need a flexible framework for prototyping, experimentation, and deploying models across diverse hardware environments.
NNabla offers a clean, extensible architecture that allows easy customization and addition of new modules, combined with efficient performance via CUDA acceleration and memory optimization. Its unified API for static and dynamic graphs provides flexibility uncommon in other frameworks.
Neural Network Libraries
Supports both static and dynamic computation graphs with a unified API, enabling runtime network construction as shown in the stochastic layer example in the README.
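The value of a unified API is that the same model-building calls work whether the graph is executed lazily (static) or immediately (dynamic). The following is a minimal pure-Python sketch of that idea, not NNabla's actual implementation; in NNabla itself, dynamic execution is switched on with constructs such as `nn.auto_forward()`.

```python
class Node:
    """A toy computation-graph node. In dynamic mode each op runs as soon
    as it is created; in static mode nothing runs until forward()."""
    auto_forward = False  # toggles dynamic (eager) execution

    def __init__(self, value=None, op=None, inputs=()):
        self.value, self.op, self.inputs = value, op, inputs
        if Node.auto_forward and op is not None:
            self.forward()  # dynamic mode: compute immediately

    def forward(self):
        # Recursively evaluate inputs, then apply this node's op.
        if self.op is not None:
            self.value = self.op(*[i.forward() for i in self.inputs])
        return self.value

    def __add__(self, other):
        return Node(op=lambda a, b: a + b, inputs=(self, other))

    def __mul__(self, other):
        return Node(op=lambda a, b: a * b, inputs=(self, other))

# Static mode: build the graph first, execute later.
x, y = Node(2), Node(3)
z = x + y * y          # graph is recorded, nothing computed yet
assert z.value is None
print(z.forward())     # -> 11

# Dynamic mode: the same expressions execute as they are written.
Node.auto_forward = True
w = Node(2) + Node(3) * Node(3)
print(w.value)         # -> 11
```

The point of the sketch is that user code is identical in both modes; only the execution policy changes, which is what makes runtime network construction (as in the README's stochastic layer example) possible without a separate eager API.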
Most library code is in C++14, making it portable to embedded devices, and it includes a C runtime for efficient inference deployment.
Easy to add new modules, operators, or backends; for instance, the CUDA extension provides GPU acceleration, and the contribution guide highlights simple customization.
Command-line utility nnabla_cli supports conversion between formats like ONNX, TensorFlow, and TFLite, facilitating interoperability with other ecosystems.
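A typical conversion looks like the sketch below. The filenames are placeholders, and exact subcommands and flags may vary across nnabla versions, so consult the file-format converter documentation for the installed release.

```shell
# Convert a trained NNabla model (.nnp) to ONNX; the target format is
# typically inferred from the output file extension.
nnabla_cli convert model.nnp model.onnx

# Likewise for TensorFlow Lite.
nnabla_cli convert model.nnp model.tflite
```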
The README explicitly states that NNabla is in maintenance mode with no active development, so future updates, bug fixes, and adaptation to new AI trends are unlikely.
Supports only recent CUDA versions (e.g., 11.6); older releases such as 10.x, 9.x, and 8.x are not supported, restricting use on legacy systems.
Compared to frameworks like PyTorch or TensorFlow, it has a smaller user base, resulting in fewer pre-trained models, tutorials, and third-party integrations.