A Java deep learning framework implementing neural networks with GPU acceleration via OpenCL and Aparapi.
NeuralNetworks is a Java deep learning framework that implements various neural network architectures and training algorithms with GPU acceleration via OpenCL and Aparapi. It provides tools for building, training, and experimenting with deep neural networks, focusing on modularity and extensibility to support custom network designs.
Java developers and researchers interested in deep learning who want a flexible, extensible framework for experimenting with neural network architectures and training algorithms, especially those requiring GPU acceleration.
Developers choose NeuralNetworks for its pure Java implementation with built-in GPU support, modular architecture that allows custom network topologies, and comprehensive set of pre-implemented networks and training algorithms, making it suitable for both experimentation and production use in Java environments.
Java deep learning algorithms and deep neural networks with GPU acceleration
Leverages OpenCL and Aparapi for high-performance computation, with most ConnectionCalculator implementations optimized for GPU execution, as detailed in the library structure section.
Uses a tiered, graph-based design with LayerCalculator and ConnectionCalculator, allowing easy construction of complex topologies like directed acyclic graphs for custom networks.
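The layer/connection split described above can be sketched in plain Java. The class names below echo the library's LayerCalculator/ConnectionCalculator terminology, but this is a hypothetical illustration of the design idea, not the library's actual API: each connection computes a contribution, and a layer sums the contributions of all its incoming edges, which is enough to express DAG-shaped topologies.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the layer/connection split; names echo the
// library's LayerCalculator/ConnectionCalculator idea but are NOT its API.
public class GraphSketch {
    // A connection knows how to turn an input activation into a contribution.
    interface ConnectionCalculator {
        float[] calculate(float[] input);
    }

    // A weighted (fully connected) edge between two layers.
    static class WeightedConnection implements ConnectionCalculator {
        final float[][] weights; // [out][in]
        WeightedConnection(float[][] weights) { this.weights = weights; }
        public float[] calculate(float[] input) {
            float[] out = new float[weights.length];
            for (int o = 0; o < weights.length; o++)
                for (int i = 0; i < input.length; i++)
                    out[o] += weights[o][i] * input[i];
            return out;
        }
    }

    // A layer sums the contributions of all incoming connections, so several
    // edges may feed one layer -- enough to express a directed acyclic graph.
    static float[] calculateLayer(int size, List<ConnectionCalculator> incoming,
                                  List<float[]> inputs) {
        float[] sum = new float[size];
        for (int c = 0; c < incoming.size(); c++) {
            float[] contribution = incoming.get(c).calculate(inputs.get(c));
            for (int i = 0; i < size; i++) sum[i] += contribution[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        // Two upstream layers feeding one output layer of size 2 (a DAG join).
        ConnectionCalculator a = new WeightedConnection(new float[][]{{1, 0}, {0, 1}});
        ConnectionCalculator b = new WeightedConnection(new float[][]{{2}, {3}});
        List<ConnectionCalculator> in = new ArrayList<>();
        in.add(a); in.add(b);
        List<float[]> acts = new ArrayList<>();
        acts.add(new float[]{1, 1}); acts.add(new float[]{1});
        float[] out = calculateLayer(2, in, acts);
        System.out.println(out[0] + " " + out[1]); // 3.0 4.0
    }
}
```

Because a layer only asks its incoming connections for contributions, swapping in a different ConnectionCalculator (convolutional, pooling, custom) does not change the graph traversal.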
Supports multilayer perceptrons, convolutional networks, RBMs, autoencoders, and deep belief networks, providing a comprehensive set for experimentation, as listed in the neural network types section.
Includes sigmoid, tanh, ReLU, LRN, softplus, and softmax with GPU support, and allows custom implementations, enhancing flexibility for researchers.
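For reference, the math behind several of the listed functions is simple to state in plain Java. This is only a sketch of the formulas, not the library's GPU-backed implementations:

```java
// Plain-Java versions of some of the activation functions listed above.
// Reference math only; the library's GPU-backed implementations differ.
public class Activations {
    static double sigmoid(double x)  { return 1.0 / (1.0 + Math.exp(-x)); }
    static double relu(double x)     { return Math.max(0.0, x); }
    static double softplus(double x) { return Math.log1p(Math.exp(x)); } // log(1 + e^x)
    // tanh is available directly as Math.tanh(x)

    // Softmax normalizes a vector into a probability distribution; subtracting
    // the maximum first keeps exp() from overflowing for large inputs.
    static double[] softmax(double[] x) {
        double max = Double.NEGATIVE_INFINITY, sum = 0.0;
        for (double v : x) max = Math.max(max, v);
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = Math.exp(x[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < x.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sigmoid(0));        // 0.5
        System.out.println(relu(-2));          // 0.0
        double[] p = softmax(new double[]{1, 1});
        System.out.println(p[0] + " " + p[1]); // 0.5 0.5
    }
}
```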
The README acknowledges that some tests do not pass in the current version and that the user interface project is unfinished, indicating potential reliability and maintenance issues.
Requires downloading specific .dll or .so files for Aparapi, setting up OpenCL, and configuring the system PATH, which can be cumbersome and error-prone, as noted in the build instructions.
GPU execution with Aparapi imposes restrictions such as supporting only one-dimensional arrays of primitive types, which requires data conversion and reduces code flexibility, as explained in the GPU section.
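The data-conversion cost mentioned above usually means flattening multi-dimensional data before it reaches a kernel. A minimal sketch of the standard row-major convention (element (r, c) of an R x C matrix lives at index r * C + c), independent of any Aparapi API:

```java
// Aparapi kernels can only see one-dimensional arrays of primitives, so
// multi-dimensional data has to be flattened before it reaches the GPU.
// This sketch shows the usual row-major convention.
public class Flatten {
    static float[] flatten(float[][] m) {
        int rows = m.length, cols = m[0].length;
        float[] flat = new float[rows * cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                flat[r * cols + c] = m[r][c]; // row-major layout
        return flat;
    }

    // The same indexing arithmetic a kernel would use on the flat array.
    static float get(float[] flat, int cols, int r, int c) {
        return flat[r * cols + c];
    }

    public static void main(String[] args) {
        float[][] m = {{1, 2, 3}, {4, 5, 6}};
        float[] flat = flatten(m);
        System.out.println(get(flat, 3, 1, 2)); // 6.0
    }
}
```

Carrying the index arithmetic into every kernel is exactly the loss of flexibility the limitation refers to: the natural two-dimensional view of the data disappears at the GPU boundary.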
Related projects:
TensorFlow: An Open Source Machine Learning Framework for Everyone
PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration
ColossalAI: Making large AI models cheaper, faster, and more accessible
PaddlePaddle: PArallel Distributed Deep LEarning, a machine learning framework from industrial practice (core PaddlePaddle framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)