GPU-accelerated neural network library for JavaScript, running in browsers and Node.js.
Brain.js is a GPU-accelerated neural network library written in JavaScript for both browsers and Node.js. It lets developers create, train, and deploy machine learning models directly in JavaScript environments, and supports several network architectures, including feedforward, RNN, LSTM, and GRU networks. The library makes it straightforward to add ML to a project without Python or external servers.
JavaScript developers and data scientists who want to integrate machine learning into web applications, Node.js backends, or educational projects without leaving the JavaScript ecosystem.
It offers a pure JavaScript solution with GPU acceleration for performance, extensive neural network type support, and easy model serialization, making it ideal for real-time browser-based ML and seamless full-stack JavaScript development.
🤖 GPU accelerated Neural networks in JavaScript for Browsers and Node.js
Uses WebGL via gpu.js for faster training and inference, with a CPU fallback; the README lists this among its key features as a performance boost in both browsers and Node.js.
Supports feedforward, RNN, LSTM, GRU, and autoencoders, enabling diverse tasks like time-series forecasting, as detailed in the neural network types section.
Runs consistently in browsers and Node.js with the same API, allowing seamless ML integration across full-stack JavaScript applications without external servers.
Allows exporting trained models to JSON or standalone functions via toJSON() and toFunction(), simplifying deployment and sharing as highlighted in the JSON and standalone function sections.
GPU support in Node.js relies on headless-gl, which needs system-specific dependencies and must be built from source on platforms such as Windows; the README's installation note warns this can make setup painful.
Async training is not supported for recurrent networks (RNN, LSTM, GRU, and their time-step variants), which limits performance optimizations in browser environments, as noted in the async training section.
Lacks support for convolutional neural networks (CNNs) and attention-based models like transformers, limiting its use for computer vision or state-of-the-art NLP tasks.