A deep learning library for Ruby that provides a native interface to LibTorch, enabling GPU-accelerated neural network development.
Torch.rb is a deep learning library for Ruby that provides a native binding to LibTorch, the C++ backend of PyTorch. It enables Ruby developers to perform tensor computations, build neural networks, and train models with GPU acceleration, bringing PyTorch's capabilities to the Ruby ecosystem.
Ruby developers and data scientists who want to implement deep learning models within Ruby applications without switching to Python for ML tasks.
It offers a Ruby-friendly API that closely mirrors PyTorch, allowing developers to leverage existing PyTorch tutorials and models while staying in a Ruby environment, with support for GPU acceleration and a growing ecosystem of companion libraries.
Deep learning for Ruby, powered by LibTorch
Mirrors PyTorch's API with Ruby conventions like `!` for in-place methods and `?` for boolean checks, making it familiar for Ruby developers while maintaining compatibility with PyTorch tutorials.
Supports CUDA on Linux and Metal Performance Shaders (MPS) on Apple silicon, enabling high-performance tensor computation on GPUs.
Seamlessly converts tensors to and from Numo arrays, allowing easy data manipulation within Ruby's scientific computing ecosystem.
Works with dedicated libraries like TorchVision and TorchText for vision, NLP, and other tasks, extending functionality beyond core tensor operations.
Requires manual download and configuration of LibTorch, including setting build flags and dealing with compilation that can take 5-10 minutes, adding friction to onboarding.
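A rough outline of that setup, assuming a CPU-only build (the LibTorch version shown is only an example; check pytorch.org for current builds):

```shell
# Download and unpack a prebuilt LibTorch (example version)
curl -LO https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.3.0%2Bcpu.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.3.0+cpu.zip -d "$HOME"

# Point the gem's native extension at it; compilation can take 5-10 minutes
gem install torch-rb -- --with-torch-dir="$HOME/libtorch"

# Or, when using Bundler:
bundle config build.torch-rb --with-torch-dir="$HOME/libtorch"
```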
Does not officially support Windows, leaving developers on that operating system dependent on community workarounds.
Loading models saved from Python may require converting parameters, a workaround for known LibTorch bugs that hinders seamless interoperability.