A lightweight probabilistic programming library using NumPy and JAX for autograd and JIT compilation to GPU/TPU/CPU.
NumPyro is a probabilistic programming library that provides a NumPy backend for Pyro, using JAX for automatic differentiation and JIT compilation. It allows users to specify Bayesian models with Pyro primitives and perform efficient inference via MCMC and variational methods on hardware accelerators like GPUs and TPUs.
Data scientists, statisticians, and machine learning researchers who need flexible Bayesian modeling with high-performance inference on modern hardware.
NumPyro combines the familiar Pyro API with JAX's speed and hardware acceleration, offering a lightweight yet powerful platform for probabilistic programming that scales from CPUs to GPUs and TPUs.
Leverages JAX for automatic differentiation and JIT compilation, enabling fast inference on GPUs and TPUs, including the JIT-compiled HMC implementation highlighted in the README.
Uses Pyro primitives like `sample` and `param`, along with familiar distributions and inference algorithms, making it easy for Pyro users to transition.
Supports a range of methods including NUTS, variational inference, and handling of discrete latent variables via MixedHMC or enumeration.
Provides effect handlers for implementing custom inference algorithms, allowing advanced users to extend functionality beyond the built-in methods.
The README explicitly states that Windows support is limited, requiring WSL or manual building from source, which adds setup complexity.
The README warns of brittleness, bugs, and API changes due to ongoing development, making it less stable for long-term projects.
Requires models to be written in a functional style compatible with JAX, which can be a barrier for users accustomed to imperative PyTorch code.
Compared to PyTorch-based Pyro, NumPyro has a smaller community and fewer third-party integrations; neural network components, for example, must come from JAX libraries such as stax or flax.