A Python library for composable transformations of numerical programs: automatic differentiation, vectorization, and JIT compilation to GPU/TPU.
JAX is a Python library for accelerator-oriented array computation and program transformation. Its composable transformations, including automatic differentiation, vectorization, and JIT compilation, scale NumPy programs across TPUs, GPUs, and other hardware accelerators, making it possible to write high-performance numerical and machine learning code that fully exploits modern hardware.
Researchers and engineers working on large-scale numerical computing, machine learning, and scientific simulations who need automatic differentiation, GPU/TPU acceleration, and scalable parallelism.
Developers choose JAX for its unique combination of a familiar NumPy-like API with a powerful, composable transformation system that enables automatic differentiation, efficient compilation via XLA, and seamless scaling across accelerators without rewriting code.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
JAX allows arbitrary composition of grad, jit, and vmap, enabling complex numerical pipelines without code rewrites; the canonical demonstration computes per-example gradients by combining all three.
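A minimal sketch of that composition, assuming a simple squared-error loss (the `loss` function and its data here are illustrative, not from the JAX docs):

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Squared error for a single example.
    return (jnp.dot(w, x) - y) ** 2

# Compose all three transforms: grad (per-example gradient w.r.t. w),
# vmap (map over the batch axis of x and y), jit (compile the whole thing).
per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))

w = jnp.array([1.0, 2.0])
xs = jnp.array([[1.0, 0.0], [0.0, 1.0]])
ys = jnp.array([0.0, 0.0])
grads = per_example_grads(w, xs, ys)
print(grads.shape)  # one gradient per example: (2, 2)
```

Because the transforms compose as ordinary function wrappers, swapping the order (e.g. `vmap(jit(...))`) or adding another transform requires no change to `loss` itself.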
It compiles pure functions with jit using XLA for efficient execution on TPUs, GPUs, and other accelerators, with benchmarks showing significant speedups over plain NumPy.
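A small example in the spirit of the JAX quickstart: jit-compiling a pure elementwise function (the SELU constants below are approximate, for illustration):

```python
import jax
import jax.numpy as jnp

def selu(x, alpha=1.67, lam=1.05):
    # A pure function of its inputs: safe to compile with jit.
    return lam * jnp.where(x > 0, x, alpha * (jnp.exp(x) - 1))

selu_jit = jax.jit(selu)  # traced and compiled via XLA on first call

x = jnp.arange(5.0)
print(selu_jit(x))  # same values as the uncompiled version
```

The first call pays a one-time tracing/compilation cost; subsequent calls with the same input shapes run the cached XLA executable, which is where the speedups over plain NumPy come from.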
Supports reverse-mode and forward-mode differentiation through native Python control flow such as loops and branches, and transformations compose so derivatives can be taken to any order.
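A sketch of differentiating through ordinary Python control flow, including a second-order derivative (the `relu_power` function is an illustrative example, not from the JAX docs):

```python
import jax

def relu_power(x, n):
    # Native Python branch and loop; grad traces through both
    # (under grad alone, x has a concrete value, so `if` works).
    if x > 0:
        y = x
        for _ in range(n - 1):
            y = y * x
        return y
    return 0.0

df = jax.grad(relu_power)            # first derivative w.r.t. x
d2f = jax.grad(jax.grad(relu_power))  # second derivative, by composition

print(df(3.0, 3))   # d/dx x**3 = 3x**2 = 27 at x = 3
print(d2f(3.0, 3))  # d2/dx2 x**3 = 6x = 18 at x = 3
```

Note the asymmetry with jit: `grad` alone tolerates value-dependent Python `if`/`for`, because tracing happens with concrete values.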
Offers compiler-based automatic parallelization, explicit sharding, and manual per-device programming to scale across thousands of devices, as detailed in the scaling section.
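A minimal sketch of the compiler-based path using `jax.sharding`, assuming the array length divides evenly across the available devices (on a single-CPU machine this still exercises the API; real scaling targets many GPUs/TPUs):

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1-D mesh over whatever devices are available.
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=('data',))

# Place the array sharded along the 'data' mesh axis.
x = jnp.arange(8.0)
sharded = jax.device_put(x, NamedSharding(mesh, P('data')))

# The compiler parallelizes the reduction across shards automatically.
total = jax.jit(jnp.sum)(sharded)
print(total)  # 28.0
```

This is the automatic end of the spectrum; explicit sharding annotations and per-device programming (e.g. `shard_map`) give progressively more manual control.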
The installation tables list only limited or experimental support for Windows GPUs, AMD GPUs, and Apple GPUs; these platforms require extra setup steps and are less robust.
JIT compilation restricts value-dependent Python control flow and side effects, forcing rewrites in terms of structured primitives for dynamic behavior, as noted in the control flow tutorial.
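A sketch of the restriction and the standard workaround, using `lax.cond` in place of a Python branch (the `safe_log` function is illustrative):

```python
import jax
import jax.numpy as jnp
from jax import lax

@jax.jit
def safe_log(x):
    # A Python `if x > 0:` here would fail under jit, because x is an
    # abstract tracer with no concrete value; value-dependent branching
    # must go through lax.cond instead.
    return lax.cond(x > 0, jnp.log, lambda v: jnp.zeros_like(v), x)

print(safe_log(jnp.exp(1.0)))  # ~1.0
print(safe_log(-1.0))          # 0.0
```

Loops face the same constraint: value-dependent iteration under jit is expressed with `lax.scan`, `lax.while_loop`, or `lax.fori_loop` rather than native `for`/`while`.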
Mastering the transformation system, sharding, and XLA internals takes significant expertise; the README itself warns of sharp edges and points to a dedicated gotchas notebook.