A fast, differentiable physics engine built with JAX for massively parallel rigid body simulation on accelerator hardware.
Brax is a fast, fully differentiable physics engine built with JAX for simulating rigid body dynamics. It is designed for research and development in robotics, reinforcement learning, human perception, and materials science, running efficiently on a single device or massively in parallel across accelerators such as GPUs and TPUs.
Researchers and developers in robotics, reinforcement learning, and simulation-heavy fields who need high-speed, differentiable physics for experiments, particularly those using JAX and hardware accelerators.
Brax offers massively parallel simulation with full differentiability, allowing gradient-based optimization and rapid training of agents, while providing multiple interchangeable physics pipelines (MJX, Generalized, Positional, Spring) under a unified API for flexibility and transfer learning.
Massively parallel rigid body physics simulation on accelerator hardware.
Brax simulates millions of physics steps per second on TPUs and scales across multiple accelerators, enabling rapid experimentation at scale without complex infrastructure.
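The scaling story rests on a standard JAX pattern: write one physics step for a single world, then `jax.vmap` it over a batch of worlds and `jax.jit` the result. The sketch below illustrates that pattern with a hypothetical point-mass integrator, not Brax's own API.

```python
import jax
import jax.numpy as jnp

def step(pos, vel, dt=0.01):
    # Semi-implicit Euler for a single 2-D particle under constant gravity.
    vel = vel + jnp.array([0.0, -9.81]) * dt
    pos = pos + vel * dt
    return pos, vel

# One compiled call advances 4096 independent worlds at once.
batched_step = jax.jit(jax.vmap(step))

pos = jax.random.normal(jax.random.PRNGKey(0), (4096, 2))
vel = jnp.zeros((4096, 2))
pos, vel = batched_step(pos, vel)
print(pos.shape)  # (4096, 2)
```

Because the batch dimension is introduced by `vmap` rather than hand-written loops, the same single-world code scales to whatever batch size the accelerator's memory allows.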
The engine supports gradient-based optimization: learning algorithms such as analytic policy gradients can backpropagate through the simulator itself for efficient training.
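What "differentiable" buys you is that `jax.grad` can differentiate through an entire rollout, so simulator gradients can drive optimization directly. A minimal sketch, using a toy 1-D point mass rather than Brax itself: gradient descent tunes a launch velocity so the particle reaches a target height.

```python
import jax

def rollout_loss(v0, steps=50, dt=0.01):
    # Simulate a 1-D particle under gravity for `steps` steps and
    # penalize squared distance from a target height of 1.0.
    def body(carry, _):
        pos, vel = carry
        vel = vel - 9.81 * dt
        pos = pos + vel * dt
        return (pos, vel), None
    (pos, _), _ = jax.lax.scan(body, (0.0, v0), None, length=steps)
    return (pos - 1.0) ** 2

# Differentiate the whole rollout with respect to the initial velocity.
grad_fn = jax.grad(rollout_loss)

v0 = 0.0
for _ in range(200):
    v0 = v0 - 0.1 * grad_fn(v0)  # plain gradient descent on the simulator
```

The same idea, applied to contact-rich rigid body dynamics, is what lets analytic policy gradient methods train in far fewer environment steps than zeroth-order RL.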
Offers four interchangeable pipelines (MJX, Generalized, Positional, Spring) with a unified API, facilitating transfer learning and diverse simulation needs in a single framework.
Includes baseline algorithms such as PPO, SAC, and ARS, with training that can complete in seconds to minutes, streamlining reinforcement learning workflows directly within the engine.
Only `brax/training` is actively maintained; environments (`brax/envs`) are deprecated in favor of MuJoCo Playground, limiting out-of-the-box simulation options and stability.
Optimal performance requires JAX with GPU or TPU support, involving complex setup like CUDA installation, which can be a barrier for users without access to accelerators.
The project may be repurposed as a pure RL library, which could bring breaking changes and reduced support for physics simulation features, as the README warning indicates.