A comprehensive Python-first reinforcement learning framework with modular abstractions for decision intelligence applications.
DI-engine is a reinforcement learning framework designed as a generalized decision intelligence engine for PyTorch and JAX. It provides modular abstractions for environments, policies, and models, supporting a wide variety of deep RL algorithms: basic methods such as DQN and PPO as well as advanced areas such as multi-agent systems, offline RL, and imitation learning. The framework addresses the fragmentation of RL implementations by offering a standardized, extensible foundation for both research and real-world applications.
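As a framework-free sketch of the value-based family (the tabular ancestor of DQN) that such libraries standardize, the following toy example runs Q-learning on a small chain MDP. It is purely illustrative and does not use DI-engine's actual API; all names here are hypothetical.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP: action 1 moves right
    (reward 1.0 on reaching the final state), action 0 moves left.
    A minimal sketch of the value-based methods (DQN etc.) that
    frameworks like DI-engine generalize with neural networks."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-step temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# The learned policy should prefer "move right" (action 1)
# in every non-terminal state of the chain.
print(all(row[1] > row[0] for row in q[:-1]))
```

Frameworks like DI-engine wrap exactly this loop (environment interaction, action selection, value update) behind reusable components so the same pipeline can drive DQN, PPO, or multi-agent variants.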
Researchers and engineers working on decision intelligence problems, including those in reinforcement learning, autonomous systems, game AI, and multi-agent simulations. It's particularly valuable for teams needing a unified framework to experiment with diverse RL algorithms or deploy RL solutions in production.
Developers choose DI-engine for its exceptional algorithm coverage, Python-first design, and system optimizations for large-scale training. Unlike many RL libraries, it bridges academic research and practical applications through modular abstractions, supporting real-world use cases such as autonomous driving and StarCraft AI.
OpenDILab Decision AI Engine. The Most Comprehensive Reinforcement Learning Framework.
Supports over 50 algorithms, including DQN, PPO, multi-agent RL, offline RL, and imitation learning, as detailed in the comprehensive algorithm table with runnable demos.
Integrates with projects like DI-drive for autonomous driving and DI-star for game AI, and includes system optimizations such as Kubernetes orchestration for large-scale training.
Provides Python-first, asynchronous-native abstractions for Env, Policy, and Model, allowing customizable pipelines and easy integration of new algorithms or environments.
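To make the Env/Policy/Model separation concrete, here is a hypothetical sketch of such abstractions and a generic rollout loop wired through them. The class and method names are illustrative assumptions, not DI-engine's actual API (see its documentation for the real interfaces).

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces illustrating the Env/Policy/Model split.
class Env(ABC):
    @abstractmethod
    def reset(self):
        """Return the initial observation."""
    @abstractmethod
    def step(self, action):
        """Return (observation, reward, done)."""

class Model(ABC):
    @abstractmethod
    def forward(self, obs):
        """Map an observation to an action score or value."""

class Policy(ABC):
    def __init__(self, model):
        self.model = model
    @abstractmethod
    def act(self, obs):
        """Choose an action from an observation."""

# Trivial concrete implementations to exercise the loop.
class ConstEnv(Env):
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return 0.0, 1.0, self.t >= 3  # reward 1.0/step, 3 steps long

class ZeroModel(Model):
    def forward(self, obs):
        return 0

class GreedyPolicy(Policy):
    def act(self, obs):
        return self.model.forward(obs)

def rollout(env, policy):
    """Generic collection loop: any Env + Policy pair plugs in."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy.act(obs))
        total += reward
    return total

total = rollout(ConstEnv(), GreedyPolicy(ZeroModel()))
```

Because the rollout loop depends only on the abstract interfaces, swapping in a new environment or algorithm does not require touching the pipeline, which is the extensibility property described above.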
Offers well-organized tutorials, API references, and best practices in both English and Chinese, hosted on Read the Docs with extensive testing and coverage badges.
Requires managing multiple sub-projects like treevalue and DI-treetensor, and installation involves Docker images or specific environment configurations, increasing initial overhead.
The modular system and extensive feature set demand significant time to master, especially for users unfamiliar with asynchronous programming or large-scale RL optimizations.
Relies on a fragmented ecosystem of companion libraries (e.g., DI-orchestrator, DI-store) that may require additional integration efforts and lack the maturity of unified alternatives.