Original DeepMind DQN 3.0 implementation for Atari game reinforcement learning, with community tweaks.
DeepMind-Atari-Deep-Q-Learner is the original DeepMind DQN 3.0 implementation for training deep reinforcement learning agents to play Atari games. It provides the exact code behind the landmark 2015 Nature paper, "Human-level control through deep reinforcement learning." The repository also includes community tweaks and tools to replicate the original experiments.
Reinforcement learning researchers and practitioners interested in studying historical DQN implementations or reproducing the original DeepMind Atari experiments.
This project offers the authentic DQN 3.0 codebase directly from DeepMind's groundbreaking research, providing an important reference implementation for the RL community despite newer algorithms being available.
The original code from the DeepMind article + my tweaks
Provides the exact DQN 3.0 implementation from the 2015 Nature paper, ensuring faithful reproduction of DeepMind's landmark experiments.
Includes CUDA-enabled training scripts (run_gpu) that significantly speed up computation, as tested on hardware like the NVIDIA GTX 970.
Comes with scripts to generate gameplay GIFs from trained network snapshots, aiding in result analysis and presentation, as shown in the provided GIF examples.
Offers an installation script that sets up the required dependencies, such as Xitari and AleWrap, to fully replicate the original Atari experiments, as detailed in the README.
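Under the hood, the training these scripts launch follows the Nature-paper recipe: the agent stores transitions in a uniform replay memory and acts epsilon-greedily over its Q-values. A minimal Python sketch of those two mechanics (the repository itself implements them in Lua/Torch; all names here are illustrative):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity buffer of (state, action, reward, next_state, done) transitions."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old transitions are evicted automatically

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling, as in the original DQN
        return random.sample(self.buffer, batch_size)

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In the Nature setup, epsilon is annealed from 1.0 down to 0.1 over roughly the first million frames, so early training is mostly exploration.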
Relies on Lua and Torch 7, which are no longer actively maintained and integrate poorly with modern Python-based RL tooling, limiting community support and updates.
Requires manual setup of multiple components via install_dependencies.sh, which is tailored for Ubuntu 14.04 and can be error-prone on newer systems.
Uses the original DQN algorithm, which is less sample-efficient than modern methods, and is restricted to Atari games without easy extension to other environments.
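For context, the "original DQN" referred to here computes its regression target from a periodically updated target network, without later refinements such as Double DQN or prioritized replay. A sketch of that Bellman target in plain Python (function and parameter names are illustrative, not from the repository):

```python
def dqn_targets(rewards, next_q_values, dones, gamma=0.99):
    """Compute y = r + gamma * max_a Q_target(s', a), truncated at episode end.

    rewards:       list of scalar rewards for a batch
    next_q_values: list of per-action Q-value lists from the frozen target network
    dones:         list of 1.0/0.0 flags, 1.0 where the episode terminated
    """
    return [
        r + gamma * (1.0 - d) * max(q)  # terminal states contribute reward only
        for r, q, d in zip(rewards, next_q_values, dones)
    ]
```

Later algorithms modify this target (Double DQN) or the replay sampling (prioritized replay) to improve stability and sample efficiency, which is part of why the original recipe now looks comparatively data-hungry.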