An open-source benchmark suite of continuous control robotic manipulation environments for multi-task and meta reinforcement learning.
Meta-World is an open-source benchmark suite of continuous control robotic manipulation environments designed for evaluating multi-task and meta reinforcement learning algorithms. It provides standardized tasks like reaching, pushing, and door opening to test how well algorithms can learn multiple skills simultaneously or adapt to new tasks with limited experience. The project addresses the need for reproducible and challenging benchmarks in robotics RL research.
Reinforcement learning researchers and practitioners focusing on multi-task learning, meta-learning, and robotic manipulation, who need standardized environments to evaluate and compare algorithm performance.
Developers choose Meta-World for its comprehensive set of 50 manipulation tasks, strict adherence to the Gymnasium API for easy integration, and its specialized benchmarks (MT and ML series) that are widely recognized in the RL community for rigorous evaluation of multi-task and meta-learning capabilities.
Provides MT1, MT10, and MT50 benchmarks with one-hot task IDs appended to observations, enabling direct evaluation of policies learning multiple tasks simultaneously, as detailed in the Multi-Task Benchmarks section.
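The one-hot task-ID convention can be sketched in a few lines. This is an illustrative helper, not part of the metaworld package: the policy receives the raw observation with a one-hot indicator of the active task concatenated at the end, so a single network can condition on which of the N tasks it is solving.

```python
def append_task_id(obs, task_index, num_tasks):
    """Append a one-hot task indicator to a raw observation vector.

    Mirrors the MT10/MT50 convention described above; this helper is
    illustrative, not the library's actual implementation.
    """
    one_hot = [0.0] * num_tasks
    one_hot[task_index] = 1.0
    return list(obs) + one_hot

# e.g. a 4-dim observation for task 2 of an MT10-style benchmark
augmented = append_task_id([0.1, 0.2, 0.3, 0.4], task_index=2, num_tasks=10)
```

The augmented observation has the original 4 dimensions plus 10 one-hot dimensions, with a single 1.0 at the active task's index.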
Follows the standard Gymnasium interface for environment creation and interaction, ensuring easy integration with existing RL libraries, as shown in the API examples using gym.make.
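The Gymnasium contract that makes this integration easy reduces to two methods: `reset()` returning `(observation, info)` and `step()` returning a 5-tuple `(observation, reward, terminated, truncated, info)`. A minimal stand-in environment (a stub of ours, not a real Meta-World task) shows the loop any Gymnasium-compatible RL library can drive:

```python
class StubManipulationEnv:
    """Tiny stand-in that follows the Gymnasium reset/step contract.

    Not a real Meta-World task -- it only demonstrates the interface
    shape that gym.make-created environments expose.
    """

    def __init__(self, horizon=3):
        self.horizon = horizon
        self.t = 0

    def reset(self, seed=None):
        self.t = 0
        return [0.0], {}  # (observation, info)

    def step(self, action):
        self.t += 1
        obs = [float(self.t)]
        reward = -abs(action)               # placeholder shaped reward
        terminated = False                  # task success would set this
        truncated = self.t >= self.horizon  # time-limit cutoff
        return obs, reward, terminated, truncated, {}

env = StubManipulationEnv()
obs, info = env.reset(seed=0)
done = False
while not done:
    obs, reward, terminated, truncated, info = env.step(0.5)
    done = terminated or truncated
```

Because Meta-World environments follow this exact contract, the same loop works unchanged whether `env` is this stub or a real manipulation task created via `gym.make`.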
Supports both synchronous and asynchronous vectorized environments, allowing users to choose based on compute resources, illustrated in the MT10 and MT50 benchmark code snippets.
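The synchronous/asynchronous distinction can be illustrated with a sketch of what a synchronous vector wrapper does: step every sub-environment in one process, one after another. Gymnasium's real `SyncVectorEnv`/`AsyncVectorEnv` additionally handle batched spaces, autoreset, and (for async) per-environment worker processes; everything below is a simplified stand-in.

```python
class CounterEnv:
    """Trivial stand-in env following the Gymnasium 5-tuple step contract."""

    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0], {}

    def step(self, action):
        self.t += 1
        return [float(self.t)], float(action), False, self.t >= 5, {}


class SyncVectorSketch:
    """Steps several environments sequentially in a single process.

    A sketch of synchronous vectorization only; an asynchronous version
    would instead run each environment in its own worker process and
    exchange observations/actions over pipes.
    """

    def __init__(self, env_fns):
        self.envs = [fn() for fn in env_fns]

    def reset(self):
        return [env.reset()[0] for env in self.envs]

    def step(self, actions):
        results = [env.step(a) for env, a in zip(self.envs, actions)]
        obs, rewards, term, trunc, infos = zip(*results)
        return list(obs), list(rewards), list(term), list(trunc), list(infos)


vec = SyncVectorSketch([CounterEnv for _ in range(4)])
first_obs = vec.reset()
obs, rewards, term, trunc, infos = vec.step([1.0, 1.0, 1.0, 1.0])
```

For a 10-task benchmark like MT10 the synchronous version is usually sufficient; the asynchronous version pays off for MT50, where stepping 50 environments sequentially becomes the bottleneck.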
Allows researchers to build custom benchmarks by combining any selection of the 50 available tasks, offering flexibility for tailored experiments, as described in the Custom Benchmarks section.
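Building a custom benchmark amounts to choosing a subset of task names and constructing an environment per name. The builder below is a sketch with a placeholder factory; the task names follow Meta-World's naming scheme, but in real use `make_env` would be your `gym.make`-based constructor:

```python
def make_custom_benchmark(task_names, make_env):
    """Build a benchmark from an arbitrary subset of task names.

    `make_env` is whatever per-task factory you use (e.g. a gym.make
    wrapper); here it is an injected callable so the sketch stays
    runnable without metaworld installed.
    """
    return {name: make_env(name) for name in task_names}

# Any subset of the 50 available tasks can form a benchmark.
chosen = ["reach-v2", "push-v2", "door-open-v2"]
benchmark = make_custom_benchmark(chosen, make_env=lambda name: {"task": name})
```

The resulting mapping from task name to environment can then be iterated for multi-task training exactly like the predefined MT benchmarks.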
Officially supports only Linux and macOS; Windows support is not guaranteed, as stated in the Installation section, which may hinder development on Windows machines.
Asynchronous environments require multi-process setup and inter-process communication, making benchmarks like MT50 resource-intensive, as noted in the asynchronous execution examples.
Meta-learning benchmarks such as ML10 require separate training and testing environments, adding complexity to the experimental pipeline compared to simpler single-task benchmarks.
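The extra bookkeeping this implies can be sketched as a disjoint train/test split over tasks: meta-training uses one set of tasks, and adaptation is measured on tasks never seen during training. The helper below only illustrates that bookkeeping (ML10's actual 10-train/5-test split is fixed by the benchmark, not chosen by the user):

```python
def split_meta_tasks(task_names, num_train):
    """Partition tasks into meta-train and held-out meta-test sets.

    Illustrative only: ML-style benchmarks ship a fixed split, but the
    invariant is the same -- train and test tasks must not overlap, so
    test performance measures adaptation to genuinely new tasks.
    """
    train = list(task_names[:num_train])
    test = list(task_names[num_train:])
    assert not set(train) & set(test), "train/test tasks must not overlap"
    return train, test

# An ML10-shaped split: 10 meta-train tasks, 5 held-out meta-test tasks.
tasks = [f"task-{i}" for i in range(15)]
train_tasks, test_tasks = split_meta_tasks(tasks, num_train=10)
```

An experiment pipeline then needs two evaluation loops, one over `train_tasks` and one over `test_tasks`, which is the added complexity noted above.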