Open-Awesome

© 2026 Open-Awesome. Curated for the developer elite.


GibsonEnv

MIT License

A virtual environment simulator for training embodied AI agents with real-world perception and physics, featuring domain transfer to real robots.

Visit Website · GitHub
939 stars · 150 forks · 0 contributors

What is GibsonEnv?

Gibson Environment is a virtual simulation platform for training embodied AI agents, such as robots, in realistic 3D environments scanned from the real world. It addresses the challenges of costly and slow real-world robot training by providing a fast, scalable simulator with integrated physics and perception. The platform includes a domain adaptation mechanism called Goggles to help transfer learned policies from simulation to real robots.
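Gibson's environments follow the familiar OpenAI-Gym-style reset/step contract. The sketch below illustrates that interaction pattern only; `StubEnv` is a hypothetical placeholder, not Gibson's API — real usage instantiates one of Gibson's environment classes (configured via a YAML file) as shown in the project README.

```python
import random

class StubEnv:
    """Illustrative stand-in for a Gibson environment.

    Gym-style contract: reset() returns an initial observation,
    step(action) returns (obs, reward, done, info).
    """

    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        # Gibson observations bundle rendered sensor modalities
        # (e.g. RGB, depth); None stands in for the image arrays here.
        return {"rgb": None, "depth": None}

    def step(self, action):
        self.t += 1
        obs = {"rgb": None, "depth": None}
        reward = random.random()          # placeholder reward signal
        done = self.t >= self.max_steps   # episode ends after max_steps
        return obs, reward, done, {}

# The standard RL interaction loop an agent would run against the simulator:
env = StubEnv(max_steps=10)
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = 0  # a real policy would map obs -> action here
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode finished after {env.t} steps")
```

The loop structure is what matters: any Gym-compatible RL library can drive an environment exposing this interface.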

Target Audience

Researchers and developers working on embodied AI, reinforcement learning for robotics, and computer vision for autonomous agents. It is particularly useful for those needing high-fidelity simulation with real-world scene complexity.

Value Proposition

Gibson offers unique real-world scene datasets, a built-in domain transfer capability (Goggles), and high-performance rendering, making it a preferred choice for simulation-to-real-world robotics research over generic game engines or simpler simulators.

Overview

Gibson Environments: Real-World Perception for Embodied Agents

Use Cases

Best For

  • Training reinforcement learning agents for navigation in complex indoor environments
  • Simulating robotic perception and control with realistic physics constraints
  • Research on domain adaptation and sim-to-real transfer for robotics
  • Developing and testing autonomous navigation algorithms for mobile robots
  • Studying active perception and sensorimotor control in AI agents
  • Benchmarking embodied AI algorithms in semantically rich 3D spaces

Not Ideal For

  • Projects without access to high-end Nvidia GPUs or requiring CPU-only simulation
  • Applications needing dynamic outdoor environments or non-indoor scenarios beyond static building scans
  • Teams seeking a plug-and-play simulator with minimal configuration and no hardware-specific dependencies
  • Users who require up-to-date machine learning library support without legacy framework versions

Pros & Cons

Pros

Real-World Scene Fidelity

Uses 3D scans of 572 real buildings to create diverse, semantically rich indoor environments, providing realistic training spaces that mirror actual locations.

Built-in Sim-to-Real Transfer

Includes the Goggles function, a learned domain adaptation mechanism that alters real camera inputs to match simulation, facilitating policy transfer to real-world robots as described in the README.

High-Performance Rendering

Benchmarks show high frame rates for RGBD, depth, and semantic rendering, with multi-process scaling that supports efficient training, as detailed in the FPS tables.

Physics and Embodiment Integration

Integrates the Bullet physics engine to simulate realistic agent movement and constraints, supporting various robotic agents like Husky, Ant, and Humanoid with different controllers.

Cons

Steep Hardware and Setup Requirements

Requires an Nvidia GPU with more than 6 GB of VRAM plus specific CUDA and driver versions, and installation, whether via Docker or a source build, is complex, as outlined in the system requirements section.

Limited Environmental Diversity

Primarily focuses on static indoor environments from 3D scans, lacking support for outdoor scenes or dynamically changing environments, which may not suit all robotics applications.

Outdated Dependencies

Relies on older versions of deep learning libraries like TensorFlow 1.3 and PyTorch 0.3.1, which could cause compatibility issues with modern ML frameworks and require extra setup.


Quick Stats

Stars: 939
Forks: 150
Contributors: 0
Open issues: 47
Last commit: 2 years ago
Created: 2017

Tags

#robotics #simulator #deep-learning #sim2real #ai-training #simulation-environment #deep-reinforcement-learning #ros #research #domain-adaptation #embodied-ai #computer-vision #physics-simulation #reinforcement-learning

Built With

TensorFlow · OpenCV · ROS · Python · Docker · PyTorch

Links & Resources

Website

Included in

Robotic Tooling (3.8k stars)

Related Projects

multiple-object-tracking-lidar

C++ implementation to detect, track, and classify multiple objects using LIDAR scans or point clouds.

Stars: 884
Forks: 228
Last commit: 3 years ago
Community-curated · Updated weekly · 100% open source

Found a gem we're missing?

Open-Awesome is built by the community, for the community. Submit a project, suggest an awesome list, or help improve the catalog on GitHub.

Submit a project · Star on GitHub