WiFi DensePose turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection without cameras.
RuView is an open-source WiFi sensing platform that transforms standard WiFi signals into actionable spatial intelligence. It uses Channel State Information (CSI) from low-cost ESP32 hardware to perform real-time human pose estimation, monitor breathing and heart rates, detect presence, and track activity—all without cameras or wearable sensors. The system works through walls and in complete darkness, solving privacy and deployment challenges associated with traditional optical sensing.
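To illustrate the underlying idea (this is not RuView's actual pipeline), the simplest form of CSI-based presence detection reduces to watching the temporal variance of subcarrier amplitudes: a static room yields near-constant CSI, while a moving body perturbs multipath and raises the fluctuation. The sketch below is a minimal, hypothetical example on synthetic data; the function names and threshold are illustrative assumptions, not RuView APIs.

```python
import numpy as np

def presence_detected(csi_frames: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag motion from complex CSI samples of shape (frames, subcarriers).

    A static channel yields near-constant amplitudes; a moving person
    perturbs multipath reflections and raises the temporal variance.
    Threshold is illustrative and would be calibrated per deployment.
    """
    amp = np.abs(csi_frames)
    # Mean temporal standard deviation across subcarriers.
    score = amp.std(axis=0).mean()
    return bool(score > threshold)

# Synthetic demo: static channel vs. channel with a moving reflector.
rng = np.random.default_rng(0)
static = np.ones((100, 64), dtype=complex) + 0.01 * rng.standard_normal((100, 64))
moving = static + 2.0 * np.exp(1j * rng.uniform(0, 2 * np.pi, (100, 64)))

print(presence_detected(static))  # low temporal variance: no presence
print(presence_detected(moving))  # high temporal variance: presence
```

Real deployments additionally filter out slow environmental drift and hardware phase noise before thresholding, but the variance-based intuition is the same.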
Developers, researchers, and organizations building privacy-sensitive monitoring solutions for healthcare, smart buildings, retail analytics, industrial safety, and search-and-rescue applications. It's particularly valuable for those needing contactless sensing in environments where cameras are impractical or prohibited.
RuView offers a unique combination of privacy preservation, low-cost hardware requirements, and through-wall capability that camera-based systems cannot match. Its edge-native architecture eliminates cloud dependencies and recurring fees, while its self-learning AI adapts to new environments without manual configuration or labeled data.
Uses only WiFi signals, avoiding cameras entirely and easing compliance with privacy regulations such as GDPR and HIPAA, as highlighted in the 'Privacy-First' feature description.
WiFi penetrates walls, furniture, and debris, enabling sensing where cameras cannot, explicitly stated in the 'Through-Wall Operation' key feature.
Runs on $9 ESP32-S3 nodes with optional Cognitum Seed, making it affordable for widespread use, as detailed in the hardware options table and cost benchmarks.
Adapts to environments using contrastive learning and spiking neural networks without labeled training data, noted in the 'Self-Learning' feature and ADR-024.
Processes signals in under 100 microseconds per frame for live monitoring, as emphasized in the 'Real-Time Performance' feature and speed benchmarks.
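The 'Self-Learning' claim above (contrastive learning on unlabeled CSI) follows a standard pattern: embeddings of two augmented views of the same CSI window are pulled together while other windows in the batch are pushed apart. The snippet below is a generic InfoNCE loss in NumPy, offered as an assumed sketch of that technique, not RuView's implementation (which the README attributes to contrastive learning combined with spiking neural networks, per ADR-024).

```python
import numpy as np

def info_nce_loss(anchors: np.ndarray, positives: np.ndarray,
                  temperature: float = 0.1) -> float:
    """InfoNCE contrastive loss: each anchor embedding should match its
    own augmented view (same row index) and repel the rest of the batch."""
    # L2-normalize so the dot product becomes cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal of the similarity matrix.
    return float(-np.mean(np.diag(log_probs)))

# Demo: embeddings of matched augmented views vs. mismatched pairs.
rng = np.random.default_rng(1)
emb = rng.standard_normal((8, 16))
aligned = info_nce_loss(emb, emb + 0.01 * rng.standard_normal((8, 16)))
mismatched = info_nce_loss(emb, np.roll(emb, 1, axis=0))
```

Minimizing this loss over unlabeled CSI windows is what lets such a system adapt to a new room without any labeled training data.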
APIs and firmware are under active development and may change, with the README warning of 'known limitations' and potential breaking updates.
Requires ESP32-S3 hardware specifically; the ESP32-C3 and original ESP32 are unsupported, and single-node deployments have limited spatial resolution, so multiple nodes or the Cognitum Seed are needed for best results.
Camera-free pose accuracy is limited; achieving 92.9% PCK@20 requires camera ground-truth training, adding complexity and extra hardware as admitted in the beta notes.
Full capabilities like persistent storage and cryptographic attestation depend on the proprietary Cognitum Seed, creating a vendor dependency that may limit flexibility.