Human Activity Recognition using TensorFlow and LSTM RNNs on smartphone sensor data to classify six movement types.
LSTM-Human-Activity-Recognition is an open-source project that implements a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) for classifying human activities based on smartphone sensor data. It uses accelerometer and gyroscope readings to recognize six movement types (walking, walking upstairs, walking downstairs, sitting, standing, and laying), demonstrating how deep learning can automate feature extraction for time-series classification.
Machine learning practitioners, data scientists, and students interested in applying deep learning to time-series data, particularly those working on sensor-based activity recognition or sequence modeling with TensorFlow.
It provides a complete, well-documented implementation that achieves high accuracy with minimal feature engineering, serving as an educational example and practical baseline for HAR tasks using LSTMs.
Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier
Achieves up to 91% test accuracy on the UCI HAR dataset by feeding near-raw sensor windows directly into the LSTM, in line with the README's emphasis on minimizing manual feature engineering.
Provides an end-to-end implementation from data downloading to visualization, including training/testing curves and confusion matrices, making it a practical learning tool for LSTM RNNs.
Effectively showcases how LSTMs can capture sequential dependencies in time-series sensor data, with stacked LSTM cells processing 128-timestep windows without complex signal processing.
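For illustration, the stacked architecture described above can be sketched in modern TF 2.x Keras. This is a hedged re-expression, not the repo's original TF 1.0 graph code; the hidden size of 32 units is illustrative:

```python
import numpy as np
import tensorflow as tf

# UCI HAR windows: 128 timesteps x 9 sensor channels, 6 activity classes.
TIMESTEPS, CHANNELS, N_CLASSES = 128, 9, 6

# Two stacked LSTM layers, mirroring the repo's two-cell design.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, CHANNELS)),
    tf.keras.layers.LSTM(32, return_sequences=True),  # pass full sequence to the next layer
    tf.keras.layers.LSTM(32),                         # keep only the final hidden state
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Forward pass on a dummy batch: raw sensor windows in, class probabilities out.
dummy = np.zeros((4, TIMESTEPS, CHANNELS), dtype=np.float32)
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # (4, 6)
```

The `return_sequences=True` flag on the first layer is what makes the stack work: it passes the per-timestep outputs onward instead of collapsing to the last state too early.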
Built on TensorFlow 1.0, which is deprecated and incompatible with modern TensorFlow 2.x without rewriting graph definitions and training loops, as seen in the code snippets.
Tailored to the UCI HAR dataset's fixed preprocessing (2.56-second sliding windows with 50% overlap at 50 Hz, i.e. 128 samples per window), making adaptation to other sensor formats or real-time streams non-trivial and error-prone.
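As a sketch of that fixed preprocessing, assuming the dataset's published parameters (50 Hz sampling, so a 2.56-second window is 128 samples, with 50% overlap):

```python
import numpy as np

def sliding_windows(signal, window=128, step=64):
    """Split a (time, channels) signal into fixed overlapping windows,
    mirroring UCI HAR's 2.56 s windows with 50% overlap at 50 Hz."""
    starts = range(0, signal.shape[0] - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

# 10 seconds of fake 9-channel sensor data at 50 Hz -> 500 samples.
raw = np.random.randn(500, 9)
windows = sliding_windows(raw)
print(windows.shape)  # (6, 128, 9)
```

Adapting the model to another sensor setup means re-deriving `window` and `step` from that sensor's sampling rate, which is exactly the coupling the weakness above describes.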
As the repo's conclusion acknowledges, the model struggles to distinguish similar static postures such as SITTING and STANDING, visible as an off-diagonal cluster in the confusion matrix that indicates room for improvement.
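To inspect that kind of confusion yourself, a confusion matrix can be computed in a few lines of NumPy. The label names follow the UCI HAR classes; the toy predictions below are fabricated purely to illustrate a STANDING-predicted-as-SITTING mix-up:

```python
import numpy as np

LABELS = ["WALKING", "WALKING_UPSTAIRS", "WALKING_DOWNSTAIRS",
          "SITTING", "STANDING", "LAYING"]

def confusion_matrix(y_true, y_pred, n_classes=6):
    """cm[i, j] counts samples whose true class is i and predicted class is j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)  # accumulate one count per sample
    return cm

# Toy example: two STANDING samples (class 4) misclassified as SITTING (class 3).
y_true = np.array([0, 1, 2, 3, 4, 4, 4, 5])
y_pred = np.array([0, 1, 2, 3, 3, 3, 4, 5])
cm = confusion_matrix(y_true, y_pred)
print(cm[4])  # STANDING row: [0 0 0 2 1 0]
```

Off-diagonal mass concentrated in the SITTING/STANDING block of such a matrix is the signature the conclusion points at: the accelerometer and gyroscope signals for the two static postures are nearly identical.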