Human Activity Recognition (HAR) bridges the gap between human behavior and machine understanding. Our research aims to enable machines to perceive and interpret human movements, creating a more intuitive and human-centric approach to technology. From patient monitoring in healthcare to security systems that do not compromise privacy and more immersive entertainment experiences, HAR’s applications are diverse and transformative, reshaping how we interact with the world around us.
Why LiDAR?
The field of HAR has evolved rapidly, but significant challenges remain. Traditional camera-based sensors often require specific illumination conditions, limiting their effectiveness in diverse environments. Privacy is another major concern, as conventional methods may inadvertently capture sensitive information. Furthermore, biases in data collection and processing can lead to discrimination, undermining the fairness and inclusivity of human activity recognition. These limitations underscore the need for innovative HAR solutions that are not only accurate and adaptable but also mindful of privacy and ethical considerations.
Deep Insight: Unraveling Human Motion with LiDAR and Deep Learning
Our research leverages LiDAR sensors and state-of-the-art deep learning architectures to capture and analyze human movements. The pipeline uses LiDAR sensors to gather precise point cloud data, filters and segments the data to isolate human activity, and applies deep learning models for pose estimation, classification, and segmentation.
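As a concrete illustration, the sketch below walks through this pipeline on a single frame: the raw cloud is filtered to remove ground returns, cropped to a region of interest around the subject, and resampled into a fixed-size tensor for a downstream network. The file name, thresholds, and region bounds are illustrative assumptions, not values from our setup.

```python
# Minimal sketch of the capture -> filter -> model pipeline described above.
# Assumes a LiDAR frame already exported to an (N, 3) NumPy array; the file
# name, thresholds, and region bounds are placeholders for illustration.
import numpy as np
import torch

def isolate_subject(points: np.ndarray,
                    ground_z: float = 0.05,
                    roi_min=(-1.0, -1.0), roi_max=(1.0, 1.0)) -> np.ndarray:
    """Drop ground-plane returns and keep points inside a 2D region of interest."""
    above_ground = points[:, 2] > ground_z
    in_roi = (
        (points[:, 0] > roi_min[0]) & (points[:, 0] < roi_max[0]) &
        (points[:, 1] > roi_min[1]) & (points[:, 1] < roi_max[1])
    )
    return points[above_ground & in_roi]

def to_model_input(points: np.ndarray, n_points: int = 1024) -> torch.Tensor:
    """Resample to a fixed size and center the cloud for the network."""
    idx = np.random.choice(len(points), n_points, replace=len(points) < n_points)
    sampled = points[idx] - points[idx].mean(axis=0)
    return torch.from_numpy(sampled).float().unsqueeze(0)  # (1, n_points, 3)

frame = np.load("lidar_frame.npy")       # hypothetical exported frame
subject = isolate_subject(frame)         # points belonging to the scene's subject
batch = to_model_input(subject)          # ready for a pose/segmentation model
```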
Harnessing the Power of Data: Experimental and Simulated Data Collection for Human Activity Recognition
To achieve cutting-edge human activity recognition, a rich repository of quality data is indispensable. In our project, we take a two-pronged approach to data collection to feed our deep learning models. On one hand, we run experimental setups in which LiDAR sensors capture real-world interactions and movements. On the other hand, we use simulation and modeling tools such as Webots and Blender to generate synthetic yet highly realistic data. These simulated environments allow us to model complex scenarios that are hard to replicate in a controlled setting, thus enriching our dataset. By fusing experimental and simulated data, we create a well-rounded and comprehensive database, setting the stage for robust and reliable human activity recognition models.
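One simple way to fuse the two sources is to expose captured and simulator-generated frames through the same dataset interface and concatenate them for training. The sketch below assumes frames are stored as (N, 3) .npy point clouds with the activity label encoded in the filename; the directory names and label set are placeholders, not our actual data layout.

```python
# Sketch of combining experimentally captured and simulator-generated frames
# into one training set. Paths and the label convention are assumptions.
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import Dataset, ConcatDataset

class PointCloudFolder(Dataset):
    """Loads point-cloud frames whose activity label is encoded in the filename,
    e.g. 'walking_0123.npy' -> label 'walking'."""
    def __init__(self, root: str, label_map: dict):
        self.files = sorted(Path(root).glob("*.npy"))
        self.label_map = label_map

    def __len__(self):
        return len(self.files)

    def __getitem__(self, i):
        path = self.files[i]
        points = torch.from_numpy(np.load(path)).float()   # (N, 3) point cloud
        label = self.label_map[path.stem.split("_")[0]]     # label from filename
        return points, label

label_map = {"walking": 0, "sitting": 1, "falling": 2}          # example classes
real = PointCloudFolder("data/lidar_captures", label_map)       # experimental frames
synthetic = PointCloudFolder("data/webots_blender", label_map)  # simulated frames
train_set = ConcatDataset([real, synthetic])                    # fused dataset
```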
Refining Human Activity Recognition: LiDAR-Enabled Body Part Segmentation and Orientation Estimation
In advancing human activity recognition technologies, the segmentation and accurate identification of human body parts are paramount, particularly for applications in healthcare and motion analysis. Our study introduces a sophisticated LiDAR-based system capable of high-precision body part segmentation and orientation estimation. Utilizing a novel two-stage deep learning architecture, we first estimate the orientation of the human subject, which significantly enhances the accuracy of subsequent body part segmentation. This system not only overcomes the limitations imposed by traditional RGB and RGB-D sensors, such as privacy concerns and environmental dependency, but also demonstrates superior performance in diverse settings. By producing detailed three-dimensional mappings of human anatomy without the need for ambient light or color information, our approach ensures robust and privacy-preserving analysis, paving the way for next-generation monitoring and interaction systems. This work sets a new benchmark for non-intrusive and adaptable human body analysis, offering significant improvements over conventional methods in both accuracy and functionality.
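The sketch below condenses the two-stage idea into a toy PyTorch model: a shared point-feature encoder first regresses the subject’s orientation from a global feature, and that estimate is then fed back, together with the per-point features, to condition the body part segmentation head. Layer widths, the number of body parts, and the (cos θ, sin θ) orientation encoding are illustrative assumptions, not the published architecture.

```python
# Toy two-stage model: orientation estimation first, then orientation-conditioned
# body part segmentation. Sizes and heads are illustrative assumptions only.
import torch
import torch.nn as nn

class TwoStageSegmenter(nn.Module):
    def __init__(self, num_parts: int = 8):
        super().__init__()
        # Shared per-point encoder (PointNet-style 1x1 convolutions).
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        # Stage 1: global feature -> orientation as (cos θ, sin θ).
        self.orientation_head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2),
        )
        # Stage 2: per-point features + global feature + orientation -> part logits.
        self.seg_head = nn.Sequential(
            nn.Conv1d(128 + 128 + 2, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_parts, 1),
        )

    def forward(self, points):                                 # points: (B, 3, N)
        feats = self.encoder(points)                           # (B, 128, N)
        global_feat = feats.max(dim=2).values                  # (B, 128)
        orientation = self.orientation_head(global_feat)       # (B, 2)
        n = feats.shape[2]
        context = torch.cat([global_feat, orientation], dim=1) # (B, 130)
        context = context.unsqueeze(2).expand(-1, -1, n)       # (B, 130, N)
        logits = self.seg_head(torch.cat([feats, context], dim=1))  # (B, parts, N)
        return orientation, logits

model = TwoStageSegmenter()
orient, part_logits = model(torch.randn(2, 3, 1024))  # toy batch of two clouds
```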
Selected Publications:
- B. Cherif, O. Rinchi, A. Alsharoa, and G. Hakim, “LiDAR-Enabled Human Orientation Estimation and Body Part Segmentation,” IEEE Transactions on Instrumentation and Measurement, Apr. 2024 (under review).
- N. Guefrachi, J. Shi, H. Ghazzai, and A. Alsharoa, “Leveraging 3D LiDAR Sensors to Enable Enhanced Urban Safety and Public Health: Pedestrian Monitoring and Abnormal Activity Detection,” in Proc. of the 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’24), Orlando, Florida, July 2024.
- O. Rinchi, A. Ahmad, and D. Baker, “Remote Breathing Monitoring Using LiDAR Technology,” in Proc. of the 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’24), Orlando, Florida, July 2024.
- O. Rinchi, H. Ghazzai, A. Alsharoa, and Y. Massoud, “LiDAR Technology for Human Activity Recognition: Outlooks and Challenges,” in IEEE Internet of Things Magazine, vol. 6, no. 2, pp. 143-150, June 2023.
- O. Rinchi, N. Nisbett, and A. Alsharoa, “Patients’ Arms Segmentation and Gesture Identification Using Standalone 3D LiDAR Sensors,” in IEEE Sensors Letters.