pp. 3239-3255
S&M3730 Research Paper of Special Issue
https://doi.org/10.18494/SAM4786
Published in advance: January 30, 2024; Published: August 8, 2024

Healthcare System from Multisensor Collaboration and Human Action Recognition
Hongwei Gao, Xuna Wang, Zide Liu, and Yueqiu Jiang
(Received November 21, 2023; Accepted January 5, 2024)

Keywords: IoT, action recognition, pose estimation, remote healthcare
Over the past few decades, wearable sensor technology has played a pivotal role in patient information acquisition. A new paradigm for unconstrained medical data collection emerged with noncontact sensors such as the Kinect and industrial-grade RGB cameras. These innovations have enhanced patient experiences and offer vast potential for long-term monitoring, telemedicine, and remote healthcare in underserved areas. However, many research efforts have focused on individual components, such as sensor development or algorithm design, which has at times led to challenges in the reliability, accuracy, and connectivity of noncontact systems. Consequently, there is a pressing need for research that adopts a more holistic approach and ensures the optimal integration of sensors and algorithms. In this study, we introduce a method of constructing a noncontact diagnostic system powered by deep learning vision algorithms that show strong resilience against viewpoint changes and occlusions in human motion identification and assessment. Using four RGB cameras, we capture human dynamics and leverage a pose estimator to generate comprehensive 3D human postures. These postures are then refined to support subsequent behavior prediction. Our multitask-trained model significantly strengthens the system's robustness to posture discrepancies. Notably, this noncontact diagnostic system performs well in challenging environments, such as 360-degree surveillance, cluttered scenes, and low-light settings, where traditional sensors often falter. In addition, we have assembled a multiview autism behavior dataset, on which our embedded deep learning algorithm achieves strong action category recognition (up to 95.09% accuracy), further highlighting its practical value.
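The multiview pipeline described above (per-view pose estimation, multiview fusion into a 3D posture, temporal refinement, then behavior prediction) can be sketched in minimal form. This is an illustrative skeleton, not the authors' implementation: the functions, joint count (17, COCO-style), smoothing window, and synthetic data are all assumptions; a real system would use calibrated triangulation and a trained multitask network.

```python
import numpy as np

NUM_CAMERAS = 4
NUM_JOINTS = 17  # assumption: COCO-style skeleton

def estimate_2d_poses(frames):
    # Placeholder for a per-view 2D pose estimator.
    # Returns synthetic joints of shape (num_views, NUM_JOINTS, 2).
    return np.stack([np.random.rand(NUM_JOINTS, 2) for _ in frames])

def lift_to_3d(poses_2d):
    # Placeholder multiview fusion into one 3D pose (NUM_JOINTS, 3).
    # A real system would triangulate with calibrated camera matrices.
    mean_xy = poses_2d.mean(axis=0)        # average joints across views
    depth = np.zeros((NUM_JOINTS, 1))      # synthetic depth channel
    return np.concatenate([mean_xy, depth], axis=1)

def refine_pose(pose_3d, window, size=5):
    # Temporal refinement: smooth over a sliding window of past poses.
    window.append(pose_3d)
    if len(window) > size:
        window.pop(0)
    return np.mean(window, axis=0)

# Simulated run over a short clip from four cameras.
window = []
for t in range(10):
    frames = [np.zeros((480, 640, 3)) for _ in range(NUM_CAMERAS)]
    poses_2d = estimate_2d_poses(frames)
    pose_3d = lift_to_3d(poses_2d)
    smoothed = refine_pose(pose_3d, window)

print(smoothed.shape)  # (17, 3) — one refined 3D pose per frame
```

The refined pose sequence would then be fed to an action classifier (the paper's multitask-trained model) to predict the behavior category.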
Corresponding authors: Hongwei Gao and Xuna Wang

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Hongwei Gao, Xuna Wang, Zide Liu, and Yueqiu Jiang, Healthcare System from Multisensor Collaboration and Human Action Recognition, Sens. Mater., Vol. 36, No. 8, 2024, pp. 3239-3255.