Article

Multimodal spatiotemporal skeletal kinematic gait feature fusion for vision-based fall detection

Journal

EXPERT SYSTEMS WITH APPLICATIONS
Volume 212

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2022.118681

Keywords

Fall risk; Gait patterns; Multimodal feature fusion; Spatiotemporal features; Spotted hyena optimizer; STGCN and 1D-CNN


The objective of this study is to develop a Multimodal SpatioTemporal Skeletal Kinematic Gait Feature Fusion classifier for fall detection using video data. The proposed framework combines features generated by a SpatioTemporal Graph Convolution Network and a 1D-CNN model, and achieves high classification accuracy on two fall datasets.
A fall happens when a person's movement coordination is disturbed, forcing them to rest on the ground unintentionally and causing serious health risks. The objective of this work is to develop a Multimodal SpatioTemporal Skeletal Kinematic Gait Feature Fusion (MSTSK-GFF) classifier for detecting falls in video data. The walking pattern of an individual is referred to as gait. A fall recorded on video shows discrepancies and irregularities in gait patterns, and analysis of these patterns plays a vital role in the identification of fall risk. However, assessing gait patterns from video data remains challenging due to their spatial and temporal feature dependencies. The proposed MSTSK-GFF framework presents a multimodal feature fusion process that overcomes these challenges and generates two sets of spatiotemporal kinematic gait features using a SpatioTemporal Graph Convolution Network (STGCN) and a 1D-CNN network model. The two generated feature sets are combined through concatenative feature fusion, and a classification model is constructed for detecting falls. To optimize the network weights, a bio-inspired spotted hyena optimizer is applied during training. Finally, the performance of the classification model is evaluated and compared for detecting falls in videos. The proposed work is evaluated on two vision-based fall datasets, namely the UR Fall Detection (URFD) dataset and a self-built dataset. The experimental outcomes demonstrate the effectiveness of MSTSK-GFF, with classification accuracies of 96.53% and 95.80% on the two datasets, compared with existing state-of-the-art techniques.
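The concatenative fusion step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the two branch extractors are random-projection placeholders standing in for the STGCN and 1D-CNN networks, and the embedding sizes (`D_STGCN`, `D_CNN`), the 17-joint skeleton layout, and the logistic head are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding sizes; the abstract does not state the actual
# layer dimensions, so these stand in for the branch outputs.
D_STGCN, D_CNN = 256, 128

def stgcn_features(clip):
    # Placeholder for the STGCN branch: maps a skeleton clip
    # (frames x joints x 2) to a fixed-length spatiotemporal embedding.
    return rng.standard_normal(D_STGCN)

def cnn1d_features(clip):
    # Placeholder for the 1D-CNN branch over kinematic gait signals.
    return rng.standard_normal(D_CNN)

def fuse(clip):
    # Concatenative feature fusion: the two feature sets are joined
    # along the feature axis before classification.
    return np.concatenate([stgcn_features(clip), cnn1d_features(clip)])

def classify(fused, w, b):
    # Binary fall / no-fall head (sigmoid over a linear layer).
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))

clip = rng.standard_normal((30, 17, 2))   # 30 frames, 17 joints, (x, y)
fused = fuse(clip)                        # fused vector of size 256 + 128
w = rng.standard_normal(D_STGCN + D_CNN)
p_fall = classify(fused, w, 0.0)
```

In the paper the weights of the classification model are tuned with the spotted hyena optimizer rather than gradient descent; the sketch above only shows the fusion and scoring path.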
