Article

A Novel Deep Multifeature Extraction Framework Based on Attention Mechanism Using Wearable Sensor Data for Human Activity Recognition

Journal

IEEE SENSORS JOURNAL
Volume 23, Issue 7, Pages 7188-7198

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2023.3242603

Keywords

Attention mechanism; deep learning; human activity recognition (HAR); multifeature extraction

Human activity recognition using wearable sensors has various applications but faces challenges such as incomplete feature extraction and a low utilization rate of features. To address this, a novel deep multifeature extraction framework based on an attention mechanism (DMEFAM) is proposed, achieving high recognition accuracies of 97.9%, 96.0%, and 99.2% on the WISDM, UCI-HAR, and DAAD datasets, respectively, outperforming other advanced HAR frameworks.
Human activity recognition (HAR) using wearable sensors is a prominent topic in academia and has been widely applied in health monitoring, medical treatment, motion analysis, and other fields. Although HAR technology based on deep learning has made progress, some problems remain, such as incomplete feature extraction and a low utilization rate of features, which may result in misrecognition. To address these problems, we propose a novel deep multifeature extraction framework based on an attention mechanism (DMEFAM), which consists of a temporal attention feature extraction layer (TAFEL), a channel and spatial attention feature extraction layer (CSAFEL), and an output layer. The TAFEL comprises a bidirectional gated recurrent unit (Bi-GRU) and a self-attention (SA) mechanism, and the CSAFEL comprises the convolutional block attention module (CBAM) and the residual network-18 (ResNet-18). In this framework, the combination of deep neural networks, the SA mechanism, and CBAM assigns different degrees of importance to features, increases the diversity of the extracted features, and improves the accuracy of HAR. To improve the practicality of the proposed framework, a daily and aggressive activity dataset (DAAD) was collected by our laboratory for performance evaluation. Experiments are conducted on the Wireless Sensor Data Mining (WISDM) dataset, the University of California Irvine HAR (UCI-HAR) dataset, and the collected DAAD dataset. The results show that the proposed DMEFAM improves recognition performance effectively compared with other advanced HAR frameworks, achieving recognition accuracies of 97.9%, 96.0%, and 99.2% on the WISDM, UCI-HAR, and DAAD datasets, respectively.
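The abstract names the main building blocks (Bi-GRU with self-attention in the TAFEL, CBAM with residual convolutions in the CSAFEL) but not their exact dimensions or how the stages are wired together. The following PyTorch sketch is therefore only an illustrative assumption of such a pipeline, not the authors' implementation: the hidden size, number of attention heads, CBAM reduction ratio, the single residual block standing in for ResNet-18, and the example window and channel sizes are all placeholders.

import torch
import torch.nn as nn

class TAFEL(nn.Module):
    # Temporal attention feature extraction: Bi-GRU followed by self-attention.
    def __init__(self, in_channels, hidden_size=64, num_heads=4):
        super().__init__()
        self.bigru = nn.GRU(in_channels, hidden_size, batch_first=True, bidirectional=True)
        self.self_attn = nn.MultiheadAttention(2 * hidden_size, num_heads, batch_first=True)

    def forward(self, x):                      # x: (batch, time, sensor_channels)
        h, _ = self.bigru(x)                   # (batch, time, 2*hidden)
        out, _ = self.self_attn(h, h, h)       # self-attention over time steps
        return out

class CBAM(nn.Module):
    # Convolutional block attention module: channel attention, then spatial attention.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv1d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (batch, channels, time)
        avg = self.channel_mlp(x.mean(dim=-1))
        mx = self.channel_mlp(x.amax(dim=-1))
        x = x * torch.sigmoid(avg + mx).unsqueeze(-1)          # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))          # spatial attention

class DMEFAM(nn.Module):
    # Overall pipeline sketch: TAFEL -> CBAM + residual conv block -> classifier.
    def __init__(self, in_channels, num_classes, hidden_size=64):
        super().__init__()
        self.tafel = TAFEL(in_channels, hidden_size)
        feat = 2 * hidden_size
        self.cbam = CBAM(feat)
        self.res_conv = nn.Sequential(          # simplified stand-in for ResNet-18 blocks
            nn.Conv1d(feat, feat, kernel_size=3, padding=1),
            nn.BatchNorm1d(feat), nn.ReLU(),
            nn.Conv1d(feat, feat, kernel_size=3, padding=1),
            nn.BatchNorm1d(feat))
        self.head = nn.Linear(feat, num_classes)

    def forward(self, x):                       # x: (batch, time, sensor_channels)
        t = self.tafel(x).transpose(1, 2)       # (batch, feat, time)
        c = self.cbam(t)
        c = torch.relu(c + self.res_conv(c))    # residual connection
        return self.head(c.mean(dim=-1))        # global average pooling + classifier

# Example usage (assumed shapes): 128-sample windows of 9 inertial channels, 6 activity classes.
model = DMEFAM(in_channels=9, num_classes=6)
logits = model(torch.randn(4, 128, 9))          # -> (4, 6)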
