Article

DMSTL: A Deep Multi-Scale Transfer Learning Framework for Unsupervised Cross-Position Human Activity Recognition

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 10, Issue 1, Pages 787-800

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2022.3204542

Keywords

Feature extraction; Transfer learning; Data models; Adaptation models; Wearable sensors; Task analysis; Recurrent neural networks; Deep learning; human activity recognition (HAR); transfer learning; unsupervised domain adaptation

Abstract

Human activity recognition (HAR) based on wearable sensors has been a flourishing research topic in recent years. Because a sensor may be worn at diverse body positions, obtaining enough labeled human activity data for each body-worn position is usually expensive and labor-intensive. Furthermore, the variability and diversity of the data distributions induced by different body-worn positions cause an HAR model trained on data collected from one body position to perform poorly on other body-worn positions. To achieve accurate HAR with low labeling cost, a setting we call unsupervised cross-position HAR, we propose a deep multiscale transfer learning (DMSTL) model in this article. In the model, we first introduce an unsupervised source selection method to choose the most similar source domain for transferring domain knowledge. Then, we develop a multiscale spatial-temporal Net (MSSTNet) to learn comprehensive multimodal representations from multiple feature subspaces. Finally, we design a category-level adaptation module and a domain-level adversarial module to learn domain-invariant features. Extensive experiments on three public HAR datasets demonstrate the generalization performance of DMSTL, which remarkably outperforms other state-of-the-art baselines.
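To make the pipeline described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of two of its ingredients: a multi-scale temporal encoder for wearable-sensor windows (in the spirit of MSSTNet) and a domain-level adversarial head trained through a gradient reversal layer that pushes the encoder toward domain-invariant features. All class names, kernel sizes, and hyperparameters here are illustrative assumptions, not the authors' released implementation; the unsupervised source selection and category-level adaptation modules are omitted.

```python
# Hypothetical sketch: multi-scale temporal encoding + gradient-reversal
# domain-adversarial head for cross-position HAR. Names and sizes are illustrative.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class MultiScaleTemporalEncoder(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes over the time axis,
    followed by a GRU that summarizes the concatenated multi-scale features."""

    def __init__(self, in_channels, hidden=64, kernel_sizes=(3, 5, 9)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(in_channels, hidden, k, padding=k // 2),
                nn.BatchNorm1d(hidden),
                nn.ReLU(),
            )
            for k in kernel_sizes
        )
        self.rnn = nn.GRU(hidden * len(kernel_sizes), hidden, batch_first=True)

    def forward(self, x):                       # x: (batch, channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        _, h = self.rnn(feats.transpose(1, 2))  # GRU expects (batch, time, features)
        return h[-1]                            # (batch, hidden)


class CrossPositionHAR(nn.Module):
    """Shared encoder + activity classifier + domain discriminator (adversarial head)."""

    def __init__(self, in_channels, num_classes, hidden=64):
        super().__init__()
        self.encoder = MultiScaleTemporalEncoder(in_channels, hidden)
        self.classifier = nn.Linear(hidden, num_classes)
        self.domain_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        class_logits = self.classifier(z)
        domain_logits = self.domain_head(GradientReversal.apply(z, lambd))
        return class_logits, domain_logits


# Toy usage: 6-channel IMU windows of 128 samples, 8 activity classes.
model = CrossPositionHAR(in_channels=6, num_classes=8)
windows = torch.randn(4, 6, 128)
class_logits, domain_logits = model(windows, lambd=0.5)
print(class_logits.shape, domain_logits.shape)  # torch.Size([4, 8]) torch.Size([4, 2])
```

In such a setup, the classifier is trained on labeled source-position data while the domain head is trained to distinguish source from target positions; the reversed gradients penalize the encoder for producing position-discriminative features, which is one standard way to realize a domain-level adversarial module.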
