Article

DMSTL: A Deep Multi-Scale Transfer Learning Framework for Unsupervised Cross-Position Human Activity Recognition

Journal

IEEE Internet of Things Journal
Volume 10, Issue 1, Pages 787-800

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JIOT.2022.3204542

Keywords

Feature extraction; Transfer learning; Data models; Adaptation models; Wearable sensors; Task analysis; Recurrent neural networks; Deep learning; human activity recognition (HAR); transfer learning; unsupervised domain adaptation


Human activity recognition (HAR) based on wearable sensors has been a thriving research topic in recent years. Because a sensor may be worn at diverse body positions, obtaining enough labeled human activity data for each body-worn position is usually expensive and labor-intensive. Furthermore, the variability and diversity of data distributions induced by different body-worn positions cause an HAR model trained on data collected from one body position to perform poorly on other positions. To achieve accurate HAR with low labeling cost, a task we call unsupervised cross-position HAR, in this article we propose a deep multiscale transfer learning (DMSTL) model. In the model, we first introduce an unsupervised source selection method to select the most similar source domain for transferring domain knowledge. Then, we develop a multiscale spatial-temporal Net (MSSTNet) to learn comprehensive multimodal representations from multiple feature subspaces. Finally, we design a category-level adaptation module and a domain-level adversarial module for learning domain-invariant features. We conduct extensive experiments on three public HAR datasets and demonstrate the strong generalization performance of DMSTL, which remarkably outperforms other state-of-the-art baselines.
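The abstract's first step, selecting the most similar source domain without target labels, is commonly done by comparing feature distributions between each candidate source and the target. The sketch below illustrates one such criterion, maximum mean discrepancy (MMD) with an RBF kernel; the paper's actual selection method may differ, and the function names and `gamma` parameter here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix between rows of X and Y."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

def select_source(target_feats, source_feats_list, gamma=1.0):
    """Return the index of the source domain whose features are closest to the target."""
    scores = [mmd2(src, target_feats, gamma) for src in source_feats_list]
    return int(np.argmin(scores))
```

A source whose sensor-feature distribution closely matches the target's yields a small MMD, so `select_source` picks it for knowledge transfer; no target labels are needed, which is what makes the selection unsupervised.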

