Article

Improving Wearable-Based Activity Recognition Using Image Representations

Journal

SENSORS
Volume 22, Issue 5, Article Number 1840

Publisher

MDPI
DOI: 10.3390/s22051840

Keywords

human activity recognition; image representation; CNNs; IMU; inertial sensors; wearable sensors

Funding

  1. LOEWE initiative (Hesse, Germany)
  2. Deutsche Forschungsgemeinschaft (DFG-German Research Foundation)
  3. Open Access Publishing Fund of Technical University of Darmstadt

Abstract

This paper proposes an approach that transforms inertial time-series data into images for activity recognition, addressing the reliance of current methods on complex deep learning models. Extensive evaluations show that the proposed approach outperforms existing techniques in all cases while remaining easy to implement and extend.
Activity recognition based on inertial sensors is an essential task in mobile and ubiquitous computing. To date, the best-performing approaches in this task are based on deep learning models. Although the performance of these approaches has been steadily improving, a number of issues still remain. Specifically, in this paper we focus on the dependence of today's state-of-the-art approaches on complex, ad hoc deep learning models, i.e., convolutional neural networks (CNNs), recurrent neural networks (RNNs), or a combination of both, which require specialized knowledge and considerable effort to construct and tune optimally. To address this issue, we propose an approach that automatically transforms inertial sensor time-series data into images that represent, in pixel form, patterns found over time, allowing even a simple CNN to outperform complex ad hoc deep learning models that combine RNNs and CNNs for activity recognition. We conducted an extensive evaluation considering seven benchmark datasets that are among the most relevant in activity recognition. Our results demonstrate that our approach is able to outperform the state of the art in all cases, based on image representations that are generated through a process that is easy to implement, modify, and extend further, without the need to develop complex deep learning models.
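The abstract does not spell out the exact image encoding, so the following Python sketch only illustrates the general idea described above, not the authors' actual pipeline: a windowed multi-channel IMU recording is normalised and laid out as a 2-D image (channels as rows, time as columns), which a deliberately simple CNN then classifies. The function and class names (window_to_image, SimpleCNN), the normalisation scheme, and the network shape are assumptions made for illustration.

```python
# Minimal sketch (assumed encoding, not the paper's method): turn a windowed
# IMU time series into a single-channel image and classify it with a small CNN.
import numpy as np
import torch
import torch.nn as nn


def window_to_image(window: np.ndarray) -> np.ndarray:
    """Encode one sensor window of shape (T, C) as a (C, T) image.

    T time steps, C inertial channels (e.g., 3-axis accelerometer +
    3-axis gyroscope = 6 channels). Each channel is min-max normalised
    and stacked as an image row, so temporal patterns become pixel patterns.
    """
    w = window.T.astype(np.float32)               # (C, T)
    mins = w.min(axis=1, keepdims=True)
    maxs = w.max(axis=1, keepdims=True)
    return (w - mins) / (maxs - mins + 1e-8)      # values in [0, 1]


class SimpleCNN(nn.Module):
    """A small, generic CNN of the kind the abstract says suffices."""

    def __init__(self, n_channels: int, n_timesteps: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                 # pool along time only
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.classifier = nn.Linear(32 * n_channels * (n_timesteps // 4), n_classes)

    def forward(self, x):                         # x: (B, 1, C, T)
        x = self.features(x)
        return self.classifier(x.flatten(1))


# Example: one 2-second window at 64 Hz from a 6-channel IMU, 6 activity classes.
window = np.random.randn(128, 6)
img = torch.from_numpy(window_to_image(window))[None, None]   # (1, 1, 6, 128)
logits = SimpleCNN(n_channels=6, n_timesteps=128, n_classes=6)(img)
print(logits.shape)                               # torch.Size([1, 6])
```

In a real pipeline the random window would be replaced by sliding windows extracted from one of the benchmark datasets, and richer encodings (e.g., recurrence-plot or spectrogram-style images) could be substituted for the simple channel-stacking shown here.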
