Vision and Inertial Sensing Fusion for Human Action Recognition: A Review

Journal

IEEE Sensors Journal
Volume 21, Issue 3, Pages 2454-2467

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JSEN.2020.3022326

Keywords

Cameras; image recognition; robot sensing systems; feature extraction; wearable sensors; skeleton; fusion of vision and inertial sensing for action recognition; multimodality action recognition; improving recognition accuracy in action recognition

Abstract
Human action recognition is used in many applications, such as video surveillance, human-computer interaction, assistive living, and gaming. Many papers in the literature have shown that fusing vision and inertial sensing improves recognition accuracy compared with using either sensing modality individually. This article surveys the papers in which vision and inertial sensing are used simultaneously within a fusion framework to perform human action recognition. The surveyed papers are categorized in terms of fusion approaches, features, classifiers, and the multimodality datasets considered. Challenges and possible future directions for deploying the fusion of these two sensing modalities under realistic conditions are also discussed.
