Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 45, Issue 3, Pages 3200-3225
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3183112
Keywords
Feature extraction; Visualization; Skeleton; Optical imaging; Deep learning; Three-dimensional displays; Radar; Human action recognition; data modality; single modality; multi-modality
Abstract
Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and has therefore been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and offer various advantages depending on the application scenario. Consequently, many existing works have investigated different types of approaches to HAR using various modalities. In this article, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including fusion-based and co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.
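As a quick illustration of the fusion-based multi-modality framework the abstract mentions, the sketch below performs late (score-level) fusion over two modality-specific encoders. It is a minimal, hypothetical example, not code from the surveyed methods: the choice of RGB and skeleton as the two modalities, the MLP encoders, the feature dimensions, and the averaging rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the survey's reference
# implementation) of a fusion-based multi-modality HAR model: each
# modality gets its own encoder, and per-modality class scores are
# averaged (late / score-level fusion).
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Tiny stand-in for a modality-specific backbone (e.g., a CNN for
    RGB frames or a GCN for skeleton joints), reduced here to an MLP
    over pre-extracted features."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class LateFusionHAR(nn.Module):
    """Score-level fusion: average the per-modality logits."""

    def __init__(self, rgb_dim: int, skel_dim: int, num_classes: int):
        super().__init__()
        self.rgb_encoder = ModalityEncoder(rgb_dim, num_classes)
        self.skel_encoder = ModalityEncoder(skel_dim, num_classes)

    def forward(self, rgb_feat: torch.Tensor, skel_feat: torch.Tensor) -> torch.Tensor:
        rgb_logits = self.rgb_encoder(rgb_feat)
        skel_logits = self.skel_encoder(skel_feat)
        return (rgb_logits + skel_logits) / 2  # late fusion by averaging


# Usage with dummy pre-extracted features (batch of 4 clips, 60 classes).
model = LateFusionHAR(rgb_dim=2048, skel_dim=256, num_classes=60)
rgb = torch.randn(4, 2048)
skel = torch.randn(4, 256)
probs = model(rgb, skel).softmax(dim=-1)
print(probs.shape)  # torch.Size([4, 60])
```

Averaging logits is only the simplest score-level variant; the survey also covers feature-level fusion (combining modality features before classification) and co-learning schemes that transfer knowledge across modalities.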