Journal
SENSORS
Volume 21, Issue 12
Publisher
MDPI
DOI: 10.3390/s21124246
Keywords
action recognition; deep learning; data fusion; RGB-D
Funding
- Higher Education Commission (HEC) Pakistan [No.5-1/HRD/UESTPI(BatchVI)/7108/2018/HEC]
- Edith Cowan University (ECU) Australia
This review focuses on data fusion and recognition techniques in the context of vision with an RGB-D perspective, highlighting the distinct characteristics of different action-data modalities. Research challenges, emerging trends, and possible future research directions are also discussed.
Classification of human actions is an ongoing research problem in computer vision. This review aims to scope the current literature on data fusion and action recognition techniques and to identify gaps and future research directions. Success in producing cost-effective and portable vision-based sensors has dramatically increased the number and size of datasets. This growth in action recognition datasets intersects with advances in deep learning architectures and computational support, both of which offer significant research opportunities. Naturally, each action-data modality, such as RGB, depth, skeleton, and infrared (IR), has distinct characteristics; it is therefore important to exploit the value of each modality for better action recognition. In this paper, we focus solely on data fusion and recognition techniques in the context of vision with an RGB-D perspective. We conclude by discussing research challenges, emerging trends, and possible future research directions.