Article

Adaptive Fusion and Category-Level Dictionary Learning Model for Multiview Human Action Recognition

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 6, Issue 6, Pages 9280-9293

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JIOT.2019.2911669

Keywords

Adaptive fusion; category-level dictionary learning; improved dense trajectory (iDT); induced set; multiview action recognition

Funding

  1. National Natural Science Foundation of China [61872270, 61572357]
  2. Tianjin Municipal Natural Science Foundation [18JCYBJC85500]

Abstract

Human actions are often captured by multiple cameras (or sensors) to overcome the significant variations in viewpoint, background clutter, object speed, and motion pattern encountered in video surveillance, and action recognition systems often benefit from fusing multiple types of cameras (sensors). Adaptive fusion of information from multiple domains is therefore essential for multiview human action recognition. Two widely applied fusion schemes are feature-level fusion and score-level fusion. Both have limitations that leave considerable room for improvement: feature fusion and action recognition are computed separately, and the weights assigned to each action and each camera are fixed, shortcomings that previous fusion methods cannot overcome. In this paper, we address these limitations for multiview action recognition by developing a novel adaptive fusion and category-level dictionary learning model (abbreviated to AFCDL), which jointly learns an adaptive weight for each camera and optimizes the reconstruction of samples toward the action recognition task. To guide the dictionary learning and the reconstruction of the query set (i.e., the test samples), an induced set is built for each category, and a corresponding induced regularization term is added to the objective function. Extensive experiments on four public multiview action benchmarks show that AFCDL significantly outperforms state-of-the-art methods, with 3% to 10% improvement in recognition accuracy.
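The abstract names the two ingredients of AFCDL, adaptive per-camera weights learned jointly with sample reconstruction, and category-level dictionaries guided by an induced set, but does not give the objective function. Below is a minimal sketch of the first ingredient only, assuming a weighted reconstruction error and a softmax-style reweighting rule; all function names and parameters are illustrative, not the authors' formulation, and the category-level dictionaries and induced regularization are omitted.

```python
import numpy as np

def adaptive_view_weights(views, n_atoms=32, lam=0.1, gamma=1.0,
                          n_iter=20, seed=0):
    """Toy alternating scheme: per-view dictionary learning with adaptive
    camera weights. `views` is a list of (d_v x n) feature matrices, one
    per camera. Illustrates the adaptive-weighting idea only; it is NOT
    the paper's AFCDL objective."""
    rng = np.random.default_rng(seed)
    n_views = len(views)
    dicts = []
    for Y in views:
        D = rng.standard_normal((Y.shape[0], n_atoms))
        dicts.append(D / np.linalg.norm(D, axis=0, keepdims=True))
    weights = np.full(n_views, 1.0 / n_views)   # start with uniform weights
    for _ in range(n_iter):
        errors = np.empty(n_views)
        for v, Y in enumerate(views):
            D = dicts[v]
            # ridge-regularized codes (a true sparse coder such as
            # ISTA/OMP would replace this in a faithful implementation)
            X = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ Y)
            errors[v] = np.linalg.norm(Y - D @ X) ** 2 / Y.size
            # least-squares dictionary update, atoms kept at unit norm
            D = Y @ X.T @ np.linalg.pinv(X @ X.T + 1e-8 * np.eye(n_atoms))
            dicts[v] = D / np.maximum(
                np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
        # cameras that reconstruct their samples better get larger weights
        weights = np.exp(-errors / gamma)
        weights /= weights.sum()
    return dicts, weights

# two synthetic "cameras" with different feature dimensions
views = [np.random.default_rng(1).random((64, 200)),
         np.random.default_rng(2).random((48, 200))]
dicts, weights = adaptive_view_weights(views)
print(weights)  # adaptive per-camera weights, summing to 1
```

In AFCDL itself, the paper describes the camera weights, dictionaries, and induced regularization as being optimized jointly under a single objective, rather than in a decoupled loop like this sketch.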
