Article

Conflux LSTMs Network: A Novel Approach for Multi-View Action Recognition

Journal

NEUROCOMPUTING
Volume 435, Pages 321-329

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2019.12.151

Keywords

Artificial intelligence; Deep learning; Action recognition; Multi-view video analytics; Sequence learning; LSTM; CNN; Multi-view action recognition

Funding

  1. National Research Foundation of Korea (NRF) - Korea government (MSIT) [2019R1A2B5B01070067]


The paper introduces a conflux long short-term memory (LSTMs) network for action recognition from multi-view cameras. Through four major steps, the framework extracts and fuses features from different views for effective action recognition, yielding measurable performance improvements in the experiments.
Multi-view action recognition (MVAR) can exploit complementary cues from data captured at different viewpoints for effective action recognition, yet it remains under-explored. The MVAR domain poses several challenges, such as divergence in viewpoints, invisible regions, and different scales of appearance in each view, that demand better solutions for real-world applications. In this paper, we present a conflux long short-term memory (LSTMs) network to recognize actions from multi-view cameras. The proposed framework has four major steps: 1) frame-level feature extraction; 2) propagation of the features through the conflux LSTMs network to learn view self-reliant patterns; 3) view inter-reliant pattern learning and correlation computation; and 4) action classification. First, we extract deep features from a sequence of frames using a pre-trained VGG19 CNN model for each view. Second, we forward the extracted features to the conflux LSTMs network to learn the view self-reliant patterns. In the next step, we compute inter-view correlations using the pairwise dot product over the LSTM outputs corresponding to different views to learn the view inter-reliant patterns. In the final step, we use flatten layers followed by a SoftMax classifier for action recognition. Compared to the state-of-the-art on benchmark datasets, experimental results report an increase of 3% and 2% on the Northwestern-UCLA and MCAD datasets, respectively. (c) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
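The four-step pipeline in the abstract can be sketched in PyTorch. This is only an illustrative reconstruction, not the authors' implementation: all layer sizes, the `ConfluxLSTM` class, and the exact form of the pairwise dot product are assumptions, and the frame-level VGG19 features are presumed to be extracted offline and passed in as tensors.

```python
import torch
import torch.nn as nn


class ConfluxLSTM(nn.Module):
    """Illustrative sketch of the described pipeline (sizes are assumptions):
    one LSTM branch per view learns view self-reliant patterns, pairwise
    dot products across branches capture inter-view correlations, and a
    flatten + SoftMax head performs action classification."""

    def __init__(self, feat_dim=4096, hidden=256, num_views=3, num_classes=10):
        super().__init__()
        self.num_views = num_views
        # Step 2: one LSTM per camera view (view self-reliant patterns).
        self.branches = nn.ModuleList(
            nn.LSTM(feat_dim, hidden, batch_first=True) for _ in range(num_views)
        )
        n_pairs = num_views * (num_views - 1) // 2
        # Step 4: flattened pairwise correlations -> class logits.
        self.classifier = nn.Linear(n_pairs * hidden, num_classes)

    def forward(self, views):
        # views: list of (batch, time, feat_dim) tensors, one per view,
        # e.g. frame-level VGG19 features extracted beforehand (step 1).
        outs = [branch(v)[0] for branch, v in zip(self.branches, views)]
        # Step 3: pairwise dot product over time between view branches
        # (one plausible reading of the abstract's correlation step).
        pairs = []
        for i in range(self.num_views):
            for j in range(i + 1, self.num_views):
                pairs.append((outs[i] * outs[j]).sum(dim=1))  # (batch, hidden)
        fused = torch.cat(pairs, dim=1)  # flatten the correlation features
        return self.classifier(fused)    # logits; apply softmax for probabilities
```

For example, three views of a 2-clip batch with 6 frames each would be passed as a list of three `(2, 6, feat_dim)` tensors, and the module returns `(2, num_classes)` logits.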

