Article

Hierarchical Long Short-Term Concurrent Memory for Human Interaction Recognition

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2019.2942030

Keywords

Dynamics; Videos; Logic gates; Deep learning; Task analysis; Pattern recognition; Feeds; Human interaction recognition; long short-term memory; activity recognition; deep learning

Funding

  1. National Key Research and Development Program of China [2016YFB1001001]
  2. National Natural Science Foundation of China [61732007, 61702265, 61672285, 61772268]
  3. Natural Science Foundation of Jiangsu Province [BK20170856]


Summary

This work proposes a novel Hierarchical Long Short-Term Concurrent Memory (H-LSTCM) model for recognizing human interactions in videos. It combines individual dynamics and group dynamics to capture the long-term inter-related dynamics of human interactions, and experimental results validate its effectiveness.

Abstract

In this work, we aim to address the problem of human interaction recognition in videos by exploring the long-term inter-related dynamics among multiple persons. Recently, Long Short-Term Memory (LSTM) has become a popular choice for modeling individual dynamics in single-person action recognition, owing to its ability to capture temporal motion information over a range of time. However, most existing LSTM-based methods capture the dynamics of a human interaction either by simply combining all individual dynamics or by modeling the group as a whole. Such methods neglect how the inter-related dynamics of human interactions change over time. To this end, we propose a novel Hierarchical Long Short-Term Concurrent Memory (H-LSTCM) that models the long-term inter-related dynamics among a group of persons for recognizing human interactions. Specifically, we first feed each person's static features into a Single-Person LSTM to model that person's individual dynamics. At each time step, the outputs of all Single-Person LSTM units are then fed into a novel Concurrent LSTM (Co-LSTM) unit, which consists mainly of multiple sub-memory units, a new cell gate, and a new co-memory cell. In the Co-LSTM unit, each sub-memory unit stores an individual's motion information, while the unit as a whole selectively integrates and stores inter-related motion information between multiple interacting persons from the sub-memory units via the cell gate and the co-memory cell, respectively. Extensive experiments on several public datasets validate the effectiveness of the proposed H-LSTCM against baseline and state-of-the-art methods.
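The hierarchy described above — per-person sub-memories feeding a shared, gated group memory — can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's exact equations: the weight matrices, gating form, and the averaging into the co-memory cell are assumptions introduced here for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CoLSTMStep:
    """Hypothetical sketch of one Co-LSTM time step.

    Each person p keeps a sub-memory cell c_sub[p], updated from that
    person's Single-Person LSTM output h_p. A cell gate weighs how much
    of each sub-memory feeds a shared co-memory cell, which accumulates
    the inter-related (group) motion information.
    """

    def __init__(self, n_persons, d, seed=0):
        rng = np.random.default_rng(seed)
        self.n, self.d = n_persons, d
        # Illustrative gate weights (shared across persons in this sketch).
        self.W_i = rng.standard_normal((d, d)) * 0.1  # input gate
        self.W_f = rng.standard_normal((d, d)) * 0.1  # forget gate
        self.W_g = rng.standard_normal((d, d)) * 0.1  # candidate memory
        self.W_c = rng.standard_normal((d, d)) * 0.1  # cell gate (selection)

    def __call__(self, h_persons, c_sub, c_co):
        """h_persons: (n, d) Single-Person LSTM outputs at this time step.
        c_sub: (n, d) sub-memory cells. c_co: (d,) co-memory cell."""
        new_sub = np.empty_like(c_sub)
        selected = np.zeros(self.d)
        for p in range(self.n):
            h = h_persons[p]
            i = sigmoid(h @ self.W_i)           # how much new info to admit
            f = sigmoid(h @ self.W_f)           # how much old memory to keep
            g = np.tanh(h @ self.W_g)           # candidate motion content
            new_sub[p] = f * c_sub[p] + i * g   # update person p's sub-memory
            # Cell gate: select this sub-memory's contribution to the group.
            selected += sigmoid(h @ self.W_c) * new_sub[p]
        # Co-memory cell integrates the selected inter-related information.
        new_co = np.tanh(c_co + selected / self.n)
        return new_sub, new_co
```

In this sketch the co-memory cell is what distinguishes the Co-LSTM from running independent per-person LSTMs: the cell gate decides, per person and per time step, how much of each sub-memory is relevant to the group interaction before it is integrated.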

