3.8 Proceedings Paper

IntegralAction: Pose-driven Feature Integration for Robust Human Action Recognition in Videos

Publisher

IEEE Computer Society
DOI: 10.1109/CVPRW53098.2021.00372


Funding

  1. NRF grant NRF-2017R1E1A1A01077999 (50%)
  2. Visual Turing Test project, IITP-2017-0-01780 (50%)
  3. IITP grant funded by the Ministry of Science and ICT of Korea (2019-0-01906)

Abstract

Most current action recognition methods rely heavily on appearance information by taking an RGB sequence of entire image regions as input. While effective at exploiting contextual information around humans, e.g., human appearance and scene category, they are easily fooled by out-of-context action videos where the context does not match the target action. In contrast, pose-based methods, which take a sequence of human skeletons only as input, suffer from inaccurate pose estimation or the inherent ambiguity of human pose. Integrating these two approaches has turned out to be non-trivial; training a model with both appearance and pose ends up with a strong bias towards appearance and does not generalize well to unseen videos. To address this problem, we propose to learn pose-driven feature integration that dynamically combines appearance and pose streams by observing pose features on the fly. The main idea is to let the pose stream decide how much and which appearance information is used in the integration, based on whether the given pose information is reliable. We show that the proposed IntegralAction achieves highly robust performance across in-context and out-of-context action video datasets. The code is publicly available.
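
The abstract describes the integration mechanism only at a high level. The sketch below illustrates one plausible form of such pose-driven gating in PyTorch: the pose stream produces a per-channel gate that scales the appearance features before the two streams are fused. It is a minimal illustration under stated assumptions, not the authors' released implementation; the module name PoseDrivenIntegration, the feature dimensions, and the sigmoid gate are all hypothetical choices made for this example.

    # Minimal sketch of pose-driven feature integration as described in the abstract.
    # Assumption: each stream has already been pooled into a single feature vector.
    import torch
    import torch.nn as nn


    class PoseDrivenIntegration(nn.Module):
        def __init__(self, appearance_dim: int = 2048, pose_dim: int = 256):
            super().__init__()
            # Project pose features to the appearance dimension so the two can be summed.
            self.pose_proj = nn.Linear(pose_dim, appearance_dim)
            # Gate predicted from pose features only: it decides how much and which
            # appearance channels are used, depending on how reliable the pose looks.
            self.gate = nn.Sequential(
                nn.Linear(pose_dim, appearance_dim),
                nn.Sigmoid(),
            )

        def forward(self, appearance_feat: torch.Tensor, pose_feat: torch.Tensor) -> torch.Tensor:
            # appearance_feat: (batch, appearance_dim) from the RGB stream
            # pose_feat:       (batch, pose_dim) from the skeleton stream
            g = self.gate(pose_feat)                       # per-channel weights in [0, 1]
            return g * appearance_feat + self.pose_proj(pose_feat)


    if __name__ == "__main__":
        # Toy usage with random tensors standing in for the two stream outputs.
        module = PoseDrivenIntegration()
        app = torch.randn(4, 2048)
        pose = torch.randn(4, 256)
        print(module(app, pose).shape)  # torch.Size([4, 2048])

The key design point the abstract emphasizes is that the gate is computed from the pose features alone, so the pose stream, rather than the appearance stream, controls how much contextual appearance information enters the fused representation; how the full model trains this gate end to end is detailed in the paper itself.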
