Article

Symbiotic Attention for Egocentric Action Recognition With Object-Centric Alignment

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2020.3015894

Keywords

Feature extraction; Cognition; Three-dimensional displays; Symbiosis; Task analysis; Two dimensional displays; Solid modeling; Egocentric video analysis; action recognition; deep learning; symbiotic attention

Abstract

In this paper, we propose to tackle egocentric action recognition by suppressing background distractors and enhancing action-relevant interactions. Existing approaches usually employ two independent branches to recognize egocentric actions, i.e., a verb branch and a noun branch. However, they lack a mechanism to suppress distracting objects and exploit local human-object correlations. To this end, we introduce two extra sources of information, i.e., the candidate objects' spatial locations and their discriminative features, to enable concentration on the occurring interactions. We design a Symbiotic Attention with Object-centric feature Alignment framework (SAOA) to provide meticulous reasoning between the actor and the environment. First, we introduce an object-centric feature alignment method to inject the local object features into the verb branch and the noun branch. Second, we propose a symbiotic attention mechanism to encourage mutual interaction between the two branches and select the most action-relevant candidates for classification. The framework benefits from the communication among the verb branch, the noun branch, and the local object information. Experiments with different backbones and modalities demonstrate the effectiveness of our method. Notably, our framework achieves state-of-the-art performance on the largest egocentric video dataset.
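The abstract describes two steps: aligning candidate-object features with each branch, and a symbiotic attention step in which each branch scores the aligned candidates using the *other* branch's feature. The following is a minimal, hypothetical sketch of that flow in plain Python, not the paper's actual implementation; the additive fusion and dot-product scoring are stand-in assumptions for illustration only.

```python
import math

def dot(a, b):
    """Inner product of two equal-length feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def symbiotic_attention(branch_feat, other_branch_feat, object_feats):
    """Sketch of one branch's update in a SAOA-like scheme.

    branch_feat:       feature of the branch being updated (verb or noun)
    other_branch_feat: feature of the sibling branch, used to score candidates
    object_feats:      features of detected candidate objects
    """
    # Object-centric alignment: fuse each candidate object feature with the
    # branch feature (element-wise addition is a stand-in for the paper's
    # alignment method).
    aligned = [[b + o for b, o in zip(branch_feat, obj)] for obj in object_feats]
    # Symbiotic step: weight each aligned candidate by its affinity with the
    # other branch, so the verb and noun branches guide each other.
    weights = softmax([dot(other_branch_feat, a) for a in aligned])
    # Attention-weighted sum emphasizes the most action-relevant candidates.
    return [sum(w * a[d] for w, a in zip(weights, aligned))
            for d in range(len(branch_feat))]
```

In a full model, this update would run in both directions (verb features scored by the noun branch and vice versa) before each branch's classifier, which is what lets the two branches communicate through the local object information.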
