Article

Scenario-Aware Recurrent Transformer for Goal-Directed Video Captioning

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3503927

Keywords

Transformer; video captioning; scenario-aware; long-time dependency

Funding

  1. National Natural Science Foundation of China [61832001]
  2. Open Fund of Intelligent Terminal Key Laboratory of Sichuan Province [SCITLAB-1016]
  3. Zhejiang Lab's International Talent Fund for Young Professionals

Abstract

This paper proposes a novel video captioning method that fully mines visual cues and considers scenario information, producing goal-directed and narratively coherent video descriptions.
Fully mining visual cues to aid content understanding is crucial for video captioning. However, most state-of-the-art video captioning methods generate captions purely from straightforward, frame-level information while ignoring scenario and context. To fill this gap, we propose a novel, simple but effective scenario-aware recurrent transformer (SART) model for video captioning. Our model contains a scenario understanding module that obtains a global perspective across multiple frames, providing a specific scenario to guarantee a goal-directed description. Moreover, to achieve narrative continuity in the generated paragraph, a unified recurrent transformer is adopted. To demonstrate the effectiveness of SART, we conduct comprehensive experiments on several large-scale video description datasets, including ActivityNet, YouCookII, and VideoStory. Additionally, we extend a story-oriented evaluation framework to assess the quality of the generated captions more precisely. The superior performance shows that SART has a strong ability to generate correct, deliberative, and narratively coherent video descriptions.
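The abstract's two core ideas can be sketched in miniature: a global scenario vector pooled over all frames conditions every segment's description, while a memory state threaded across segments stands in for the recurrent transformer's carry-over of narrative context. This is a hypothetical illustration, not the authors' code; all function names, the mean-pooling aggregation, and the memory-update rule are assumptions:

```python
# Hypothetical sketch of SART's two key ideas (not the paper's implementation):
# (1) a scenario vector pooled over all frame features provides a global,
#     goal-directed context; (2) a memory state carried across segments
#     mimics the recurrent transformer's narrative continuity.

def mean_vec(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def scenario_vector(frame_feats):
    """Global scenario context: mean-pool the features of all frames."""
    return mean_vec(frame_feats)

def caption_segment(seg_feats, scenario, memory, alpha=0.5):
    """Decode one segment conditioned on scenario + recurrent memory.

    Returns a context vector (a stand-in for the decoded caption) and the
    updated memory passed to the next segment.
    """
    local = mean_vec(seg_feats)
    ctx = [l + s + m for l, s, m in zip(local, scenario, memory)]
    new_memory = [alpha * m + (1 - alpha) * c for m, c in zip(memory, ctx)]
    return ctx, new_memory

def caption_video(segments):
    """Caption each segment in order, threading memory between segments."""
    all_frames = [f for seg in segments for f in seg]
    scenario = scenario_vector(all_frames)
    memory = [0.0] * len(scenario)
    contexts = []
    for seg in segments:
        ctx, memory = caption_segment(seg, scenario, memory)
        contexts.append(ctx)
    return contexts
```

In this toy form, every segment sees both the same global scenario (keeping descriptions goal-directed) and a memory summarizing earlier segments (keeping the paragraph coherent), which is the division of labor the abstract attributes to the scenario understanding module and the recurrent transformer, respectively.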

