Article

Vision-to-Language Tasks Based on Attributes and Attention Mechanism

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 2, Pages 913-926

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2019.2914351

Keywords

Deep learning; image captioning; multimodal; visual question answering (VQA)

Funding

  1. National Natural Science Foundation of China [61772510]
  2. Young Top-Notch Talent Program of Chinese Academy of Sciences [QYZDB-SSWJSC015]

Abstract

This paper proposes a method to reduce the semantic gap between vision and language by utilizing text-guided and semantic-guided attention, highlighting relevant regions and concepts through two-level attention networks, and achieving excellent experimental results in image captioning and visual question answering tasks.
Vision-to-language tasks aim to integrate computer vision and natural language processing, and have attracted the attention of many researchers. Typical approaches encode an image into feature representations and decode them into natural language sentences, but they neglect high-level semantic concepts and the subtle relationships between image regions and natural language elements. To make full use of this information, this paper exploits text-guided attention and semantic-guided attention (SA) to find more correlated spatial information and reduce the semantic gap between vision and language. Our method includes two-level attention networks: a text-guided attention network that selects text-related regions, and an SA network that highlights concept-related regions and region-related concepts. Finally, all of this information is incorporated to generate captions or answers. Image captioning and visual question answering experiments have been carried out, and the results demonstrate the excellent performance of the proposed approach.
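The abstract describes the two attention branches only at a high level. Below is a minimal PyTorch sketch of one plausible form of such query-guided soft attention (additive, Bahdanau-style), where the same module can serve as the text-guided branch (query = a text embedding) or the SA branch (query = a semantic-concept embedding). The class name GuidedAttention, all dimensions, and the fusion by concatenation are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedAttention(nn.Module):
    """Soft attention over image region features, guided by an external
    query vector (e.g., a text embedding or a concept embedding).
    This is a generic sketch, not the paper's exact network."""

    def __init__(self, region_dim, query_dim, hidden_dim):
        super().__init__()
        self.proj_region = nn.Linear(region_dim, hidden_dim)
        self.proj_query = nn.Linear(query_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions, query):
        # regions: (batch, num_regions, region_dim); query: (batch, query_dim)
        h = torch.tanh(self.proj_region(regions)
                       + self.proj_query(query).unsqueeze(1))
        # One scalar score per region, normalized to attention weights.
        weights = F.softmax(self.score(h).squeeze(-1), dim=1)
        # Weighted sum of region features: the attended visual vector.
        attended = (weights.unsqueeze(-1) * regions).sum(dim=1)
        return attended, weights

# Hypothetical usage: 36 region features of dim 2048, a 512-d text query,
# and a 300-d semantic-concept query (all shapes are assumptions).
regions = torch.randn(8, 36, 2048)
text_query = torch.randn(8, 512)
concept_query = torch.randn(8, 300)

text_att = GuidedAttention(region_dim=2048, query_dim=512, hidden_dim=512)
sem_att = GuidedAttention(region_dim=2048, query_dim=300, hidden_dim=512)

v_text, _ = text_att(regions, text_query)    # text-related regions
v_sem, _ = sem_att(regions, concept_query)   # concept-related regions
fused = torch.cat([v_text, v_sem], dim=-1)   # fed to a caption/answer decoder
```

In this reading, the two branches produce complementary attended visual vectors that are fused before decoding; how the paper actually combines them (and how region-related concepts are re-weighted) is specified in the full text, not in the abstract.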
