Journal
IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 2, Pages 913-926
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2019.2914351
Keywords
Deep learning; image captioning; multimodal; visual question answering (VQA)
Funding
- National Natural Science Foundation of China [61772510]
- Young Top-Notch Talent Program of Chinese Academy of Sciences [QYZDB-SSWJSC015]
This paper proposes a method to reduce the semantic gap between vision and language by utilizing text-guided and semantic-guided attention, highlighting relevant regions and concepts through two-level attention networks, and achieving excellent experimental results in image captioning and visual question answering tasks.
Vision-to-language tasks aim to integrate computer vision and natural language processing, and they have attracted the attention of many researchers. Typical approaches encode an image into feature representations and decode them into natural language sentences, but they neglect high-level semantic concepts and the subtle relationships between image regions and natural language elements. To make full use of this information, this paper exploits text-guided attention and semantic-guided attention (SA) to find more correlated spatial information and reduce the semantic gap between vision and language. Our method includes two-level attention networks: a text-guided attention network, which selects the text-related regions, and an SA network, which highlights the concept-related regions and the region-related concepts. Finally, all of this information is incorporated to generate captions or answers. Image captioning and visual question answering experiments have been carried out, and the results demonstrate the excellent performance of the proposed approach.
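To make the idea of text-guided attention concrete, the sketch below shows one common way a text vector can guide additive attention over a set of image-region features. This is not the authors' implementation; the module name, dimensions, and scoring function are assumptions chosen for a minimal, self-contained example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextGuidedAttention(nn.Module):
    """Minimal sketch of text-guided attention over image regions.

    NOTE: illustrative only -- not the paper's actual architecture.
    All layer names and sizes here are assumptions.
    """
    def __init__(self, region_dim, text_dim, hidden_dim):
        super().__init__()
        self.proj_region = nn.Linear(region_dim, hidden_dim)
        self.proj_text = nn.Linear(text_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions, text):
        # regions: (batch, num_regions, region_dim), e.g. CNN feature-map cells
        # text:    (batch, text_dim), e.g. a pooled question/caption embedding
        h = torch.tanh(self.proj_region(regions)
                       + self.proj_text(text).unsqueeze(1))   # (batch, R, hidden)
        alpha = F.softmax(self.score(h).squeeze(-1), dim=-1)  # weights over regions
        attended = (alpha.unsqueeze(-1) * regions).sum(dim=1) # (batch, region_dim)
        return attended, alpha

# Usage: attend to 36 region features guided by a question embedding.
att = TextGuidedAttention(region_dim=2048, text_dim=512, hidden_dim=512)
regions = torch.randn(2, 36, 2048)
question = torch.randn(2, 512)
context, weights = att(regions, question)
print(context.shape, weights.shape)  # torch.Size([2, 2048]) torch.Size([2, 36])
```

In this reading, the attention weights play the role of "selecting text-related regions" described in the abstract; the paper's SA network would analogously score regions against semantic-concept embeddings rather than the text vector.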