Article

Integrating Part of Speech Guidance for Image Captioning

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 23, Issue -, Pages 92-104

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2020.2976552

Keywords

Visualization; Predictive models; Semantics; Feature extraction; Task analysis; Computer vision; Speech processing; Part of speech; image captioning; multi-task learning

Funding

  1. National Key Research and Development Plan [2016YFB1001004]
  2. Guangdong Science and Technology Project [2017B010123003]
  3. National Natural Science Foundation of China [61772161, 61906143]


The paper proposes an integrated image captioning method that incorporates part of speech information, using a part of speech prediction network within an encoder-decoder framework, and multi-task learning to generate captions with more accurate visual information and better compliance with language habits and grammar rules.
To generate an image caption, the content of the image must first be fully understood; then the semantic information contained in the image must be described using a phrase or sentence that conforms to grammatical rules. The task therefore requires techniques from both computer vision and natural language processing to connect the two media forms, which is highly challenging. To adaptively adjust the influence of visual information and language information on the captioning process, this paper proposes integrating part-of-speech information into image captioning models based on the encoder-decoder framework. First, a part-of-speech prediction network is proposed to analyze and model the part-of-speech sequences of the words in natural language sentences; then, different mechanisms are proposed to integrate the part-of-speech guidance information with merge-based and inject-based image captioning models, respectively; finally, based on the integrated frameworks, a multi-task learning paradigm is proposed to facilitate model training. Experiments conducted on two widely used image captioning datasets, Flickr30k and COCO, validate that the captions generated by the proposed method contain more accurate visual information and comply better with language habits and grammar rules.
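The core idea of part-of-speech guidance can be illustrated with a minimal sketch of one decoding step: a POS prediction head estimates the tag of the next word, and that distribution weights how much visual versus language information is injected into the fused feature. Everything below — the tag set, the gating rule, the tensor sizes, and all variable names — is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical POS tag set; NOUN and ADJ are assumed to depend
# more on visual evidence, the rest more on language context.
POS_TAGS = ["NOUN", "VERB", "ADJ", "DET", "OTHER"]
VISUAL_POS = {"NOUN", "ADJ"}

rng = np.random.default_rng(0)
d = 8                                        # toy hidden size (assumption)
W_pos = rng.normal(size=(len(POS_TAGS), d))  # POS prediction head weights

def decode_step(hidden, visual_feat, lang_feat):
    """One decoding step: predict a POS distribution from the decoder
    hidden state, then use it to gate visual vs. language features --
    a rough analogue of the inject-based integration in the abstract."""
    pos_probs = softmax(W_pos @ hidden)  # P(pos of next word | hidden)
    # Weight on the visual feature = total probability mass on
    # visually grounded tags; the rest goes to the language feature.
    visual_w = sum(p for tag, p in zip(POS_TAGS, pos_probs)
                   if tag in VISUAL_POS)
    fused = visual_w * visual_feat + (1.0 - visual_w) * lang_feat
    return pos_probs, fused

hidden = rng.normal(size=d)
pos_probs, fused = decode_step(hidden, rng.normal(size=d), rng.normal(size=d))
```

Under the multi-task learning paradigm the abstract describes, such a POS head would be trained jointly with the captioner, e.g. with a total loss of the form caption cross-entropy plus a weighted POS cross-entropy, so that the gating signal and the word predictions improve together.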

