Article

Layer-wise enhanced transformer with multi-modal fusion for image caption

Journal

MULTIMEDIA SYSTEMS
Volume 29, Issue 3, Pages 1043-1056

Publisher

SPRINGER
DOI: 10.1007/s00530-022-01036-z

Keywords

Image captioning; Multi-modal feature fusion; Transformer


Abstract
Image captioning automatically generates a descriptive sentence for a given image. Transformer-based architectures perform strongly on this task: object-level visual features are encoded into vector representations, which are then fed to the decoder to generate descriptions. However, existing methods focus mainly on object-level regions and ignore the non-object areas of the image, which weakens the visual context. In addition, the decoder fails to fully exploit the visual information passed from the encoder during the language generation steps. In this paper, we propose Gated Adaptive Controller Attention (GACA), which separately explores the complementarity of text features with region and grid features through attention operations, and then uses a gating mechanism to adaptively fuse the two visual features into a comprehensive image representation. During decoding, we design a Layer-wise Enhanced Cross-Attention (LECA) module that obtains enhanced visual features by computing cross-attention between the generated word embeddings and the multi-level visual information in the encoder. Through an extensive set of experiments, we demonstrate that our proposed model achieves new state-of-the-art performance on the MS COCO dataset.
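The gating mechanism described in the abstract can be understood as a convex combination of the region and grid features, with a sigmoid gate computed per feature dimension. The following is a minimal pure-Python sketch of that idea; the weights, function names, and per-dimension scalar gating are illustrative assumptions, not the paper's actual parameterization:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(region_feat, grid_feat, w_region, w_grid, bias):
    """Hypothetical sketch of gated adaptive fusion: a sigmoid gate
    decides, per dimension, how much to trust the region feature
    versus the grid feature. All parameters here are illustrative."""
    fused = []
    for r, g, wr, wg in zip(region_feat, grid_feat, w_region, w_grid):
        gate = sigmoid(wr * r + wg * g + bias)      # gate in (0, 1)
        fused.append(gate * r + (1.0 - gate) * g)   # convex combination
    return fused

# Toy example with 3-dimensional features
region = [0.5, 1.0, -0.2]
grid   = [0.1, 0.4,  0.9]
fused = gated_fusion(region, grid, [1.0] * 3, [1.0] * 3, 0.0)
```

Because each output is a convex combination, every fused value lies between the corresponding region and grid values, so the gate interpolates rather than amplifies either source.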

