Article

Plug-and-Play Regulators for Image-Text Matching

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 32, Issue -, Pages 2322-2334

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2023.3266887

Keywords

Image-text matching; recurrent correspondence regulator; recurrent aggregation regulator; cross-modal attention; similarity aggregation; plug-and-play operation


Researchers have developed two efficient regulators, namely the Recurrent Correspondence Regulator (RCR) and the Recurrent Aggregation Regulator (RAR), which enhance the flexibility of correspondence and emphasize important alignments in image-text matching. These regulators can be easily incorporated into existing frameworks and have shown significant improvements in various models.
Exploiting fine-grained correspondence and visual-semantic alignments has shown great potential in image-text matching. Generally, recent approaches first employ a cross-modal attention unit to capture latent region-word interactions, and then integrate all the alignments to obtain the final similarity. However, most of them adopt one-time forward association or aggregation strategies with complex architectures or additional information, while ignoring the regulation ability of network feedback. In this paper, we develop two simple yet effective regulators that efficiently encode the message output to automatically contextualize and aggregate cross-modal representations. Specifically, we propose (i) a Recurrent Correspondence Regulator (RCR), which progressively facilitates the cross-modal attention unit with adaptive attention factors to capture more flexible correspondence, and (ii) a Recurrent Aggregation Regulator (RAR), which adjusts the aggregation weights repeatedly to increasingly emphasize important alignments and dilute unimportant ones. Notably, both RCR and RAR are plug-and-play: they can be incorporated into many frameworks based on cross-modal interaction to obtain significant benefits, and their cooperation achieves further improvements. Extensive experiments on the MSCOCO and Flickr30K datasets validate that they bring an impressive and consistent R@1 gain across multiple models, confirming the general effectiveness and generalization ability of the proposed methods.
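To make the feedback idea behind RAR concrete, the following is a minimal toy sketch in plain Python: aggregation weights over region-word alignment scores start uniform and are recurrently sharpened using the current aggregate similarity. The multiplicative feedback rule here is a hypothetical stand-in for illustration, not the paper's actual regulator update.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def recurrent_aggregation(alignments, steps=3):
    """Toy sketch of the RAR idea (not the paper's exact rule):
    recurrently feed the current similarity back into the weighting
    so important alignments are emphasized and unimportant ones diluted.
    """
    n = len(alignments)
    weights = [1.0 / n] * n  # uniform initial aggregation weights
    sim = sum(w * a for w, a in zip(weights, alignments))
    for _ in range(steps):
        # feedback step: rescale each alignment by the current aggregate
        # similarity before renormalizing (hypothetical update rule)
        weights = softmax([a * (1.0 + sim) for a in alignments])
        sim = sum(w * a for w, a in zip(weights, alignments))
    return sim, weights
```

With alignment scores `[0.1, 0.9, 0.5]`, the recurrent re-weighting pulls the final similarity above the uniform average of 0.5 while keeping it below the maximum alignment, with the largest weight landing on the strongest alignment.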

