Proceedings Paper

Multi-Modal Reasoning Graph for Scene-Text Based Fine-Grained Image Classification and Retrieval


By leveraging multi-modal content in the form of visual and textual cues, this work significantly improves performance on fine-grained image classification and retrieval tasks. The model obtains relationship-enhanced features by learning a common semantic space between salient objects and text found in an image, outperforming the previous state of the art on both tasks.
Scene text instances found in natural images carry explicit semantic information that can provide important cues for solving a wide array of computer vision problems. In this paper, we focus on leveraging multi-modal content in the form of visual and textual cues to tackle the task of fine-grained image classification and retrieval. First, we obtain the text instances from images by employing a text reading system. Then, we combine textual features with salient image regions to exploit the complementary information carried by the two sources. Specifically, we employ a Graph Convolutional Network to perform multi-modal reasoning and obtain relationship-enhanced features by learning a common semantic space between salient objects and text found in an image. By obtaining an enhanced set of visual and textual features, the proposed model greatly outperforms the previous state of the art in two different tasks, fine-grained classification and image retrieval, on the ConText [23] and Drink Bottle [4] datasets.
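
The core idea described in the abstract is to treat detected image regions and OCR'd text instances as nodes of a graph and refine them jointly with a Graph Convolutional Network. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the module name `MultiModalGCN`, the feature dimensions, and the dense softmax-normalized adjacency are assumptions made for clarity.

```python
# Minimal sketch (assumed, not the paper's code) of GCN-based multi-modal
# reasoning: visual-region and scene-text features are projected into a
# common semantic space, treated as nodes of a fully connected graph, and
# refined with one graph-convolution step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiModalGCN(nn.Module):
    def __init__(self, visual_dim=2048, text_dim=300, hidden_dim=512):
        super().__init__()
        # Project each modality into the shared semantic space.
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # One graph-convolution weight; deeper stacks are possible.
        self.gcn_weight = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, visual_feats, text_feats):
        # visual_feats: (num_regions, visual_dim) salient-region features
        # text_feats:   (num_words, text_dim) scene-text embeddings
        nodes = torch.cat(
            [self.visual_proj(visual_feats), self.text_proj(text_feats)],
            dim=0,
        )  # (N, hidden_dim)
        # Dense pairwise affinity, normalized row-wise, acts as a learned
        # adjacency matrix over all visual and textual nodes.
        adj = F.softmax(nodes @ nodes.t(), dim=-1)  # (N, N)
        # One GCN step: aggregate neighbors, transform, add a residual.
        refined = F.relu(self.gcn_weight(adj @ nodes)) + nodes
        return refined  # relationship-enhanced node features


# Example: 36 detected regions and 5 recognized words.
model = MultiModalGCN()
out = model(torch.randn(36, 2048), torch.randn(5, 300))
print(out.shape)  # torch.Size([41, 512])
```

The refined node features could then be pooled into a single image representation for classification, or matched against query features for retrieval, as the abstract describes.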
