Article

RUArt: A Novel Text-Centered Solution for Text-Based Visual Question Answering

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 25, Pages 1-12

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2021.3120194

Keywords

Attention mechanism; computer vision; machine reading comprehension; natural language processing; visual question answering


This paper proposes RUArt, a novel text-centered method for text-based visual question answering. RUArt reads the image, understands the question, the OCR'd text, and the scene objects in context, and mines the relationships among them. Experimental results show that RUArt effectively exploits the contextual information of the text and the stable relationships between text and objects.
Text-based visual question answering (VQA) requires reading and understanding the text in an image to correctly answer a given question. However, most current methods simply feed optical character recognition (OCR) tokens extracted from the image into the VQA model, without considering the contextual information of those tokens or mining the relationships between OCR tokens and scene objects. In this paper, we propose a novel text-centered method called RUArt (Reading, Understanding and Answering the Related Text) for text-based VQA. Taking an image and a question as input, RUArt first reads the image and obtains the text and scene objects. Then, it understands the question, the OCR'd text, and the objects in the context of the scene, and further mines the relationships among them. Finally, it answers with the related text for the given question through text semantic matching and reasoning. We evaluate RUArt on two text-based VQA benchmarks (ST-VQA and TextVQA) and conduct extensive ablation studies to explore the reasons behind its effectiveness. Experimental results demonstrate that our method can effectively explore the contextual information of the text and mine the stable relationships between the text and objects.
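
Since the abstract only describes the pipeline at a high level, the following is a minimal, hypothetical sketch of a text-centered answer selector in that spirit: OCR-token embeddings attend over the question and over detected object labels, and each OCR token is then scored as a candidate answer. All module names, dimensions, and the toy inputs are illustrative assumptions, not the authors' implementation of RUArt.

import torch
import torch.nn as nn


class TextCenteredAnswerSelector(nn.Module):
    """Hypothetical sketch: score OCR tokens as answer candidates using
    question-conditioned and object-conditioned attention."""

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Cross-attention: OCR tokens attend to question words ...
        self.q_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # ... and to detected object labels (relation mining, loosely speaking).
        self.o_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, question_ids, ocr_ids, object_ids):
        q = self.embed(question_ids)   # (B, Lq, D) question word embeddings
        t = self.embed(ocr_ids)        # (B, Lt, D) OCR token embeddings
        o = self.embed(object_ids)     # (B, Lo, D) object label embeddings
        # Understand the OCR text in the context of the question.
        t_q, _ = self.q_attn(t, q, q)
        # Relate the OCR text to the scene objects.
        t_o, _ = self.o_attn(t, o, o)
        fused = t + t_q + t_o
        # One logit per OCR token; the argmax token is the predicted answer.
        return self.score(fused).squeeze(-1)   # (B, Lt)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TextCenteredAnswerSelector(vocab_size=100)
    question = torch.randint(0, 100, (1, 6))   # e.g. "what is written on the sign"
    ocr = torch.randint(0, 100, (1, 4))        # OCR tokens found in the image
    objects = torch.randint(0, 100, (1, 3))    # detected object labels
    logits = model(question, ocr, objects)
    print("predicted OCR token index:", logits.argmax(dim=-1).item())

In RUArt itself the components are considerably richer (contextual language encoding, relational reasoning, and semantic-matching answer prediction), but this sketch indicates where the contextual understanding of the text and the text-object relation mining would enter the computation.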
