3.8 Proceedings Paper

Maintaining Reasoning Consistency in Compositional Visual Question Answering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR52688.2022.00504

Keywords

-

Funding

  1. Natural Science Foundation of China (NSFC) [62172041, 62176021]

Abstract

This paper presents a dialog-like reasoning method for maintaining reasoning consistency when answering a compositional question and its sub-questions. The method integrates the reasoning processes for the sub-questions into the reasoning process for the compositional question, in the manner of a dialog task, and applies a consistency constraint that penalizes inconsistent answer predictions; experimental results demonstrate its effectiveness.
A compositional question is one that involves multiple visual concepts (e.g., objects, attributes, and relationships) and requires compositional reasoning to answer. Existing VQA models can answer a compositional question well, yet they struggle to maintain reasoning consistency between the compositional question and its sub-questions. For example, a compositional question for an image might be "Are there any elephants to the right of the white bird?", with "Is any bird visible in the scene?" as one of its sub-questions; a model may answer yes to the compositional question but no to the sub-question. This paper presents a dialog-like reasoning method for maintaining reasoning consistency in answering a compositional question and its sub-questions. Our method integrates the reasoning processes for the sub-questions into the reasoning process for the compositional question, as in a dialog task, and uses a consistency constraint to penalize inconsistent answer predictions. To enable quantitative evaluation of reasoning consistency, we construct a GQA-Sub dataset based on the well-organized GQA dataset. Experimental results on the GQA and GQA-Sub datasets demonstrate the effectiveness of our method.
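
The abstract refers to a consistency constraint that penalizes inconsistent answer predictions but does not spell out its form here. As an illustration only, the minimal PyTorch sketch below shows one way such a penalty could be expressed for yes/no questions; the function name, the yes/no answer indices, and the weighting factor `lam` are assumptions made for this example, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def consistency_penalty(comp_logits, sub_logits, yes_idx, no_idx):
    """Soft penalty for answering 'yes' to a compositional question while
    answering 'no' to one of its entailed sub-questions (illustrative only).

    comp_logits: tensor of shape (num_answers,) for the compositional question
    sub_logits:  tensor of shape (num_subs, num_answers) for its sub-questions
    """
    p_comp_yes = F.softmax(comp_logits, dim=-1)[yes_idx]
    p_sub_no = F.softmax(sub_logits, dim=-1)[:, no_idx]
    # The penalty is large only when both contradictory answers are probable.
    return (p_comp_yes * p_sub_no).mean()


# Toy usage: 2 candidate answers (index 0 = "yes", index 1 = "no"),
# one compositional question and two sub-questions.
comp_logits = torch.tensor([2.0, -1.0])                # leans "yes"
sub_logits = torch.tensor([[1.5, -0.5], [-2.0, 2.0]])  # second sub-question leans "no"
vqa_loss = torch.tensor(0.0)                           # stand-in for the usual VQA answer loss
lam = 0.5                                              # assumed weight of the consistency term
total_loss = vqa_loss + lam * consistency_penalty(comp_logits, sub_logits, 0, 1)
print(total_loss.item())
```

In this sketch the penalty is simply added to the standard VQA training loss, so predictions that affirm the compositional question while denying an entailed sub-question are discouraged during training.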
