Article

VQA: Visual Question Answering

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 123, Issue 1, Pages 4-31

Publisher

SPRINGER
DOI: 10.1007/s11263-016-0966-6

Keywords

Visual Question Answering

Funding

  1. Paul G. Allen Family Foundation
  2. National Science Foundation CAREER award
  3. Army Research Office YIP Award
  4. Office of Naval Research grant
  5. ICTAS at Virginia Tech
  6. Google Faculty Research Awards
  7. Direct For Computer & Info Scie & Enginr
  8. Div Of Information & Intelligent Systems [1661374] Funding Source: National Science Foundation

We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and more complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words, or a closed set of answers can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org) and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
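The automatic evaluation mentioned above works because each question in the dataset has ten human-provided answers, which allows a consensus-based score. A minimal sketch of such a consensus accuracy rule is below; it assumes the min(matches/3, 1) form associated with this dataset, and omits the answer normalization (punctuation, articles, number words) the official evaluation also applies:

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Consensus accuracy: an answer counts as fully correct if at
    least 3 of the (typically 10) human annotators gave it; partial
    credit is given for 1 or 2 agreeing annotators."""
    norm = predicted.strip().lower()
    matches = sum(1 for a in human_answers if a.strip().lower() == norm)
    return min(matches / 3.0, 1.0)


# Example: 2 of 10 annotators agree -> accuracy 2/3
print(vqa_accuracy("2", ["2", "2"] + ["3"] * 8))
```

Because most open-ended answers are a few words long, exact string matching after normalization is usually sufficient, which is what makes large-scale automatic scoring practical.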
