Proceedings Paper

Scene Text Visual Question Answering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ICCV.2019.00439

Keywords

-

Funding

  1. aB-SINTHE (Fundacion BBVA 2017)
  2. CERCA Programme / Generalitat de Catalunya
  3. European Social Fund [CCI: 2014ES05SFOP007]
  4. NVIDIA Corporation
  5. AGAUR [2019-FIB01233]
  6. UAB
  7. [TIN2017-89779-P]
  8. [712949]


Current visual question answering datasets do not consider the rich semantic information conveyed by text within an image. In this work, we present a new dataset, ST-VQA, that aims to highlight the importance of exploiting high-level semantic information present in images as textual cues in the Visual Question Answering process. We use this dataset to define a series of tasks of increasing difficulty for which reading the scene text in the context provided by the visual information is necessary to reason and generate an appropriate answer. We propose a new evaluation metric for these tasks that accounts for both reasoning errors and shortcomings of the text recognition module. In addition, we put forward a series of baseline methods, which provide further insight into the newly released dataset and set the scene for further research.
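
The evaluation metric the paper introduces for these tasks is the Average Normalized Levenshtein Similarity (ANLS): each predicted answer is scored by its best normalized edit-distance similarity to any ground-truth answer, and similarities below a threshold (0.5) are truncated to zero, so plausible near-misses caused by imperfect text recognition earn partial credit while clearly wrong reads earn none. The Python sketch below is an illustrative re-implementation under common assumptions (case-insensitive, whitespace-trimmed comparison), not the official evaluation script; the helper names `levenshtein` and `anls` are our own.

```python
from typing import List


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def anls(predictions: List[str], gt_answers: List[List[str]],
         tau: float = 0.5) -> float:
    """Average Normalized Levenshtein Similarity over all questions.

    Each prediction is scored against every ground-truth answer for its
    question and the best match counts. Similarities below the threshold
    tau are truncated to 0, so answers that are likely recognition
    failures rather than near-misses receive no credit.
    """
    total = 0.0
    for pred, answers in zip(predictions, gt_answers):
        best = 0.0
        for ans in answers:
            p, a = pred.strip().lower(), ans.strip().lower()
            denom = max(len(p), len(a))
            nl = levenshtein(p, a) / denom if denom > 0 else 0.0
            best = max(best, 1.0 - nl)
        total += best if best >= tau else 0.0
    return total / len(predictions) if predictions else 0.0
```

For example, a prediction of "coca cola" against ground truths ["coca-cola", "cocacola"] differs by a single edit out of nine characters, so it scores about 0.89 instead of the 0 an exact-match metric would give; a prediction further than the threshold from every ground-truth answer scores exactly 0.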
