Article

Uncovering the Temporal Context for Video Question Answering

Journal

INTERNATIONAL JOURNAL OF COMPUTER VISION
Volume 124, Issue 3, Pages 409-421

Publisher

SPRINGER
DOI: 10.1007/s11263-017-1033-7

Keywords

Video sequence modeling; Video question answering; Video prediction; Cross-media

Funding

  1. Data to Decisions Cooperative Research Centre
  2. Google Faculty Award
  3. Australian Government Research Training Program Scholarship
  4. NVIDIA Corporation

In this work, we introduce Video Question Answering in the temporal domain to infer the past, describe the present, and predict the future. We present an encoder-decoder approach using Recurrent Neural Networks to learn the temporal structure of videos, and we introduce a dual-channel ranking loss to answer multiple-choice questions. We explore fill-in-the-blank questions for a finer understanding of video content, and collect our Video Context QA dataset, consisting of 109,895 video clips with a total duration of more than 1,000 hours drawn from the existing TACoS, MPII-MD and MEDTest 14 datasets. In addition, 390,744 corresponding questions are generated from annotations. Extensive experiments demonstrate that our approach significantly outperforms the baseline methods.
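The abstract does not spell out the loss formulation, but the idea of a dual-channel ranking loss for multiple-choice answers can be sketched as follows. This is a hypothetical illustration, not the authors' exact method: it assumes the video, the question, and each candidate answer have already been embedded into a shared vector space (the function names and the cosine-similarity scoring are assumptions), and applies a hinge-style margin over two channels — the correct answer must outscore each distractor against both the video embedding and the question embedding.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def dual_channel_ranking_loss(video_emb, question_emb, pos_emb, neg_embs,
                              margin=0.2):
    # Hypothetical dual-channel hinge loss: for each distractor, penalize
    # any channel (video or question) where the correct answer does not
    # outscore the distractor by at least `margin`.
    loss = 0.0
    for neg in neg_embs:
        for ctx in (video_emb, question_emb):
            pos_score = cosine(ctx, pos_emb)
            neg_score = cosine(ctx, neg)
            loss += max(0.0, margin + neg_score - pos_score)
    return loss / (2 * len(neg_embs))
```

At inference time, the same two-channel score (e.g. the sum of the two cosine similarities) would rank the multiple-choice candidates, with the highest-scoring candidate taken as the answer.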
