Article

Video abstraction based on fMRI-driven visual attention model

Journal

INFORMATION SCIENCES
Volume 281, Pages 781-796

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2013.12.039

Keywords

Video abstraction; Visual attention; Functional magnetic resonance imaging; Propensity for synchronization; Bayesian surprise model

Funding

  1. NIH [EB 006878, R01 DA033393]
  2. NSFC [61005018, 91120005, 61103061, 61333017]
  3. [NPU-FFR-JC20120237]
  4. [NCET-10-0079]

Abstract

The explosive growth of digital video data poses a profound challenge: how to represent video content succinctly, informatively, and in a human-centric way. This quickly evolving research topic is typically called 'video abstraction'. We are motivated by the facts that the human brain is the end-evaluator of multimedia content and that the brain's responses can quantitatively reveal its attentional engagement in the comprehension of video. We propose a novel video abstraction paradigm that leverages functional magnetic resonance imaging (fMRI) to monitor and quantify the brain's responses to video stimuli, and uses these responses to guide the extraction of visually informative segments. Specifically, the brain regions most relevant to video perception and cognition are identified and used to form brain networks. The propensity for synchronization (PFS), derived from spectral graph theory, is then computed over these networks to yield benchmark attention curves from the fMRI-measured responses to a set of training video streams. These benchmark curves guide the optimization of combinations of low-level visual features produced by the Bayesian surprise model. In the training stage, the optimization objective is to ensure that the learned attentional model correlates well with the brain's responses and reflects the attention viewers pay to video content. In the application stage, the attention curves predicted by the learned and optimized model serve as an effective benchmark for abstracting test videos. Evaluations on a set of video sequences from the TRECVID database demonstrate the effectiveness of the proposed framework.
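
The abstract outlines a three-stage pipeline: (i) derive benchmark attention curves from fMRI responses via a synchronization measure (PFS) over brain networks, (ii) learn a combination of Bayesian-surprise feature curves that matches those benchmarks, and (iii) select the highest-attention segments of a test video as its abstract. The sketch below (Python/NumPy) illustrates one plausible reading of that pipeline and is not the paper's implementation: the sliding-window correlation graph, the Laplacian eigenratio used as a stand-in for PFS, the least-squares weight fit, and the greedy segment picker are all illustrative assumptions.

    import numpy as np

    def pfs_attention_curve(fmri, win=30, step=1, thresh=0.3):
        """Sliding-window synchronizability curve from fMRI time series.

        fmri: array of shape (T, R) -- T time points, R regions of interest.
        For each window, a functional-connectivity graph is built by
        thresholding pairwise correlations, and the Laplacian eigenratio
        lambda_2 / lambda_max serves as a stand-in for the paper's PFS
        (larger = stronger propensity for synchronization).
        """
        T, _ = fmri.shape
        curve = []
        for start in range(0, T - win + 1, step):
            seg = fmri[start:start + win]
            corr = np.nan_to_num(np.corrcoef(seg, rowvar=False))  # R x R correlations
            adj = (np.abs(corr) > thresh).astype(float)           # binarized graph
            np.fill_diagonal(adj, 0.0)
            lap = np.diag(adj.sum(axis=1)) - adj                  # graph Laplacian
            eig = np.linalg.eigvalsh(lap)                         # ascending eigenvalues
            curve.append(eig[1] / eig[-1] if eig[-1] > 0 else 0.0)
        return np.asarray(curve)

    def fit_feature_weights(feature_curves, benchmark):
        """Least-squares weights combining per-frame feature (surprise) curves
        so that their weighted sum tracks the fMRI-derived benchmark curve.
        Assumes both have been resampled to a common temporal resolution."""
        X = np.column_stack(feature_curves)                # (frames, n_features)
        w, *_ = np.linalg.lstsq(X, benchmark, rcond=None)
        return w

    def abstract_video(attention, n_segments=5, seg_len=60):
        """Greedily pick the n highest-attention, non-overlapping frame ranges."""
        order = np.argsort(attention)[::-1]
        chosen, used = [], np.zeros(len(attention), dtype=bool)
        for idx in order:
            lo = max(0, idx - seg_len // 2)
            hi = min(len(attention), idx + seg_len // 2)
            if not used[lo:hi].any():
                chosen.append((lo, hi))
                used[lo:hi] = True
            if len(chosen) == n_segments:
                break
        return sorted(chosen)

In this reading, a larger eigenratio indicates a brain network that synchronizes more readily, taken here as a proxy for stronger attentional engagement; the paper's actual PFS definition and its training objective (correlation with the benchmark curve rather than a plain least-squares fit) may differ in detail.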
