Article

Three-Stream Cross-Modal Feature Aggregation Network for Light Field Salient Object Detection

Journal

IEEE Signal Processing Letters
Volume 28, Pages 46-50

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/LSP.2020.3044544

Keywords

Depth map; feature aggregation; light field; saliency; salient object detection

Funding

  1. 2019 Startup Research Project for Youth Doctors, Guizhou Normal University [GZNUD[2018]32]


A three-stream cross-modal feature aggregation network is proposed for 4D light field saliency detection. It analyzes the different visual features of light field images to identify salient objects and demonstrates effectiveness and superiority over state-of-the-art methods in experiments.
Light field saliency detection can leverage the rich visual features of a light field (LF) to highlight salient regions, but existing CNN-based saliency detection methods are designed specifically for RGB images, not for light fields. To tackle this problem, a three-stream cross-modal feature aggregation network is proposed for 4D light field saliency detection. To fully exploit the rich visual features of the light field, three sub-networks are set up to analyze the focal stack, the all-focus image, and the depth map, respectively. Feature aggregation modules then aggregate cross-level features in a top-down manner. Finally, a cross-modal feature fusion module fuses the aggregated features of the various modalities from the three sub-networks, which identifies salient objects quickly and precisely. Extensive experiments on three benchmark datasets demonstrate the effectiveness and superiority of the proposed algorithm, both qualitatively and quantitatively on five evaluation metrics, compared with state-of-the-art (SOTA) methods.
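
Architecture Sketch

As an illustration of the architecture the abstract describes, below is a minimal PyTorch sketch of a three-stream network with top-down feature aggregation and a cross-modal fusion head. The backbone layers, channel widths, number of focal slices, and module internals are assumptions made for illustration only; they do not reproduce the paper's actual design.

```python
# Minimal sketch of a three-stream cross-modal feature aggregation network.
# Channel widths, the number of focal slices, and module internals are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StreamEncoder(nn.Module):
    """One sub-network: extracts multi-level features from a single modality."""

    def __init__(self, in_ch, widths=(32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.stages.append(nn.Sequential(
                nn.Conv2d(ch, w, 3, stride=2, padding=1),
                nn.BatchNorm2d(w),
                nn.ReLU(inplace=True)))
            ch = w

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # low-level to high-level features


class FeatureAggregation(nn.Module):
    """Top-down aggregation: upsample the higher-level feature and fuse it
    with the lower-level one."""

    def __init__(self, high_ch, low_ch):
        super().__init__()
        self.fuse = nn.Conv2d(high_ch + low_ch, low_ch, 3, padding=1)

    def forward(self, high, low):
        high = F.interpolate(high, size=low.shape[-2:], mode='bilinear',
                             align_corners=False)
        return F.relu(self.fuse(torch.cat([high, low], dim=1)))


class ThreeStreamNet(nn.Module):
    def __init__(self, n_slices=12, widths=(32, 64, 128)):
        super().__init__()
        # Focal-stack slices stacked along the channel axis (an assumption);
        # the all-focus image is RGB and the depth map is single-channel.
        self.focal = StreamEncoder(n_slices * 3, widths)
        self.rgb = StreamEncoder(3, widths)
        self.depth = StreamEncoder(1, widths)
        self.agg = nn.ModuleList(
            [FeatureAggregation(widths[i + 1], widths[i])
             for i in range(len(widths) - 1)])
        # Cross-modal fusion: concatenate the three aggregated streams and
        # predict a one-channel saliency map.
        self.fusion = nn.Conv2d(widths[0] * 3, 1, 3, padding=1)

    def _aggregate(self, feats):
        x = feats[-1]
        for i in reversed(range(len(feats) - 1)):
            x = self.agg[i](x, feats[i])
        return x

    def forward(self, focal_stack, all_focus, depth):
        fused = torch.cat([self._aggregate(self.focal(focal_stack)),
                           self._aggregate(self.rgb(all_focus)),
                           self._aggregate(self.depth(depth))], dim=1)
        sal = self.fusion(fused)
        return F.interpolate(sal, size=all_focus.shape[-2:],
                             mode='bilinear', align_corners=False)


# Usage on dummy inputs:
net = ThreeStreamNet()
out = net(torch.randn(1, 36, 256, 256),  # focal stack: 12 RGB slices
          torch.randn(1, 3, 256, 256),   # all-focus image
          torch.randn(1, 1, 256, 256))   # depth map
print(out.shape)  # torch.Size([1, 1, 256, 256])
```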
