Article

Adaptive Group-Wise Consistency Network for Co-Saliency Detection

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 25, Issue -, Pages 764-776

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2021.3138246

Keywords

Feature extraction; Adaptation models; Decoding; Semantics; Global communication; Aggregates; Prediction algorithms; Co-saliency detection; content-adaptive layer; group consistency; intra-saliency priors; semantic information


In this paper, we propose a novel Adaptive Group-wise Consistency Network (AGCNet) that adaptively adjusts to image groups containing arbitrary numbers of images, improving co-saliency detection performance. By introducing intra-saliency priors, an Adaptive Group-wise Consistency module, and specially designed decoders, AGCNet achieves competitive performance against state-of-the-art models on four benchmark datasets.
Co-saliency detection aims to detect common and salient objects across a group of images. With the application of deep learning to co-saliency detection, increasingly accurate and effective models have been proposed in an end-to-end manner. However, two major drawbacks hinder further performance improvement: 1) static, fixed-inference designs, and 2) the requirement of a constant number of input images. To address these limitations, we present a novel Adaptive Group-wise Consistency Network (AGCNet) capable of content-adaptive adjustment for a given image group with an arbitrary number of images. In AGCNet, we first introduce intra-saliency priors generated by any off-the-shelf salient object detection model. Then, an Adaptive Group-wise Consistency (AGC) module is proposed to capture group consistency for each individual image; it is applied to features at three scales to capture group consistency from different perspectives. This module comprises two key components: a content-adaptive group consistency block that overcomes the above limitations by adaptively capturing global group consistency with the assistance of intra-saliency priors, and a ranking-based fusion block that combines this consistency with the individual attributes of each image feature to generate discriminative group consistency features for each image. Following the AGC modules, a specially designed Aggregated Decoder aggregates the three-scale group consistency features to adapt to co-salient objects of diverse scales for preliminary detection. Finally, two standard decoders progressively refine the preliminary detection and generate the final co-saliency maps. Extensive experiments on four benchmark datasets demonstrate that AGCNet achieves competitive performance compared with 19 state-of-the-art models, and the proposed modules show substantial practical merit.
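The core idea of the content-adaptive group consistency block can be illustrated with a minimal numpy sketch. This is an assumed simplification, not the authors' implementation: each image's features are weighted by its intra-saliency prior, averaged over the whole group (so the group size N can vary freely), and each image is then scored against the resulting global consistency vector. The function name `group_consistency` and all shapes here are illustrative.

```python
import numpy as np

def group_consistency(features, priors):
    """Sketch of prior-weighted group consistency (illustrative only).

    features: (N, C, H, W) feature maps for the N images in a group
    priors:   (N, H, W) intra-saliency priors in [0, 1]

    Averages prior-masked features over the entire group and all
    spatial positions, then correlates each image's features with the
    resulting global consistency vector.
    """
    w = priors[:, None]  # (N, 1, H, W), broadcastable over channels
    # Global consistency vector: prior-weighted mean over group and space
    g = (features * w).sum(axis=(0, 2, 3)) / (w.sum() + 1e-8)  # (C,)
    g = g / (np.linalg.norm(g) + 1e-8)
    # Per-pixel cosine similarity of each image's features to the group vector
    norms = np.linalg.norm(features, axis=1, keepdims=True) + 1e-8
    maps = (features * g[None, :, None, None]).sum(axis=1, keepdims=True) / norms
    return maps  # (N, 1, H, W) consistency maps

# Key property of the design: works for any group size N
feats = np.random.rand(5, 16, 8, 8).astype(np.float32)
priors = np.random.rand(5, 8, 8).astype(np.float32)
print(group_consistency(feats, priors).shape)  # (5, 1, 8, 8)
```

Because the group dimension is reduced by a mean, the same module handles groups of any size, which is the adaptivity the paper's design targets; the actual AGC module additionally fuses the consistency with per-image attributes via its ranking-based fusion block.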
