4.7 Article

Perceptual loss guided generative adversarial network for saliency detection

Journal

INFORMATION SCIENCES
Volume 654, Issue -, Pages -

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2023.119625

Keywords

Saliency detection; Deep learning; Perceptual loss; Generative Adversarial Network

Abstract

In this work, we introduce a novel approach to saliency detection based on a generative adversarial network guided by perceptual loss. Effective saliency detection with deep learning involves intricate challenges shaped by many factors, and the choice of loss function plays a pivotal role. Previous studies usually formulate loss functions based on pixel-level distances between predicted and ground-truth saliency maps. However, these formulations do not explicitly exploit the perceptual attributes of objects, such as their shapes and textures, which serve as critical indicators of saliency. To address this deficiency, we propose a loss function that capitalizes on perceptual features derived from the saliency map. Our approach has been evaluated on six benchmark datasets, demonstrating competitive performance against state-of-the-art methods in terms of both Mean Absolute Error (MAE) and F-measure. Remarkably, our experiments reveal consistent outcomes whether the perceptual loss is computed on grayscale saliency maps or on saliency-masked colour images, underscoring the significance of shape information in shaping the perceptual saliency cues. The code is available at https://github.com/XiaoxuCai/PerGAN.
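The abstract does not spell out how the perceptual loss is computed; a common realization of such a loss (in the style of Johnson et al.) compares features from a frozen pretrained VGG network for the predicted and ground-truth saliency maps. The following PyTorch sketch illustrates that idea under explicit assumptions: the VGG-16 backbone, the relu3_3 cut-off, the L1 feature distance, and the channel replication are illustrative choices, not details taken from the paper.

    import torch.nn as nn
    from torchvision.models import vgg16

    class PerceptualLoss(nn.Module):
        """L1 distance between frozen VGG-16 features of two saliency maps (illustrative sketch)."""

        def __init__(self, cut: int = 16):
            # features[:16] ends at relu3_3; the cut-off is an assumed choice.
            super().__init__()
            self.vgg = vgg16(weights="DEFAULT").features[:cut].eval()
            for p in self.vgg.parameters():
                p.requires_grad = False  # keep the feature extractor frozen
            self.dist = nn.L1Loss()

        def forward(self, pred, target):
            # pred/target: (N, 1, H, W) saliency maps in [0, 1]. VGG expects
            # 3 channels, so the single channel is replicated (the grayscale
            # variant from the abstract); for the masked-colour variant one
            # would instead pass image * pred and image * target.
            pred3 = pred.repeat(1, 3, 1, 1)
            target3 = target.repeat(1, 3, 1, 1)
            return self.dist(self.vgg(pred3), self.vgg(target3))

    # Usage sketch: the generator's total loss would combine the adversarial
    # term with lambda * PerceptualLoss()(pred, gt); the weighting lambda is
    # not specified in this entry.

Because the extractor is frozen but not detached, gradients still flow through the VGG features into the generator's prediction, which is what lets the perceptual term guide training.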
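For reference, the two reported metrics are the standard saliency-detection measures; the definitions below are the conventional ones (with S the predicted map, G the ground truth, and beta squared customarily set to 0.3), not formulas quoted from the paper:

    \mathrm{MAE} = \frac{1}{WH} \sum_{x=1}^{W} \sum_{y=1}^{H} \bigl| S(x,y) - G(x,y) \bigr|,
    \qquad
    F_\beta = \frac{(1+\beta^2)\,\mathrm{Precision} \cdot \mathrm{Recall}}{\beta^2\,\mathrm{Precision} + \mathrm{Recall}}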

