4.5 Article

Edge Distraction-aware Salient Object Detection

Journal

IEEE MULTIMEDIA
Volume 30, Issue 3, Pages 63-73

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/MMUL.2023.3235936

Keywords

Feature extraction; Image edge detection; Object detection; Visualization; Filling; Task analysis; Convolution

Abstract

Integrating low-level edge features has been proven effective in preserving clear boundaries of salient objects. However, the locality of edge features makes it difficult to capture globally salient edges, leading to distraction in the final predictions. To address this problem, we propose to produce distraction-free edge features by incorporating cross-scale holistic interdependencies between high-level features. In particular, we first formulate our edge feature extraction process as a boundary-filling problem. In this way, we enforce edge features to focus on closed boundaries instead of disconnected background edges. Second, we propose to explore cross-scale holistic contextual connections between every position pair of high-level feature maps, regardless of their distances across different scales. Features at each position are selectively aggregated based on their connections to all the others, simulating the contrast stimulus of visual saliency. Finally, we present a complementary features integration module to fuse low- and high-level features according to their properties. Experimental results demonstrate that our proposed method outperforms previous state-of-the-art methods on benchmark datasets, with a fast inference speed of 30 FPS on a single GPU.
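
The cross-scale holistic contextual connection described above amounts to a non-local, attention-style aggregation over every position pair of the high-level feature maps. The paper's reference implementation is not reproduced here; the PyTorch sketch below is only an illustration of that idea under assumptions. The module name CrossScaleContext, the 1x1 projections, and the reduced channel size are hypothetical choices, not the authors' exact design.

# Illustrative sketch (not the authors' code): non-local, cross-scale
# contextual aggregation over every position pair of two high-level
# feature maps at different resolutions. Names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossScaleContext(nn.Module):
    """Reweight each position's features by its affinity to all positions of both scales."""

    def __init__(self, channels: int, reduced: int = 64):
        super().__init__()
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a: (B, C, Ha, Wa) high-level features at one scale
        # feat_b: (B, C, Hb, Wb) high-level features at another scale
        b, c, ha, wa = feat_a.shape

        def flatten(x, proj):
            # Project with a 1x1 conv, then flatten spatial positions.
            return proj(x).flatten(2)        # (B, C', H*W)

        # Keys/values gather positions from BOTH scales, so every query
        # position can attend to every other position, regardless of
        # distance or scale.
        q = flatten(feat_a, self.query)                        # (B, R, Na)
        k = torch.cat([flatten(feat_a, self.key),
                       flatten(feat_b, self.key)], dim=2)      # (B, R, Na+Nb)
        v = torch.cat([flatten(feat_a, self.value),
                       flatten(feat_b, self.value)], dim=2)    # (B, C, Na+Nb)

        # Affinity between every position pair, normalized per query.
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (B, Na, Na+Nb)

        # Selectively aggregate features at each position based on its
        # connections to all the others (contrast-like weighting).
        ctx = torch.bmm(v, attn.transpose(1, 2)).view(b, c, ha, wa)
        return feat_a + self.out(ctx)


if __name__ == "__main__":
    # Toy usage with two feature maps at different resolutions.
    m = CrossScaleContext(channels=256)
    hi = torch.randn(1, 256, 16, 16)
    lo = torch.randn(1, 256, 8, 8)
    print(m(hi, lo).shape)  # torch.Size([1, 256, 16, 16])

In this sketch the residual connection and the softmax-normalized affinity follow common non-local attention practice; how the actual method normalizes affinities and fuses the aggregated context is described only at the level of the abstract above.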
