Journal: ACM Transactions on Multimedia Computing, Communications, and Applications
Volume 19, Issue 2
Publisher: Association for Computing Machinery
DOI: 10.1145/3545609
Keywords: Camouflaged object detection; frequency learning
Camouflaged object detection (COD) is important because of its many potential applications. Unlike salient object detection (SOD), which aims to identify visually salient objects, COD aims to detect objects that are visually very similar to their surrounding background. We observe that recent COD methods fuse features from different levels using context aggregation strategies originally developed for SOD. Such strategies, however, may not suit COD: they excel at detecting distinctive objects while weakening the features of less discriminative ones. To address this problem, we propose FBNet, a frequency-based method for camouflaged object detection that exploits frequency learning to suppress confusing high-frequency texture information and thereby help separate camouflaged objects from their background. Specifically, we design a frequency-aware context aggregation (FACA) module to suppress high-frequency information and aggregate multi-scale features from a frequency perspective, an adaptive frequency attention (AFA) module to enhance the features of the learned important frequency components, and a gradient-weighted loss function that guides the method to pay more attention to contour details. Experimental results show that our model outperforms relevant state-of-the-art methods.
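The idea of suppressing high-frequency components can be illustrated with a minimal FFT-based low-pass filter over a 2D feature map. This is a generic sketch, not the paper's FACA module; the function name `lowpass_filter` and the `cutoff_ratio` parameter are hypothetical.

```python
import numpy as np

def lowpass_filter(feature, cutoff_ratio=0.25):
    """Suppress high-frequency components of a 2D map.

    Shift the spectrum so low frequencies sit at the centre, zero out
    everything outside a centred rectangle whose half-size is
    cutoff_ratio times the map size, then transform back.
    A generic sketch, not the paper's FACA module.
    """
    h, w = feature.shape
    spectrum = np.fft.fftshift(np.fft.fft2(feature))
    mask = np.zeros((h, w))
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * cutoff_ratio))
    rw = max(1, int(w * cutoff_ratio))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# A checkerboard pattern lives at the highest (Nyquist) frequency and
# is removed; a constant map is pure DC and passes through unchanged.
x, y = np.meshgrid(np.arange(32), np.arange(32))
checker = ((x + y) % 2).astype(float) - 0.5
flat = np.ones((32, 32))
print(np.abs(lowpass_filter(checker, 0.25)).max())  # near 0
print(lowpass_filter(flat, 0.25).max())             # near 1
```

In a network, a mask like this (or a learned soft mask) would be applied per channel to intermediate feature maps before aggregation.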
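The gradient-weighted loss can likewise be sketched as a binary cross-entropy whose per-pixel weight grows with the spatial gradient magnitude of the ground-truth mask, so contour pixels contribute more. The function `gradient_weighted_bce` and its `alpha` weighting below are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def gradient_weighted_bce(pred, gt, alpha=5.0, eps=1e-7):
    """Hypothetical gradient-weighted binary cross-entropy.

    Pixels near the mask contour (large spatial gradient of the
    ground truth) receive a larger weight, emphasising contour
    details. Not necessarily the paper's exact loss.
    """
    gy, gx = np.gradient(gt.astype(float))
    edge = np.sqrt(gx ** 2 + gy ** 2)
    weight = 1.0 + alpha * edge / (edge.max() + eps)
    p = np.clip(pred, eps, 1 - eps)
    bce = -(gt * np.log(p) + (1 - gt) * np.log(1 - p))
    return (weight * bce).sum() / weight.sum()

# Ground truth: an 8x8 square; a uniform 0.5 prediction gives a
# constant per-pixel BCE, so the weighted mean equals -log(0.5).
gt = np.zeros((16, 16))
gt[4:12, 4:12] = 1.0
pred = np.full_like(gt, 0.5)
print(gradient_weighted_bce(pred, gt))  # ≈ 0.6931 (= -log 0.5)
```

With a non-uniform prediction, errors on contour pixels are penalised more than the same errors in the object interior or far background.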