4.6 Article

Global context guided hierarchically residual feature refinement network for defocus blur detection

Journal

SIGNAL PROCESSING
Volume 183

Publisher

ELSEVIER
DOI: 10.1016/j.sigpro.2021.107996

Keywords

Defocus blur detection; Feature aggregation; Global context information; Feature fusion

Funding

  1. National Natural Science Foundation of China [62076228, 61701451]
  2. Natural Science Foundation of Hubei Province [2020CFB644]
  3. Key Laboratory of Information Perception and Systems for Public Security of MIIT (Nanjing University of Science and Technology) [202007]

Abstract

This study introduces a global context guided hierarchically residual feature refinement network for defocus blur detection, which improves the final detection performance by aggregating different feature information and utilizing methods such as multi-scale dilation convolution. Extensive experiments validate the effectiveness of the proposed network compared to other state-of-the-art methods in terms of both efficiency and accuracy.
As an important pre-processing step, defocus blur detection plays a critical role in various computer vision tasks. However, previous methods cannot obtain satisfactory results due to complex image background clutter, scale sensitivity and the loss of region boundary details. In this paper, to address these issues, we introduce a global context guided hierarchically residual feature refinement network (HRFRNet) for defocus blur detection from a natural image. In our network, low-level fine detail features, high-level semantic information and global context information are aggregated in a hierarchical manner to boost the final detection performance. To reduce the effect of complex background clutter and of smooth regions lacking texture on the final results, we design a multi-scale dilation convolution based global context pooling module to capture global context information from the deepest feature layer of the backbone feature extraction network. A global context guiding module then injects this global context information into the different feature refining stages to guide the refinement process. In addition, considering that defocus blur is sensitive to image scale, we add a deep features guided fusion module that integrates the outputs of the different stages to generate the final score map. Extensive experiments with ablation studies on two commonly used datasets validate the superiority of our proposed network over 11 other state-of-the-art methods in terms of both efficiency and accuracy. (C) 2021 Elsevier B.V. All rights reserved.
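The multi-scale dilation convolution based global context pooling module described in the abstract can be sketched roughly as follows. This is a minimal PyTorch sketch, assuming an arrangement of parallel dilated 3x3 convolutions plus an image-level average-pooling branch over the deepest backbone feature map; the channel counts (512 in, 128 out), dilation rates (1, 2, 4, 8) and fusion by concatenation are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only: channel sizes, dilation rates and the fusion
# strategy are assumptions; the paper's exact module may differ.
import torch
import torch.nn as nn


class GlobalContextPooling(nn.Module):
    """Aggregates context from the deepest backbone feature map using
    parallel dilated convolutions plus a global average-pooling branch."""

    def __init__(self, in_channels=512, out_channels=128,
                 dilations=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 branch per dilation rate to enlarge the receptive field
        # without reducing spatial resolution.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Image-level branch: global average pooling followed by 1x1 conv.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
        )
        # Fuse all branches into a single global context descriptor.
        self.fuse = nn.Sequential(
            nn.Conv2d(out_channels * (len(dilations) + 1), out_channels,
                      kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [branch(x) for branch in self.branches]
        g = self.global_branch(x)
        # Broadcast the pooled global descriptor back to the feature map size.
        feats.append(nn.functional.interpolate(
            g, size=(h, w), mode='bilinear', align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # Assume the deepest backbone stage yields 512-channel, 20x20 features.
    ctx = GlobalContextPooling()(torch.randn(1, 512, 20, 20))
    print(ctx.shape)  # torch.Size([1, 128, 20, 20])
```

In a guiding scheme like the one the abstract outlines, this context tensor would then be fed into each feature refining stage (for example, upsampled and concatenated or multiplied with the stage features) so that refinement is conditioned on global scene information rather than local texture alone.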
