Article

Depth-Distilled Multi-Focus Image Fusion

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 25, Pages 966-978

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TMM.2021.3134565

Keywords

Image fusion; Feature extraction; Lenses; Task analysis; Testing; Cameras; Adaptation models; Depth distillation; multi-focus image fusion; multi-level decision map fusion

This paper proposes a new depth-distilled multi-focus image fusion framework (D2MFIF) that gradually improves fusion performance by adaptively transferring depth knowledge and integrating multi-level decision maps.
Homogeneous regions are smooth areas that lack the blur cues needed to discriminate between focused and defocused content, and they therefore pose a great challenge to accurate multi-focus image fusion (MFIF). Fortunately, we observe that depth maps are highly related to focus and defocus and carry considerable discriminative power for locating homogeneous regions. This offers the potential to provide additional depth cues that assist the MFIF task. Taking depth cues into consideration, we propose a new depth-distilled multi-focus image fusion framework, namely D2MFIF. In D2MFIF, a depth-distilled model (DDM) is designed to adaptively transfer depth knowledge into the MFIF task, gradually improving MFIF performance. Moreover, a multi-level fusion mechanism is designed to integrate multi-level decision maps from intermediate outputs to improve the final prediction. Visual and quantitative experimental results demonstrate the superiority of our method over several state-of-the-art methods.
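The following is a minimal PyTorch-style sketch of the multi-level decision-map idea described in the abstract: intermediate features each predict a focus decision map, the maps are merged into a final map, and that map blends the two source images. It is not the authors' implementation; all module names (DecisionHead, MultiLevelFusion), channel counts, and the simple averaging of intermediate maps are illustrative assumptions, and the depth-distillation component (DDM) that would additionally guide the decision maps is omitted.

```python
# Hypothetical sketch of multi-level decision-map fusion for a pair of
# multi-focus images. Names and design choices are assumptions for
# illustration, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecisionHead(nn.Module):
    """Predicts a single-channel focus decision map from a feature map."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)

    def forward(self, feat):
        return torch.sigmoid(self.conv(feat))


class MultiLevelFusion(nn.Module):
    """Toy encoder whose intermediate decision maps are merged into one
    final decision map used to fuse the two source images."""
    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        blocks, heads, in_ch = [], [], 2  # two grayscale sources stacked
        for out_ch in channels:
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.ReLU(inplace=True)))
            heads.append(DecisionHead(out_ch))
            in_ch = out_ch
        self.blocks = nn.ModuleList(blocks)
        self.heads = nn.ModuleList(heads)

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)
        maps = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            # Intermediate decision map, upsampled to input resolution.
            maps.append(F.interpolate(head(x), size=img_a.shape[-2:],
                                      mode="bilinear", align_corners=False))
        # Integrate the multi-level decision maps (here: plain averaging).
        decision = torch.stack(maps, dim=0).mean(dim=0)
        fused = decision * img_a + (1.0 - decision) * img_b
        return fused, decision


if __name__ == "__main__":
    a = torch.rand(1, 1, 128, 128)  # near-focused source
    b = torch.rand(1, 1, 128, 128)  # far-focused source
    fused, dmap = MultiLevelFusion()(a, b)
    print(fused.shape, dmap.shape)
```

In the full framework described by the abstract, the intermediate decision maps would additionally be supervised by depth knowledge distilled through the DDM rather than merged by a fixed average.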
