3.8 Proceedings Paper

XYDeblur: Divide and Conquer for Single Image Deblurring

Publisher

IEEE Computer Society
DOI: 10.1109/CVPR52688.2022.01690

Keywords

-

Funding

  1. Institute of Information & communications Technology Promotion (IITP) - Korea government (MSIT) [2014-3-00077]
  2. National Research Foundation of Korea (NRF) - Korea government (MSIT) [2020R1A4A4079705]

Abstract

This paper proposes a network architecture with one encoder and two decoders for single image deblurring. Observing that the multiple decoders successfully decompose the encoded feature information into directional components, the authors further improve network efficiency and deblurring performance by rotating and sharing the convolution kernels used in the decoders. The resulting network outperforms U-Net without increasing the number of network parameters.
Many convolutional neural networks (CNNs) for single image deblurring employ a U-Net structure to estimate the latent sharp image. Although this single-lane encoder-decoder architecture has long proven effective for image restoration, it overlooks a key characteristic of deblurring: a blurry image is generated by complicated blur kernels caused by tangled motions. Toward an effective network architecture for single image deblurring, we present complemental sub-solution learning with a one-encoder-two-decoder architecture. Observing that the multiple decoders successfully learn to decompose the encoded feature information into directional components, we further improve both network efficiency and deblurring performance by rotating and sharing the kernels exploited in the decoders, which prevents the decoders from separating unnecessary components such as color shift. As a result, the proposed network shows superior results compared to U-Net while preserving the number of network parameters, and using it as a base network improves the performance of existing state-of-the-art deblurring networks.
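The abstract describes the architecture only at a high level. The sketch below illustrates the one-encoder-two-decoder idea in PyTorch: both decoders share a single set of kernels, and the second decoder applies them rotated by 90 degrees so that the two outputs act as complementary directional sub-solutions. The layer counts, channel widths, residual connection, and the name XYDeblurSketch are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a one-encoder-two-decoder deblurring network with
# shared, 90-degree-rotated decoder kernels. Layer sizes and the residual
# connection are hypothetical, chosen only to keep the example small.
import torch
import torch.nn as nn
import torch.nn.functional as F


class XYDeblurSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Shared encoder: two strided convolutions.
        self.enc1 = nn.Conv2d(3, ch, 3, stride=2, padding=1)
        self.enc2 = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)
        # Decoder kernels are defined once and shared by both decoders;
        # the second decoder reuses them rotated by 90 degrees.
        self.dec1_w = nn.Parameter(torch.randn(ch * 2, ch, 3, 3) * 0.01)
        self.dec2_w = nn.Parameter(torch.randn(ch, 3, 3, 3) * 0.01)

    def _decode(self, feat, rotate):
        # Optionally rotate the shared kernels in the spatial dims (2, 3).
        w1 = torch.rot90(self.dec1_w, 1, dims=(2, 3)) if rotate else self.dec1_w
        w2 = torch.rot90(self.dec2_w, 1, dims=(2, 3)) if rotate else self.dec2_w
        x = F.relu(F.conv_transpose2d(feat, w1, stride=2, padding=1, output_padding=1))
        return F.conv_transpose2d(x, w2, stride=2, padding=1, output_padding=1)

    def forward(self, blurry):
        feat = F.relu(self.enc2(F.relu(self.enc1(blurry))))
        # Each decoder recovers one directional sub-solution; their sum,
        # added to the blurry input as a residual, gives the sharp estimate.
        res_a = self._decode(feat, rotate=False)
        res_b = self._decode(feat, rotate=True)
        return blurry + res_a + res_b


if __name__ == "__main__":
    net = XYDeblurSketch()
    out = net(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Because the second decoder only rotates the shared weights rather than learning its own, the two-decoder design adds essentially no parameters over a single-decoder baseline, which is the efficiency argument made in the abstract.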


Reviews

Primary rating: 3.8 (insufficient ratings)

Secondary ratings
Novelty: -
Significance: -
Scientific rigor: -