Journal
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 32, Pages 5595-5609
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2023.3321515
Keywords
Deep learning; disentanglement representation; image deblurring; image blurring; scale-recurrent
The article introduces a new framework for motion deblurring comprising a Blur Space Disentangled Network (BSDNet) and a Hierarchical Scale-recurrent Deblurring Network (HSDNet). It addresses the gap between synthetic training data and complex real-world blur, and achieves state-of-the-art performance by decomposing the non-uniform deblurring task into simpler subtasks.
Deep learning (DL) based methods for motion deblurring, taking advantage of large-scale datasets and sophisticated network structures, have reported promising results. However, two challenges remain: existing methods usually perform well on synthetic datasets but struggle with complex real-world blur, and over- or under-estimation of the blur yields restored images that remain blurred or even contain unwanted distortion. We propose a motion deblurring framework that includes a Blur Space Disentangled Network (BSDNet) and a Hierarchical Scale-recurrent Deblurring Network (HSDNet) to address these issues. Specifically, we train an image blurring model to facilitate learning a better image deblurring model. First, BSDNet learns to separate blur features from blurry images, which makes it suitable for blur transfer, dataset augmentation, and ultimately guiding the deblurring model. Second, to gradually recover sharp information in a coarse-to-fine manner, HSDNet makes full use of the blur features acquired by BSDNet as a prior and breaks the non-uniform deblurring task down into several subtasks. Moreover, the motion blur dataset created by BSDNet also bridges the gap between training images and real-world blur. Extensive experiments on real-world blur datasets demonstrate that our method works effectively in complex scenarios, significantly outperforming many state-of-the-art approaches.
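The coarse-to-fine, scale-recurrent strategy described in the abstract can be illustrated with a minimal sketch: the blurry image is reduced to a pyramid of scales, deblurring starts at the coarsest (easiest) scale, and each finer scale reuses the upsampled previous estimate as guidance, conditioned on a blur prior. All names here (`deblur_at_scale`, `blur_prior`, `hierarchical_deblur`) are illustrative assumptions, not the paper's actual API; the real HSDNet sub-networks are learned models, stubbed below by a simple blend.

```python
def downsample(img, factor):
    """Average-pool a 2D list of floats by an integer factor."""
    h, w = len(img), len(img[0])
    return [
        [
            sum(img[y * factor + dy][x * factor + dx]
                for dy in range(factor) for dx in range(factor)) / factor ** 2
            for x in range(w // factor)
        ]
        for y in range(h // factor)
    ]

def upsample(img, factor):
    """Nearest-neighbour upsampling by an integer factor."""
    return [
        [img[y // factor][x // factor] for x in range(len(img[0]) * factor)]
        for y in range(len(img) * factor)
    ]

def deblur_at_scale(blurry, guidance, blur_prior):
    """Placeholder for one per-scale sub-network (hypothetical): blends the
    current-scale input with the coarse guidance. In the paper this would be
    a learned network conditioned on BSDNet's disentangled blur features."""
    return [
        [(1 - blur_prior) * b + blur_prior * g for b, g in zip(rb, rg)]
        for rb, rg in zip(blurry, guidance)
    ]

def hierarchical_deblur(blurry, num_scales=3, blur_prior=0.5):
    """Coarse-to-fine loop: solve low-resolution subtasks first, then
    propagate each estimate up as guidance for the next finer scale."""
    pyramid = [blurry]
    for _ in range(num_scales - 1):
        pyramid.append(downsample(pyramid[-1], 2))
    estimate = pyramid[-1]                    # start at the coarsest scale
    for level in reversed(range(num_scales - 1)):
        guidance = upsample(estimate, 2)      # lift the coarse result
        estimate = deblur_at_scale(pyramid[level], guidance, blur_prior)
    return estimate
```

The decomposition is the key point: each sub-task operates at a resolution where the residual blur is smaller, so the full non-uniform problem is never solved in one step.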