Journal
NEUROCOMPUTING
Volume 423, Pages 590-600
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2020.10.065
Keywords
Damaged photographs restoration; Deep learning; GAN
Funding
- National Natural Science Foundation of China [61471120]
- Natural Science Foundation of Hunan Province [2020JJ4745]
This article studies an efficient deep learning architecture for restoring damaged character photographs, proposing a new generative adversarial network (GAN) architecture for the task. Using a residual U-Net (ResU-Net) GAN and a ResU-Net conditional GAN, together with a weighted multi-features loss function, the approach restores spots, creases, cracks, and other forms of damage in damaged character photographs.
Recently, deep learning has been applied to many image restoration tasks. In this work, we focus on studying an efficient deep learning architecture to restore damaged character photographs (DCPs), which are spoiled by natural or human factors including creases, spots, cracks, light, etc. A large amount of work has focused on image restoration tasks such as super-resolution, image inpainting, image deblurring, and image denoising. However, few studies address restoring DCPs with deep learning, since DCPs are varied and complex and paired training datasets are difficult to obtain. In this work, we propose a new generative adversarial network (GAN) architecture to restore DCPs. Specifically, a residual U-Net (ResU-Net) GAN (RUGAN) is first constructed to generate fake DCPs from real DCPs, clear character photographs (CCPs), and dirty masks. Then, a ResU-Net conditional GAN (RUCGAN) is built to restore DCPs by exploiting paired CCPs and fake DCPs. To further improve the quality of restored character photographs, a weighted multi-features loss function is adopted in RUCGAN. Finally, numerical results show that our approach can restore spots, creases, cracks, and other forms of damage in DCPs. (c) 2020 Elsevier B.V. All rights reserved.
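The abstract does not specify the terms of the weighted multi-features loss. As a minimal illustrative sketch only (not the paper's actual formulation), a loss of this kind is often a weighted sum of a pixel-wise L1 term, an adversarial term, and a feature-matching term; the component choices and weights `w_pix`, `w_adv`, `w_feat` below are hypothetical:

```python
import numpy as np

def weighted_multi_feature_loss(restored, target, d_fake_logits,
                                feat_restored, feat_target,
                                w_pix=1.0, w_adv=0.01, w_feat=0.1):
    """Hypothetical weighted multi-feature loss for a restoration generator.

    Combines a pixel-wise L1 term, a non-saturating adversarial term,
    and a feature-matching term; all weights are illustrative defaults.
    """
    # Pixel-wise L1 reconstruction loss between restored and clean images.
    l_pix = np.mean(np.abs(restored - target))
    # Non-saturating GAN generator loss: mean of -log(sigmoid(D(G(x)))),
    # written via log1p(exp(-logits)) for numerical stability.
    l_adv = np.mean(np.log1p(np.exp(-d_fake_logits)))
    # L2 distance between intermediate feature maps of a fixed network
    # (e.g. features extracted from restored vs. target images).
    l_feat = np.mean((feat_restored - feat_target) ** 2)
    return w_pix * l_pix + w_adv * l_adv + w_feat * l_feat
```

In a training loop, `restored` would be the RUCGAN output for a fake DCP, `target` its paired CCP, and `d_fake_logits` the discriminator's output on the restored image; weighting lets the pixel term dominate while the adversarial and feature terms sharpen texture.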