Journal
SENSORS
Volume 23, Issue 9, Pages -
Publisher
MDPI
DOI: 10.3390/s23094528
Keywords
deep learning; photorealistic; style transfer; deep layer aggregation
This paper introduces a deep learning approach to photorealistic universal style transfer that extends the PhotoNet network architecture by adding extra feature-aggregation modules. Given a pair of images representing the content and the style reference, we augment the state-of-the-art solution mentioned above with deeper aggregation to better fuse content and style information across the decoding layers. As opposed to the more flexible implementation of PhotoNet (i.e., PhotoNAS), which targets the minimization of inference time, our method aims to achieve better image reconstruction and a more pleasant stylization. We propose several deep layer aggregation architectures to be used as wrappers over PhotoNet, enhancing the stylization and quality of the output image.
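The core aggregation idea can be illustrated with a minimal sketch (an assumption for illustration, not the authors' implementation): a single aggregation node fuses a content-derived feature map with a style-derived one via channel-wise concatenation followed by a learned 1x1 convolution, shown here in NumPy.

```python
import numpy as np

def aggregate(content_feat, style_feat, weights):
    """Fuse two (C, H, W) feature maps into one via a 1x1 convolution.

    Channel-wise concatenation followed by a learned linear channel mix,
    mirroring one deep-layer-aggregation node (illustrative sketch only).
    """
    x = np.concatenate([content_feat, style_feat], axis=0)   # (2C, H, W)
    c2, h, w = x.shape
    # A 1x1 convolution is a matrix multiply over the channel dimension.
    out = weights @ x.reshape(c2, h * w)                     # (C, H*W)
    return out.reshape(-1, h, w)

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8                       # hypothetical feature-map size
content = rng.standard_normal((C, H, W))
style = rng.standard_normal((C, H, W))
weights = rng.standard_normal((C, 2 * C)) / np.sqrt(2 * C)  # (C, 2C) mix

fused = aggregate(content, style, weights)
print(fused.shape)
```

In the paper's architectures, such nodes would be stacked across decoder levels so that each output mixes increasingly deep combinations of content and style features, rather than a single skip connection.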