Journal
COMPUTERS & ELECTRICAL ENGINEERING
Volume 85, Issue -, Pages -
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compeleceng.2020.106655
Keywords
Deep convolutional neural networks; Portrait; Style transfer; Facial segmentation
Funding
- National Natural Science Foundation of China [61733004, 61503128, 61602402, 61573133]
- Science and Technology Plan Project of Hunan Province [2016TP1020]
- Scientific Research Fund of Hunan Provincial Education Department [18A333]
- Hengyang guided science and technology projects and Application-oriented Special Disciplines [Hengkefa [2018]60-31]
- Double First-Class University Project of Hunan Province [Xiangjiaotong [2018]469]
- Scientific and Technological Development Project of Hengyang City [60-31]
- Postgraduate Scientific Research Innovation Project of Hunan Province [CX20190998]
- Degree & Postgraduate Education Reform Project of Hunan Province [2019JGYB266]
- Postgraduate Teaching Platform Project of Hunan Province [Xiangjiaotong [2019]370-321]
When standard neural style transfer approaches are applied to portraits, they often transfer textures and colours from the wrong regions of the style portrait onto the content portrait, producing unsatisfactory results. This paper presents a portrait style transfer method that transfers the style of one portrait image to another. It first proposes a combined segmentation method that automatically segments both the style portrait and the content portrait into masks of seven parts: background, face, eyes, nose, eyebrows, mouth and foreground. These masks capture the style elements of corresponding objects in the style image and preserve the structure of the content portrait. The paper then proposes an augmented deep Convolutional Neural Network (CNN) framework for portrait style transfer, in which the seven part masks are injected into a trained deep CNN as additional feature maps at selected layers. An improved loss function is proposed for training the portrait style transfer model. Results on various images show that the method outperforms state-of-the-art style transfer techniques. (C) 2020 Elsevier Ltd. All rights reserved.
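The core idea of region-guided style transfer as described in the abstract can be sketched with a per-region style loss: a Gram matrix is computed over CNN features restricted to each segmentation mask, and the loss sums the Gram differences between matching regions of the style and content portraits. The sketch below is a minimal NumPy illustration, not the paper's implementation; the function names (`masked_gram`, `region_style_loss`), the normalisation, and the use of raw arrays in place of real CNN feature maps are all assumptions for illustration.

```python
import numpy as np

def masked_gram(features, mask):
    """Gram matrix of feature maps restricted to one segmentation mask.

    features: (C, H, W) array standing in for CNN feature maps at one layer.
    mask:     (H, W) binary mask for one of the seven portrait parts
              (background, face, eyes, nose, eyebrows, mouth, foreground).
    The mask is broadcast over channels, zeroing activations outside the
    region before the Gram matrix is formed.
    """
    c, h, w = features.shape
    f = (features * mask).reshape(c, -1)   # keep only in-region activations
    n = mask.sum() * c + 1e-8              # normalise by region size (assumed choice)
    return f @ f.T / n

def region_style_loss(content_feats, style_feats, content_masks, style_masks):
    """Sum of squared Gram differences over matching part masks."""
    loss = 0.0
    for cm, sm in zip(content_masks, style_masks):
        g_c = masked_gram(content_feats, cm)
        g_s = masked_gram(style_feats, sm)
        loss += np.sum((g_c - g_s) ** 2)
    return loss
```

Matching each content-region Gram matrix only against the corresponding style-region Gram matrix is what prevents, for example, background texture from bleeding into the face, which is the failure mode of unmasked style transfer that the abstract identifies.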