Journal
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021
Pages 124-133
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00019
Funding
- NSF [IIS-1924937, IIS-2041009]
Research shows that correlations between features from a pre-trained VGG network capture visual style well, but stylization quality degrades when the same approach is applied to ResNet features. Applying a softmax transformation to ResNet feature activations raises their entropy and significantly improves stylization quality, suggesting that the architecture used for feature extraction matters more than the learned weights for style transfer.
Extensive research in neural style transfer methods has shown that the correlation between features extracted by a pre-trained VGG network has a remarkable ability to capture the visual style of an image. Surprisingly, however, this stylization quality is not robust and often degrades significantly when applied to features from more advanced and lightweight networks, such as those in the ResNet family. By performing extensive experiments with different network architectures, we find that residual connections, which represent the main architectural difference between VGG and ResNet, produce feature maps of small entropy, which are not suitable for style transfer. To improve the robustness of the ResNet architecture, we then propose a simple yet effective solution based on a softmax transformation of the feature activations that enhances their entropy. Experimental results demonstrate that this small magic can greatly improve the quality of stylization results, even for networks with random weights. This suggests that the architecture used for feature extraction is more important than the use of learned weights for the task of style transfer.
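The entropy argument can be illustrated with a small numpy sketch. This is not the authors' implementation: the feature shape, peak magnitudes, and background level below are illustrative assumptions. Treating a flattened post-ReLU feature map as a distribution over spatial positions, a few dominant peaks give the raw activations low entropy, while the softmax-transformed activations are much closer to uniform, because exp of a near-zero activation is close to 1 and so spreads probability mass across all positions:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D activation vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothetical flattened post-ReLU feature map: a near-zero background
# with a few large peaks -- the low-entropy pattern the paper attributes
# to residual connections. (Values are illustrative, not measured.)
acts = np.full(1024, 1e-3)
acts[:3] = 3.0

p_raw = acts / acts.sum()  # peaks take ~90% of the mass -> low entropy
p_soft = softmax(acts)     # exp(~0) = 1 spreads mass -> high entropy

h_raw, h_soft = entropy(p_raw), entropy(p_soft)
print(f"entropy raw: {h_raw:.2f}, softmax: {h_soft:.2f}")
# h_soft lands far closer to the uniform maximum ln(1024) ~ 6.93
```

In a style-transfer pipeline, these smoothed activations would then stand in for the raw ones when computing the feature correlations (Gram matrices) used by the style loss; the sketch above only verifies the entropy effect.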