Article

End-to-end learning for arbitrary image style transfer

Journal

ELECTRONICS LETTERS
Volume 54, Issue 22, Pages 1276-1277

Publisher

Institution of Engineering and Technology (IET)
DOI: 10.1049/el.2018.6497

Keywords

encoding; image classification; optimisation; stylised images; real-time arbitrary style transfer; feed-forward network; pre-trained encoder; trainable decoder; style quality; output image; image classification task; end-to-end learning scheme; fixed encoder; arbitrary image style transfer

Funding

  1. Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Science, ICT & Future Planning [NRF-2018R1C1B6004056]

Abstract

Real-time arbitrary style transfer is based on a feed-forward network that consists of a pre-trained encoder, a feature transformer, and a trainable decoder. However, the previous approach degrades the style quality of the output image because the pre-trained encoder is optimised not for image style transfer but for an image classification task. An end-to-end learning scheme is introduced that optimises the encoder as well as the decoder for the task of arbitrary image style transfer. Experiments conducted on a public database show that the style transfer network trained with the end-to-end learning scheme outperforms the network with a fixed encoder, both in minimising the content and style losses and in the quality of the stylised images.
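To make the scheme concrete, the following is a minimal PyTorch sketch of the encoder/feature-transformer/decoder pipeline in which gradients from the combined content and style losses flow into the encoder as well as the decoder. It is not the authors' implementation: the small convolutional encoder and decoder, the AdaIN-style feature transformer, the single-layer losses, and the loss weighting are all illustrative assumptions standing in for whatever the paper actually uses.

```python
# Sketch of end-to-end training for arbitrary style transfer.
# The encoder, transformer, and decoder are illustrative placeholders;
# layer sizes, the AdaIN transform, and the loss weight are assumptions,
# not the architecture described in the letter.
import torch
import torch.nn as nn
import torch.nn.functional as F


def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalisation: rescale content features to match
    the channel-wise mean/std of the style features."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean


# Small convolutional encoder/decoder pair (placeholder architecture).
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(64, 3, 3, padding=1),
)

# End-to-end: the optimiser updates the encoder as well as the decoder,
# unlike the fixed-encoder baseline.
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)


def train_step(content, style, style_weight=10.0):
    fc, fs = encoder(content), encoder(style)
    t = adain(fc, fs)                  # feature transformer
    stylised = decoder(t)              # reconstruct the stylised image
    ft = encoder(stylised)             # re-encode to evaluate the losses
    content_loss = F.mse_loss(ft, t)
    style_loss = (F.mse_loss(ft.mean(dim=(2, 3)), fs.mean(dim=(2, 3))) +
                  F.mse_loss(ft.std(dim=(2, 3)), fs.std(dim=(2, 3))))
    loss = content_loss + style_weight * style_loss
    optimizer.zero_grad()
    loss.backward()                    # gradients also reach the encoder
    optimizer.step()
    return loss.item()


# Example: one training step on random tensors in place of a real dataset.
loss = train_step(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```

The only change needed to recover the fixed-encoder baseline in this sketch is to drop `encoder.parameters()` from the optimiser (and freeze the encoder), which is what the end-to-end scheme is compared against.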


