Proceedings Paper

Deep Preset: Blending and Retouching Photos with Color Style Transfer


This work focuses on learning low-level image transformation, especially color-shifting methods, and introduces a novel supervised approach for color style transfer. Experimental results demonstrate that Deep Preset outperforms previous works in color style transfer both quantitatively and qualitatively.
End-users without knowledge of photography often wish to beautify their photos so that they share the color style of a well-retouched reference. However, recent image style transfer methods tend to overreach: they frequently synthesize undesirable results by transferring exact colors to the wrong destinations, and the artifacts become even more noticeable in sensitive cases such as portraits. In this work, we concentrate on learning low-level image transformation, especially color-shifting methods, rather than matching contextual features, and present a novel supervised approach for color style transfer. Furthermore, we propose a color style transfer network named Deep Preset, designed to 1) generalize the features representing the color transformation from content with natural colors to a retouched reference, then blend them into the contextual features of the content, 2) predict the hyper-parameters (settings, or preset) of the applied low-level color transformation methods, and 3) stylize the content image to have a color style similar to the reference. We script Lightroom, a powerful photo-editing tool, to generate 600,000 training samples using 1,200 images from the Flickr2K dataset and 500 user-generated presets with 69 settings. Experimental results show that Deep Preset outperforms previous works in color style transfer both quantitatively and qualitatively. Our work is available at https://minhmanho.github.io/deep_preset/.
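To make the described design concrete, below is a minimal PyTorch sketch of the two-branch idea from the abstract: one branch encodes the (content, reference) pair into a global code describing the color transformation and predicts the 69 preset settings, while the other encodes the content's contextual features and a decoder blends the two to produce the stylized image. All module names, layer sizes, and the blending strategy here are illustrative assumptions, not the authors' released implementation (see the project page for that).

```python
# Minimal sketch (assumed architecture, not the official Deep Preset code).
import torch
import torch.nn as nn


def conv_block(c_in, c_out, stride=2):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )


class DeepPresetSketch(nn.Module):
    def __init__(self, num_settings=69):
        super().__init__()
        # Encoder over the (content, reference) pair: captures the color
        # transformation from natural colors to the retouched look.
        self.transform_encoder = nn.Sequential(
            conv_block(6, 32), conv_block(32, 64), conv_block(64, 128),
            nn.AdaptiveAvgPool2d(1),
        )
        # Head predicting the hyper-parameters (preset settings) of the
        # applied low-level color transformation methods.
        self.preset_head = nn.Linear(128, num_settings)
        # Contextual encoder over the content image alone.
        self.content_encoder = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64),
        )
        # Decoder that blends the transformation code into the content
        # features and synthesizes the stylized image.
        self.decoder = nn.Sequential(
            nn.Conv2d(64 + 128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, content, reference):
        # z: global code describing the content -> reference color shift.
        z = self.transform_encoder(torch.cat([content, reference], dim=1))
        z = z.flatten(1)                      # (B, 128)
        preset = self.preset_head(z)          # (B, 69) predicted settings
        feat = self.content_encoder(content)  # (B, 64, H/4, W/4)
        z_map = z[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        out = self.decoder(torch.cat([feat, z_map], dim=1))
        return out, preset


# Usage: stylized output and predicted preset for a 256x256 pair.
if __name__ == "__main__":
    model = DeepPresetSketch()
    content = torch.rand(1, 3, 256, 256)
    reference = torch.rand(1, 3, 256, 256)
    stylized, preset = model(content, reference)
    print(stylized.shape, preset.shape)  # (1, 3, 256, 256), (1, 69)
```

In this sketch, supervision would come from the Lightroom-generated pairs: the stylized output can be compared against the reference retouched with the same preset, and the predicted 69-dimensional vector against the ground-truth preset settings, which is the auxiliary prediction task the abstract describes.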
