3.8 Proceedings Paper

Domain-Aware Universal Style Transfer

Publisher

IEEE
DOI: 10.1109/ICCV48922.2021.01434

Keywords

-

Funding

  1. National Research Foundation of Korea - Korea government (MSIT) [2019R1A2C2003760]
  2. Institute for Information & Communications Technology Planning & Evaluation (IITP) - Korea government [2020-0-01361]
  3. National Research Foundation of Korea [2019R1A2C2003760] - funding source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

The study introduces Domain-aware Style Transfer Networks (DSTN), which transfer not only the style of a given reference image but also its domain property. By designing a novel domainness indicator and introducing domain-aware skip connections, the model produces better qualitative results and outperforms previous methods.
Style transfer aims to reproduce content images with the styles of reference images. Existing universal style transfer methods successfully deliver arbitrary styles to original images in either an artistic or a photo-realistic way. However, the range of arbitrary styles defined by existing works is bounded to a particular domain due to their structural limitations: the degrees of content preservation and stylization are fixed according to a predefined target domain. As a result, both photo-realistic and artistic models have difficulty performing the desired style transfer for the other domain. To overcome this limitation, we propose a unified architecture, Domain-aware Style Transfer Networks (DSTN), that transfers not only the style but also the domain property (i.e., domainness) of a given reference image. To this end, we design a novel domainness indicator that captures the domainness value from the texture and structural features of reference images. Moreover, we introduce a unified framework with domain-aware skip connections to adaptively transfer the stroke and palette to the input content, guided by the domainness indicator. Our extensive experiments validate that our model produces better qualitative results and outperforms previous methods in terms of proxy metrics on both artistic and photo-realistic stylization. All code and pre-trained weights are available at Kibeom-Hong/Domain-Aware-StyleTransfer.
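
To make the two components named in the abstract concrete, here is a minimal PyTorch sketch of (a) a domainness indicator that maps reference-image features to a scalar in [0, 1], and (b) a domain-aware skip connection that blends structure-preserving encoder features with stylized decoder features according to that scalar. The module names, layer sizes, and the linear blending rule are illustrative assumptions, not the authors' actual architecture; consult the released code at Kibeom-Hong/Domain-Aware-StyleTransfer for the real implementation.

# Hypothetical sketch of the two DSTN components described in the abstract.
# All names, dimensions, and the blending rule are assumptions for illustration.

import torch
import torch.nn as nn

class DomainnessIndicator(nn.Module):
    """Predicts a scalar alpha in [0, 1] from reference-image features.

    alpha ~ 0 is read here as photo-realistic and alpha ~ 1 as artistic
    (an assumed convention, not taken from the paper).
    """

    def __init__(self, in_channels: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),       # global summary of texture/structure
            nn.Flatten(),
            nn.Linear(in_channels, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),
            nn.Sigmoid(),                  # squash to [0, 1]
        )

    def forward(self, ref_feats: torch.Tensor) -> torch.Tensor:
        return self.head(ref_feats)        # shape: (batch, 1)

def domain_aware_skip(enc_feat, dec_feat, alpha):
    """Blend encoder skip features with decoder features by domainness.

    Low alpha (photo-realistic reference) keeps more of the encoder's
    structural detail; high alpha (artistic reference) lets the stylized
    decoder features dominate. A purely assumed linear blending rule.
    """
    alpha = alpha.view(-1, 1, 1, 1)        # broadcast over C, H, W
    return (1.0 - alpha) * enc_feat + alpha * dec_feat

# Toy usage with stand-in 512-channel feature maps at 32x32 resolution.
if __name__ == "__main__":
    indicator = DomainnessIndicator(512)
    ref_feats = torch.randn(2, 512, 32, 32)
    alpha = indicator(ref_feats)
    enc_feat = torch.randn(2, 512, 32, 32)
    dec_feat = torch.randn(2, 512, 32, 32)
    fused = domain_aware_skip(enc_feat, dec_feat, alpha)
    print(alpha.squeeze(1), fused.shape)   # tensor of alphas, (2, 512, 32, 32)

The design idea this sketch illustrates is that a single continuous domainness value lets one decoder interpolate between photo-realistic behavior (encoder skip features preserving structure) and artistic behavior (stylized decoder features dominating), rather than committing to a single predefined target domain.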

