Article

One-Shot Adaptation of GAN in Just One CLIP

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2023.3283551

Keywords

Adaptation models; Generators; Generative adversarial networks; Data models; Training; Semantics; Task analysis; GAN; CLIP; adaptation; StyleGAN

Abstract

In this paper, a novel single-shot GAN adaptation method is proposed through unified CLIP-space manipulations to address the overfitting and underfitting issues that arise when fine-tuning with a single target image. Experimental results demonstrate that the proposed method outperforms baseline models in generating diverse outputs with the target texture and enables more effective attribute editing.
There have been many recent research efforts to fine-tune a pre-trained generator with a few target images so that it generates images of a novel domain. Unfortunately, these methods often suffer from overfitting or underfitting when fine-tuned with a single target image. To address this, we present a novel single-shot GAN adaptation method based on unified CLIP-space manipulations. Specifically, our model employs a two-step training strategy: a reference image search in the source generator using CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP-space consistency between the source and adapted generators. To further encourage the adapted model to produce samples that are spatially consistent with the source generator, we also propose a contrastive regularization on patchwise relationships in the CLIP space. Experimental results show that our model generates diverse outputs with the target texture and outperforms the baseline models both qualitatively and quantitatively. Furthermore, we show that our CLIP-space manipulation strategy allows more effective attribute editing.
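
The sketch below gives one plausible reading of the two-step strategy described in the abstract as a PyTorch/CLIP pipeline: a CLIP-guided latent search for a reference image in the source generator, then fine-tuning a copy of the generator with a directional CLIP-space consistency term plus a patchwise contrastive regularizer. This is a minimal sketch, not the authors' implementation: the generator interface (`G(w)` returning images in [-1, 1], `mean_latent()`, `sample_latent(n)`), the directional form of the consistency loss, and all hyperparameters are assumptions; `target` is assumed to be a (1, 3, H, W) tensor in [-1, 1].

```python
# Minimal sketch of the two-step adaptation described in the abstract.
# NOT the authors' implementation: the generator interface and the exact
# form of both losses are illustrative assumptions.
import copy

import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()
for p in clip_model.parameters():
    p.requires_grad_(False)


def clip_embed(img):
    """Map generator images (N, 3, H, W in [-1, 1]) to unit-norm CLIP features."""
    img = F.interpolate((img + 1) / 2, size=(224, 224), mode="bilinear",
                        align_corners=False)
    # CLIP's channel-wise normalization statistics are omitted for brevity.
    return F.normalize(clip_model.encode_image(img).float(), dim=-1)


def find_reference_latent(G_src, target, steps=500, lr=0.01):
    """Step 1: search the source generator for the latent whose image is
    closest to the single target image in CLIP space."""
    w = G_src.mean_latent().clone().requires_grad_(True)  # hypothetical API
    with torch.no_grad():
        target_feat = clip_embed(target)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        loss = 1.0 - (clip_embed(G_src(w)) * target_feat).sum(-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()


def patch_contrastive_loss(img_src, img_adapt, n_patches=8, size=64, tau=0.07):
    """Patchwise contrastive regularization in CLIP space: patches cropped at
    the same location of the source and adapted images are positive pairs;
    all other patches in the batch act as negatives (InfoNCE)."""
    _, _, H, W = img_src.shape
    crops_src, crops_adapt = [], []
    for _ in range(n_patches):
        y = int(torch.randint(0, H - size + 1, ()))
        x = int(torch.randint(0, W - size + 1, ()))
        crops_src.append(img_src[:, :, y:y + size, x:x + size])
        crops_adapt.append(img_adapt[:, :, y:y + size, x:x + size])
    f_src = clip_embed(torch.cat(crops_src))        # (n_patches * N, D)
    f_adapt = clip_embed(torch.cat(crops_adapt))
    logits = f_adapt @ f_src.t() / tau
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)


def adapt(G_src, target, steps=1000, lr=2e-3, lam=1.0):
    """Step 2: fine-tune a copy of the source generator so every sample moves
    along the fixed reference-to-target direction in CLIP space."""
    w_ref = find_reference_latent(G_src, target)
    G_adapt = copy.deepcopy(G_src)
    opt = torch.optim.Adam(G_adapt.parameters(), lr=lr)
    with torch.no_grad():
        dir_target = F.normalize(clip_embed(target) - clip_embed(G_src(w_ref)),
                                 dim=-1)
    for _ in range(steps):
        w = G_src.sample_latent(4)                  # hypothetical sampler
        with torch.no_grad():
            img_src = G_src(w)
            feat_src = clip_embed(img_src)
        img_adapt = G_adapt(w)
        dir_sample = F.normalize(clip_embed(img_adapt) - feat_src, dim=-1)
        consistency = (1.0 - (dir_sample * dir_target).sum(-1)).mean()
        loss = consistency + lam * patch_contrastive_loss(img_src, img_adapt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G_adapt
```

Under this reading, fixing a single reference-to-target CLIP direction is what lets one image re-texture the whole latent space rather than collapsing every sample onto the target, while the patchwise term ties local structure in the adapted samples back to the source generator.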
