Paper Title
One-Shot Adaptation of GAN in Just One CLIP
Paper Authors
Paper Abstract
There are many recent research efforts to fine-tune a pre-trained generator with a few target images to generate images of a novel domain. Unfortunately, these methods often suffer from overfitting or underfitting when fine-tuned with a single target image. To address this, we present a novel single-shot GAN adaptation method through unified CLIP space manipulation. Specifically, our model employs a two-step training strategy: reference image search in the source generator using CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP-space consistency between the source and adapted generators. To further encourage the adapted model to produce samples that are spatially consistent with the source generator, we also propose a contrastive regularization on patchwise relationships in the CLIP space. Experimental results show that our model generates diverse outputs with the target texture and outperforms the baseline models both qualitatively and quantitatively. Furthermore, we show that our CLIP space manipulation strategy allows more effective attribute editing.
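To make the two-step strategy concrete, below is a minimal PyTorch sketch using OpenAI's CLIP. It is not the authors' released implementation: the generator interface (`mean_latent`, `sample_latents`), the formulation of the consistency loss as a CLIP-direction alignment term, and all step counts and learning rates are illustrative assumptions.

```python
# Sketch of: (1) CLIP-guided latent optimization to find a reference
# sample, (2) fine-tuning with a CLIP-space consistency loss, and
# (3) a patchwise contrastive regularizer. Hypothetical generator API.
import copy

import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

# CLIP's input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)


def clip_embed(img):
    """Map a [-1, 1] generator output to an L2-normalized CLIP embedding."""
    img = (img + 1) / 2
    img = F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)
    img = (img - CLIP_MEAN.to(img.device)) / CLIP_STD.to(img.device)
    feat = clip_model.encode_image(img)
    return feat / feat.norm(dim=-1, keepdim=True)


# Step 1: reference image search via CLIP-guided latent optimization.
def find_reference_latent(G_source, target_img, steps=300, lr=0.01):
    """Find the latent whose source-domain sample is closest to the target in CLIP space."""
    w = G_source.mean_latent().clone().requires_grad_(True)  # assumed helper
    target_feat = clip_embed(target_img).detach()
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        loss = 1 - F.cosine_similarity(clip_embed(G_source(w)), target_feat).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()


# Step 2: fine-tune a copy of the generator with a CLIP-space consistency loss,
# written here as alignment of source->adapted directions with the
# reference->target direction (one common way to impose such consistency).
def adapt_generator(G_source, target_img, w_ref, steps=600, lr=2e-3):
    G_adapt = copy.deepcopy(G_source)
    opt = torch.optim.Adam(G_adapt.parameters(), lr=lr)
    # Fixed CLIP-space direction from the reference sample to the target image.
    delta_ref = (clip_embed(target_img) - clip_embed(G_source(w_ref))).detach()
    delta_ref = delta_ref / delta_ref.norm(dim=-1, keepdim=True)
    for _ in range(steps):
        w = G_source.sample_latents(4)  # assumed helper: batch of random latents
        delta = clip_embed(G_adapt(w)) - clip_embed(G_source(w)).detach()
        delta = delta / delta.norm(dim=-1, keepdim=True)
        # Keep each source->adapted CLIP direction aligned with reference->target.
        loss = 1 - F.cosine_similarity(delta, delta_ref).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G_adapt


# Patchwise contrastive regularization: pull CLIP embeddings of matching
# crops together and push non-matching crops apart (InfoNCE).
def patch_contrastive_loss(img_src, img_adapt, n_patches=8, size=64, tau=0.07):
    _, _, h, w = img_src.shape
    crops_src, crops_adapt = [], []
    for _ in range(n_patches):
        y = torch.randint(0, h - size + 1, (1,)).item()
        x = torch.randint(0, w - size + 1, (1,)).item()
        crops_src.append(img_src[:, :, y:y + size, x:x + size])
        crops_adapt.append(img_adapt[:, :, y:y + size, x:x + size])
    f_src = clip_embed(torch.cat(crops_src)).detach()
    f_adapt = clip_embed(torch.cat(crops_adapt))
    logits = f_adapt @ f_src.t() / tau  # patch-to-patch similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

In a full fine-tuning loop, a term like `patch_contrastive_loss(G_source(w), G_adapt(w))` would be added to the consistency objective with a weighting hyperparameter, encouraging the adapted samples to preserve the spatial layout of the corresponding source samples.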