Paper Title
Unpaired Image-to-Image Translation using Adversarial Consistency Loss
Paper Authors
Paper Abstract
Unpaired image-to-image translation is a class of vision problems whose goal is to find the mapping between different image domains using unpaired training data. Cycle-consistency loss is a widely used constraint for such problems. However, due to its strict pixel-level constraint, it cannot perform geometric changes, remove large objects, or ignore irrelevant texture. In this paper, we propose a novel adversarial-consistency loss for image-to-image translation. This loss does not require the translated image to be translated back to a specific source image, but encourages the translated images to retain important features of the source images, overcoming the drawbacks of cycle-consistency loss noted above. Our method achieves state-of-the-art results on three challenging tasks: glasses removal, male-to-female translation, and selfie-to-anime translation.
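For context, the "strict pixel-level constraint" the abstract refers to is the standard cycle-consistency loss popularized by CycleGAN; in the usual notation (not this paper's own formulation), with generators G: X→Y and F: Y→X, it can be written as:

```latex
% Cycle-consistency loss (CycleGAN-style): reconstructions must match
% the originals pixel-by-pixel under an L1 penalty.
\mathcal{L}_{\mathrm{cyc}}(G, F) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\, \lVert F(G(x)) - x \rVert_{1} \,\right]
+ \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\, \lVert G(F(y)) - y \rVert_{1} \,\right]
```

Because each term compares the reconstruction to the source pixel by pixel, the learned mapping is pushed to preserve geometry, object extent, and texture layout, which is exactly the limitation the proposed adversarial-consistency loss is designed to relax.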