Paper Title
BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer
Paper Authors
Paper Abstract
In this work, we propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC). In a unique departure from prior methods that leverage global attention, simply concatenate features, or implicitly manipulate features in latent space, we propose a component-specific correspondence to directly transfer the makeup style of a reference image to the corresponding components (e.g., skin, lips, eyes) of a source image, enabling elaborate and accurate local makeup transfer. As an auxiliary, the long-range visual dependencies of Transformer are introduced for effective global makeup transfer. Instead of the commonly used cycle structure, which is complex and unstable, we employ a content consistency loss coupled with a content encoder to implement efficient single-path makeup transfer. The key insights of this study are modeling component-specific correspondence for local makeup transfer, capturing long-range dependencies for global makeup transfer, and enabling efficient makeup transfer via a single-path structure. We also contribute BeautyFace, a makeup transfer dataset, to supplement existing datasets. This dataset contains 3,000 faces, covering more diverse makeup styles, face poses, and races. Each face has an annotated parsing map. Extensive experiments demonstrate the effectiveness of our method against state-of-the-art methods. Moreover, our method is appealing as it has only 1M parameters, while outperforming state-of-the-art methods (BeautyGAN: 8.43M, PSGAN: 12.62M, SCGAN: 15.30M, CPM: 9.24M, SSAT: 10.48M).
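To make the single-path idea concrete, the following is a minimal, hypothetical PyTorch sketch of a content consistency loss coupled with a content encoder. The encoder architecture, channel widths, module names (`ContentEncoder`, `content_consistency_loss`), and the choice of an L1 distance in feature space are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEncoder(nn.Module):
    """Hypothetical lightweight encoder that maps a face image to a
    makeup-invariant content feature map."""
    def __init__(self, in_ch=3, base_ch=32, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_ch * 2, feat_ch, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def content_consistency_loss(content_encoder, source, transferred):
    """Penalize content drift between the source face and the
    makeup-transferred result in the encoder's feature space,
    so no reverse (cycle) path is required."""
    feat_src = content_encoder(source)
    feat_out = content_encoder(transferred)
    return F.l1_loss(feat_out, feat_src)


if __name__ == "__main__":
    enc = ContentEncoder()
    source = torch.rand(2, 3, 256, 256)       # batch of source faces
    transferred = torch.rand(2, 3, 256, 256)  # placeholder for generator output
    print(content_consistency_loss(enc, source, transferred).item())
```

Under this sketch, consistency is enforced in a single forward pass: comparing encoder features of the output against the source stands in for the second generator and reconstruction pass that a cycle-based design would require.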