Paper Title
Learning to regulate 3D head shape by removing occluding hair from in-the-wild images
Paper Authors
Paper Abstract
Recent 3D face reconstruction methods reconstruct the entire head, whereas earlier approaches model only the face. Although these methods accurately reconstruct facial features, they do not explicitly regulate the upper part of the head. Extracting information about this part of the head is challenging due to varying degrees of occlusion by hair. We present a novel approach for modeling the upper head by removing occluding hair and reconstructing the skin, revealing information about the head shape. We introduce three objectives: 1) a dice consistency loss that enforces similarity between the overall head shapes of the source and rendered images, 2) a scale consistency loss to ensure that the head shape is accurately reproduced even if the upper part of the head is not visible, and 3) a 71-landmark detector, trained using a moving-average loss function, that detects additional landmarks on the head. These objectives are used to train an encoder in an unsupervised manner to regress FLAME parameters from in-the-wild input images. Our unsupervised 3DMM model achieves state-of-the-art results on popular benchmarks and can be used to infer the head shape, facial features, and textures for direct use in animation or avatar creation.
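The dice consistency loss mentioned in the abstract compares the overall head silhouette of the source image with that of the rendered reconstruction. A minimal NumPy sketch of such a loss is shown below; the function name, the epsilon smoothing term, and the toy circular masks are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def dice_consistency_loss(source_mask, rendered_mask, eps=1e-6):
    """Dice consistency loss between two binary silhouette masks.

    Computes 1 - Dice coefficient, so identical masks yield a loss of 0
    and disjoint masks yield a loss near 1. `eps` avoids division by zero
    when both masks are empty.
    """
    source = source_mask.astype(np.float64).ravel()
    rendered = rendered_mask.astype(np.float64).ravel()
    intersection = (source * rendered).sum()
    dice = (2.0 * intersection + eps) / (source.sum() + rendered.sum() + eps)
    return 1.0 - dice

# Toy example: two slightly offset circular "head" silhouettes.
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
src_mask = ((yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2
ren_mask = ((yy - 34) ** 2 + (xx - 32) ** 2) < 20 ** 2
loss = dice_consistency_loss(src_mask, ren_mask)
```

In practice such a loss would be applied to differentiable soft masks (e.g. from a neural renderer) so gradients can flow back to the FLAME shape parameters; the hard binary masks here are only for illustration.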