Title
Neural Hair Rendering
Authors
Abstract
In this paper, we propose a generic neural-based hair rendering pipeline that can synthesize photo-realistic images from virtual 3D hair models. Unlike existing supervised translation methods, which require model-level similarity to preserve a consistent structure representation for both real images and fake renderings, our method adopts an unsupervised solution that works on arbitrary hair models. The key component of our method is a shared latent space that encodes appearance-invariant structure information of both domains and generates realistic renderings conditioned on extra appearance inputs. This is achieved by domain-specific pre-disentangled structure representations, partially shared domain encoder layers, and a structure discriminator. We also propose a simple yet effective temporal conditioning method to enforce consistency in video sequence generation. We demonstrate the superiority of our method by testing it on a large number of portraits and comparing it with alternative baselines and state-of-the-art unsupervised image translation methods.
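To make the "partially shared domain encoder" idea concrete, the following is a minimal NumPy sketch (not the paper's implementation; all layer names, dimensions, and weights are illustrative assumptions). Each domain gets its own first layer, while the later layers share weights, so structure maps from both the real-photo domain and the CG-render domain land in one common latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_relu(x, w, b):
    # One affine layer followed by a ReLU nonlinearity.
    return np.maximum(w @ x + b, 0.0)

# Hypothetical dimensions, chosen only for illustration.
D_IN, D_HID, D_LAT = 16, 32, 8

# Domain-specific stems: separate weights for real photos vs. fake renderings.
w_real, b_real = rng.standard_normal((D_HID, D_IN)), np.zeros(D_HID)
w_fake, b_fake = rng.standard_normal((D_HID, D_IN)), np.zeros(D_HID)

# Partially shared tail: the SAME weights serve both domains, so both
# structure representations are encoded into one shared latent space.
w_shared, b_shared = rng.standard_normal((D_LAT, D_HID)), np.zeros(D_LAT)

def encode(x, domain):
    # Route through the domain-specific stem, then the shared layers.
    if domain == "real":
        h = linear_relu(x, w_real, b_real)
    else:
        h = linear_relu(x, w_fake, b_fake)
    return linear_relu(h, w_shared, b_shared)

z_real = encode(rng.standard_normal(D_IN), "real")
z_fake = encode(rng.standard_normal(D_IN), "fake")

# Both latents live in the same D_LAT-dimensional space; in the paper's
# setup a structure discriminator would then be trained adversarially to
# make the two domains' latent codes indistinguishable.
print(z_real.shape, z_fake.shape)
```

A generator conditioned on an extra appearance input would then decode such a latent back to an image; the sketch stops at the shared encoding, which is the part the abstract emphasizes.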