Paper Title
Building 3D Morphable Models from a Single Scan
Paper Authors
Paper Abstract
We propose a method for constructing generative models of 3D objects from a single 3D mesh. Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes. We define the shape deformations in physical (3D) space and the albedo deformations as a combination of physical-space and color-space deformations. Whereas previous approaches have typically built 3D morphable models from multiple high-quality 3D scans through principal component analysis, we build 3D morphable models from a single scan or template. As we demonstrate in the face domain, these models can be used to infer 3D reconstructions from 2D data (inverse graphics) or 3D data (registration). Specifically, we show that our approach can be used to perform face recognition using only a single 3D scan (one scan total, not one per person), and further demonstrate how multiple scans can be incorporated to improve performance without requiring dense correspondence. Our approach enables the synthesis of 3D morphable models for 3D object categories where dense correspondence between multiple scans is unavailable. We demonstrate this by constructing additional 3D morphable models for fish and birds and use them to perform simple inverse rendering tasks. We share the code used to generate these models and to perform our inverse rendering and registration experiments.
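The abstract describes modeling per-vertex shape deformations of a template mesh as a Gaussian process defined over physical (3D) space. A minimal sketch of that idea follows: sampling a smooth deformation field from a zero-mean GP with a squared-exponential kernel and applying it to a toy template. The kernel choice, hyperparameters, and the cube stand-in for a real scan are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0, ell=0.5):
    # Squared-exponential (RBF) kernel between two sets of 3D points.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-d2 / (2 * ell**2))

def sample_gp_deformation(vertices, sigma=0.05, ell=0.5, seed=0):
    """Sample a smooth per-vertex 3D deformation field from a zero-mean GP."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(vertices, vertices, sigma, ell)
    # Jitter on the diagonal keeps the Cholesky factorization stable.
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(vertices)))
    # One independent GP sample per spatial coordinate (x, y, z).
    return L @ rng.standard_normal((len(vertices), 3))

# Toy "template": the 8 vertices of a unit cube standing in for a scanned mesh.
template = np.array([[x, y, z]
                     for x in (0.0, 1.0)
                     for y in (0.0, 1.0)
                     for z in (0.0, 1.0)])
deformed = template + sample_gp_deformation(template)
```

Because nearby vertices are highly correlated under the kernel, sampled deformations move neighboring vertices coherently, which is what makes GP priors a natural fit for plausible shape variation from a single template.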