Title
PointAvatar: Deformable Point-based Head Avatars from Videos
Authors
Abstract
The ability to create realistic, animatable and relightable head avatars from casual video sequences would open up wide-ranging applications in communication and entertainment. Current methods either build on explicit 3D morphable meshes (3DMM) or exploit neural implicit representations. The former are limited by fixed topology, while the latter are non-trivial to deform and inefficient to render. Furthermore, existing approaches entangle lighting in the color estimation, so they are limited in re-rendering the avatar in new environments. In contrast, we propose PointAvatar, a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading. We demonstrate that PointAvatar bridges the gap between existing mesh-based and implicit representations, combining high-quality geometry and appearance with topological flexibility, ease of deformation and rendering efficiency. We show that our method is able to generate animatable 3D avatars using monocular videos from multiple sources including hand-held smartphones, laptop webcams and internet videos, achieving state-of-the-art quality in challenging cases where previous methods fail, e.g., thin hair strands, while being significantly more efficient in training than competing methods.
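The color disentanglement described above can be illustrated with a minimal sketch: each point's rendered color is its intrinsic albedo modulated by a shading term computed from the point normal. The abstract only states that shading is normal-dependent; the Lambertian model, the `light_dir` parameter, and the function names below are assumptions for illustration, not the paper's actual shading network.

```python
import numpy as np

def shading(normals, light_dir):
    # Hypothetical normal-dependent shading term (simple Lambertian
    # stand-in for the paper's learned shading); one scalar per point.
    return np.clip(normals @ light_dir, 0.0, 1.0)[:, None]

rng = np.random.default_rng(0)

# N points, each with a unit normal and an intrinsic RGB albedo in [0, 1]
normals = rng.normal(size=(5, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.uniform(size=(5, 3))

light_dir = np.array([0.0, 0.0, 1.0])

# Disentangled per-point color: albedo * normal-dependent shading
color = albedo * shading(normals, light_dir)
```

Because albedo and shading are separate factors, relighting amounts to recomputing only the shading term (here, swapping `light_dir`) while the intrinsic albedo stays fixed.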