Paper Title

Depth-Aware Generative Adversarial Network for Talking Head Video Generation

Paper Authors

Fa-Ting Hong, Longhao Zhang, Li Shen, Dan Xu

Abstract

Talking head video generation aims to produce a synthetic human face video that contains the identity and pose information respectively from a given source image and a driving video. Existing works for this task heavily rely on 2D representations (e.g. appearance and motion) learned from the input images. However, dense 3D facial geometry (e.g. pixel-wise depth) is extremely important for this task, as it is particularly beneficial for generating accurate 3D face structures and distinguishing noisy information from the possibly cluttered background. Nevertheless, dense 3D geometry annotations are prohibitively costly for videos and are typically not available for this video generation task. In this paper, we first introduce a self-supervised geometry learning method to automatically recover the dense 3D geometry (i.e. depth) from face videos without requiring any expensive 3D annotation data. Based on the learned dense depth maps, we further propose to leverage them to estimate sparse facial keypoints that capture the critical movement of the human head. In a denser way, the depth is also utilized to learn 3D-aware cross-modal (i.e. appearance and depth) attention to guide the generation of motion fields for warping source image representations. All these contributions compose a novel depth-aware generative adversarial network (DaGAN) for talking head generation. Extensive experiments demonstrate that our proposed method can generate highly realistic faces and achieve significant results on unseen human faces.
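To make the cross-modal (appearance and depth) attention idea described in the abstract more concrete, below is a minimal PyTorch sketch of one plausible formulation: depth features form the queries and appearance features form the keys and values of a standard dot-product attention, producing depth-guided features that could feed a downstream motion-field predictor. The module name, channel sizes, and feature-map shapes are illustrative assumptions, not the authors' actual DaGAN implementation.

```python
import torch
import torch.nn as nn


class DepthAwareCrossModalAttention(nn.Module):
    """Sketch of cross-modal attention: depth features attend over appearance features.

    This is an assumption-based illustration of the general idea, not DaGAN's code.
    """

    def __init__(self, channels: int = 256):
        super().__init__()
        # 1x1 convolutions project depth/appearance features to query/key/value.
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)  # queries from depth features
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)  # keys from appearance features
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)  # values from appearance features
        self.scale = channels ** -0.5

    def forward(self, depth_feat: torch.Tensor, app_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = app_feat.shape
        q = self.to_q(depth_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.to_k(app_feat).flatten(2)                     # (B, C, HW)
        v = self.to_v(app_feat).flatten(2).transpose(1, 2)     # (B, HW, C)
        attn = torch.softmax(torch.bmm(q, k) * self.scale, dim=-1)      # (B, HW, HW)
        out = torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)    # (B, C, H, W)
        # Residual connection keeps the original appearance information.
        return out + app_feat


if __name__ == "__main__":
    # Illustrative feature maps; in practice these would come from the depth
    # network and the source-image appearance encoder, respectively.
    attn = DepthAwareCrossModalAttention(channels=256)
    depth_feat = torch.randn(1, 256, 32, 32)
    app_feat = torch.randn(1, 256, 32, 32)
    fused = attn(depth_feat, app_feat)
    print(fused.shape)  # torch.Size([1, 256, 32, 32])
```

The design choice sketched here (depth as query, appearance as key/value) is only one way to realize "depth-guided" attention; the paper's actual fusion of the two modalities may differ.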
