Paper Title

Novel-View Human Action Synthesis

Authors

Mohamed Ilyes Lakhal, Davide Boscaini, Fabio Poiesi, Oswald Lanz, Andrea Cavallaro

Abstract

Novel-View Human Action Synthesis aims to synthesize the movement of a body from a virtual viewpoint, given a video from a real viewpoint. We present a novel 3D reasoning scheme to synthesize the target viewpoint. We first estimate the 3D mesh of the target body and transfer the rough textures from the 2D images to the mesh. As this transfer may generate sparse textures on the mesh due to frame resolution or occlusions, we produce a semi-dense textured mesh by propagating the transferred textures both locally, within geodesic neighborhoods, and globally, across symmetric semantic parts. Next, we introduce a context-based generator to learn how to correct and complete the residual appearance information. This allows the network to independently focus on learning the foreground and background synthesis tasks. We validate the proposed solution on the public NTU RGB+D dataset. The code and resources are available at https://bit.ly/36u3h4K.
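
To make the two texture steps concrete, the sketch below illustrates them in Python: rough per-vertex colors are sampled by projecting visible mesh vertices into the source frame, then densified by a local pass over nearby vertices and a global pass across symmetric parts. This is a minimal illustration under simplifying assumptions, not the paper's implementation: edge-graph hops stand in for true geodesic neighborhoods, and sym_map is assumed to be a precomputed left/right vertex correspondence. All names here (transfer_rough_texture, propagate_texture, max_hops) are hypothetical.

    import numpy as np
    from collections import deque

    def transfer_rough_texture(verts, K, RT, image, visible):
        # verts: (N, 3) mesh vertices; K: (3, 3) intrinsics; RT: (3, 4) extrinsics
        # image: (H, W, 3) uint8 source frame; visible: (N,) bool mask (e.g. z-buffer)
        H, W = image.shape[:2]
        verts_h = np.hstack([verts, np.ones((len(verts), 1))])  # homogeneous coords
        cam = verts_h @ RT.T                                    # world -> camera
        uv = cam @ K.T
        uv = uv[:, :2] / uv[:, 2:3]                             # perspective divide
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
        colors = np.full((len(verts), 3), np.nan)               # NaN = untextured
        colors[visible] = image[v[visible], u[visible]] / 255.0
        return colors

    def propagate_texture(colors, edges, sym_map, max_hops=3):
        # colors: (N, 3) with NaN rows for untextured vertices
        # edges: iterable of (i, j) mesh edges; sym_map: (N,) mirrored vertex indices
        n = len(colors)
        adj = [[] for _ in range(n)]
        for i, j in edges:
            adj[i].append(j)
            adj[j].append(i)
        missing = np.isnan(colors).any(axis=1)
        # Local pass: average textured colors within max_hops edge steps,
        # a cheap stand-in for the geodesic neighborhoods of the paper.
        for v0 in np.where(missing)[0]:
            seen, queue, found = {v0}, deque([(v0, 0)]), []
            while queue:
                v, d = queue.popleft()
                if d >= max_hops:
                    continue
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        if not missing[w]:
                            found.append(colors[w])
                        queue.append((w, d + 1))
            if found:
                colors[v0] = np.mean(found, axis=0)
        # Global pass: copy colors across symmetric semantic parts
        # (e.g. left arm <-> right arm) via the precomputed mirror map.
        for v0 in np.where(np.isnan(colors).any(axis=1))[0]:
            m = sym_map[v0]
            if not np.isnan(colors[m]).any():
                colors[v0] = colors[m]
        return colors

A full implementation would replace the hop-based neighborhood with true geodesic distances on the estimated body mesh (e.g. computed with the heat method) and derive sym_map from the mesh's semantic part labels; any remaining untextured regions are what the context-based generator is then trained to correct and complete.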
