Paper Title
DeepHandMesh: A Weakly-supervised Deep Encoder-Decoder Framework for High-fidelity Hand Mesh Modeling
Paper Authors
Abstract
Human hands play a central role in interacting with other people and objects. For realistic replication of such hand motions, high-fidelity hand meshes have to be reconstructed. In this study, we propose DeepHandMesh, a weakly-supervised deep encoder-decoder framework for high-fidelity hand mesh modeling. We design our system to be trained in an end-to-end and weakly-supervised manner; therefore, it does not require groundtruth meshes. Instead, it relies on weaker supervision, such as 3D joint coordinates and multi-view depth maps, which are easier to obtain than groundtruth meshes and do not depend on the mesh topology. Although the proposed DeepHandMesh is trained in a weakly-supervised way, it provides significantly more realistic hand meshes than previous fully-supervised hand models. Our newly introduced penetration-avoidance loss further improves the results by replicating physical interaction between hand parts. Finally, we demonstrate that our system can also be applied successfully to 3D hand mesh estimation from general images. Our hand model, dataset, and code are publicly available at https://mks0601.github.io/DeepHandMesh/.
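The penetration-avoidance idea mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual loss: hand parts are approximated here as spheres, and any overlap between non-adjacent spheres is penalized quadratically. All function names, shapes, and parameters below are assumptions for illustration.

```python
# Hypothetical sketch of a penetration-avoidance penalty (NOT the exact
# DeepHandMesh loss): hand parts are approximated as spheres, and overlap
# between non-adjacent spheres is penalized so parts cannot interpenetrate.
import numpy as np

def penetration_loss(centers, radii, skip_pairs=()):
    """centers: (N, 3) array of sphere centers.
    radii: (N,) array of sphere radii.
    skip_pairs: (i, j) pairs of adjacent parts that are allowed to touch."""
    skip = {tuple(sorted(p)) for p in skip_pairs}
    loss = 0.0
    n = len(radii)
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in skip:
                continue
            dist = np.linalg.norm(centers[i] - centers[j])
            # Positive only when the two spheres interpenetrate.
            overlap = max(0.0, radii[i] + radii[j] - dist)
            loss += overlap ** 2
    return loss

# Two unit spheres whose centers are 1 apart overlap by 1, giving loss 1.0;
# marking them as an adjacent (skipped) pair removes the penalty.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([1.0, 1.0])
print(penetration_loss(centers, radii))                      # 1.0
print(penetration_loss(centers, radii, skip_pairs=[(0, 1)])) # 0.0
```

In a full pipeline such a term would be added to the joint-coordinate and depth-map losses, pushing the decoder toward meshes whose parts touch but do not pass through each other.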