Paper Title

FLEX: Full-Body Grasping Without Full-Body Grasps

Paper Authors

Purva Tendulkar, Dídac Surís, Carl Vondrick

Paper Abstract

Synthesizing 3D human avatars interacting realistically with a scene is an important problem with applications in AR/VR, video games and robotics. Towards this goal, we address the task of generating a virtual human -- hands and full body -- grasping everyday objects. Existing methods approach this problem by collecting a 3D dataset of humans interacting with objects and training on this data. However, 1) these methods do not generalize to different object positions and orientations, or to the presence of furniture in the scene, and 2) the diversity of their generated full-body poses is very limited. In this work, we address all the above challenges to generate realistic, diverse full-body grasps in everyday scenes without requiring any 3D full-body grasping data. Our key insight is to leverage the existence of both full-body pose and hand grasping priors, composing them using 3D geometrical constraints to obtain full-body grasps. We empirically validate that these constraints can generate a variety of feasible human grasps that are superior to baselines both quantitatively and qualitatively. See our webpage for more details: https://flex.cs.columbia.edu/.
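The abstract's key insight, composing a frozen full-body pose prior with a frozen hand-grasp prior through 3D geometric constraints, can be pictured as a latent-space optimization at inference time. The following is a minimal illustrative sketch, not the authors' code: `BodyPosePrior`, `HandGraspPrior`, the hand-region vertex indexing, and all loss weights are hypothetical placeholders standing in for the actual FLEX components.

```python
# Hedged sketch: compose two pretrained, frozen priors via 3D geometric
# losses so that no paired 3D full-body grasping data is required.
# All module names, indices, and weights below are illustrative assumptions.
import torch

class BodyPosePrior(torch.nn.Module):
    """Placeholder prior: decodes a latent z_body to body surface points."""
    def __init__(self, latent_dim=32, n_verts=6890):
        super().__init__()
        self.decode = torch.nn.Linear(latent_dim, n_verts * 3)
    def forward(self, z):
        return self.decode(z).view(-1, 3)  # (n_verts, 3)

class HandGraspPrior(torch.nn.Module):
    """Placeholder prior: decodes a latent z_hand to a grasping hand pose."""
    def __init__(self, latent_dim=16, n_hand_verts=778):
        super().__init__()
        self.decode = torch.nn.Linear(latent_dim, n_hand_verts * 3)
    def forward(self, z):
        return self.decode(z).view(-1, 3)  # (n_hand_verts, 3)

def grasp_energy(body_verts, hand_verts, obj_center, obstacle_points):
    """3D geometric constraints linking the two priors."""
    # 1) The body's hand region should coincide with the grasping hand
    #    (hypothetical indexing of the body's hand vertices).
    attach = ((body_verts[:778] - hand_verts) ** 2).sum()
    # 2) The hand should reach the object to be grasped.
    reach = ((hand_verts.mean(0) - obj_center) ** 2).sum()
    # 3) The body should not penetrate scene geometry such as furniture:
    #    penalize body points closer than 5 cm to any obstacle point.
    dists = torch.cdist(body_verts, obstacle_points)
    collision = torch.relu(0.05 - dists).sum()
    return attach + reach + 10.0 * collision

# Optimize only the two latent codes; the priors stay frozen.
body_prior, hand_prior = BodyPosePrior(), HandGraspPrior()
z_body = torch.zeros(1, 32, requires_grad=True)
z_hand = torch.zeros(1, 16, requires_grad=True)
opt = torch.optim.Adam([z_body, z_hand], lr=1e-2)
obj_center = torch.tensor([0.5, 0.9, 0.3])       # stand-in object position
obstacle_points = torch.rand(1000, 3)            # stand-in scene point cloud
for step in range(200):
    opt.zero_grad()
    loss = grasp_energy(body_prior(z_body), hand_prior(z_hand),
                        obj_center, obstacle_points)
    loss.backward()
    opt.step()
```

Because the search happens in the latent spaces of pretrained priors rather than over a learned full-body grasping model, different object positions, orientations, and obstacle layouts only change the loss terms, which is consistent with the generalization claim in the abstract.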
