Paper Title
Learning Cooperative Dynamic Manipulation Skills from Human Demonstration Videos
Paper Authors
Paper Abstract
This article proposes a method for learning dynamic collaborative tasks from offline videos and replicating them on robots. The objective is to extend the concept of learning from demonstration (LfD) to dynamic scenarios, benefiting from widely available or easily producible offline videos. To achieve this goal, we decode important dynamic information from a three-dimensional human skeleton model, such as the Configuration-Dependent Stiffness (CDS), which reveals the contribution of the arm pose to the arm endpoint stiffness. Next, by encoding the CDS with a Gaussian Mixture Model (GMM) and decoding it via Gaussian Mixture Regression (GMR), the robot's Cartesian impedance profile is estimated and replicated. We demonstrate the proposed method in a collaborative sawing task with a leader-follower structure, considering environmental constraints and dynamic uncertainties. The experimental setup includes two Panda robots, which replicate the leader-follower roles and the impedance profiles extracted from a two-person sawing video.
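To make the GMM-encoding and GMR-decoding step concrete, the sketch below illustrates the general technique on synthetic data. It is not the authors' implementation: the stiffness profile is a hypothetical placeholder standing in for CDS values extracted from video frames, the library choices (NumPy, SciPy, scikit-learn) are assumptions, and the GMR conditioning is written out by hand since scikit-learn only provides the GMM fit.

```python
# A minimal sketch (not the authors' implementation) of encoding a 1-D
# stiffness profile over time with a GMM and reproducing it via GMR,
# assuming NumPy, SciPy, and scikit-learn are available.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Synthetic placeholder data: time stamps and a hypothetical endpoint
# stiffness value per video frame (stands in for the decoded CDS).
t = np.linspace(0.0, 1.0, 200)
stiffness = 300.0 + 150.0 * np.sin(2 * np.pi * t) + 5.0 * np.random.randn(t.size)

# Encode the joint (time, stiffness) data with a GMM.
data = np.column_stack([t, stiffness])
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(data)

def gmr(gmm, t_query):
    """Condition the joint GMM on time to regress the stiffness (GMR)."""
    y_hat = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # Responsibility of each Gaussian component for this time stamp.
        h = np.array([
            w * multivariate_normal.pdf(tq, mean=m[0], cov=c[0, 0])
            for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)
        ])
        h /= h.sum()
        # Per-component conditional mean of stiffness given the time input.
        mu_cond = np.array([
            m[1] + c[1, 0] / c[0, 0] * (tq - m[0])
            for m, c in zip(gmm.means_, gmm.covariances_)
        ])
        y_hat[i] = h @ mu_cond
    return y_hat

# Reproduced stiffness profile, which could serve as the reference for a
# Cartesian impedance controller on the robot.
stiffness_ref = gmr(gmm, t)
```

In a full pipeline of this kind, the regressed profile would be streamed as the stiffness reference of the robot's Cartesian impedance controller rather than precomputed offline as done here for brevity.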