Paper Title


In-Hand Object Pose Tracking via Contact Feedback and GPU-Accelerated Robotic Simulation

Authors

Liang, Jacky, Handa, Ankur, Van Wyk, Karl, Makoviychuk, Viktor, Kroemer, Oliver, Fox, Dieter

Abstract

Tracking the pose of an object while it is being held and manipulated by a robot hand is difficult for vision-based methods due to significant occlusions. Prior works have explored using contact feedback and particle filters to localize in-hand objects. However, they have mostly focused on the static grasp setting and not when the object is in motion, as doing so requires modeling of complex contact dynamics. In this work, we propose using GPU-accelerated parallel robot simulations and derivative-free, sample-based optimizers to track in-hand object poses with contact feedback during manipulation. We use physics simulation as the forward model for robot-object interactions, and the algorithm jointly optimizes for the state and the parameters of the simulations, so they better match with those of the real world. Our method runs in real-time (30Hz) on a single GPU, and it achieves an average point cloud distance error of 6mm in simulation experiments and 13mm in the real-world ones. View experiment videos at https://sites.google.com/view/in-hand-object-pose-tracking/
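The abstract describes running many physics simulations in parallel as a forward model and using a derivative-free, sample-based optimizer to jointly fit the simulated state and parameters to real observations. A minimal sketch of that idea, using a cross-entropy-method-style update and a toy identity forward model standing in for the physics simulator (all function names and dimensions here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def track_step(observation, simulate, mean, std,
               n_samples=256, n_elite=16, n_iters=3):
    """One tracking update: sample candidate state/parameter vectors,
    score each against the real observation, and refit the sampling
    distribution to the best candidates (a derivative-free,
    sample-based optimizer in the cross-entropy-method style)."""
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        # Each sample plays the role of one GPU-parallel simulation
        # instance in the paper's setup.
        samples = rng.normal(mean, std, size=(n_samples, mean.shape[0]))
        # Score candidates by mismatch between the simulated and the
        # observed measurement (the paper uses point-cloud distance).
        costs = np.array([np.sum((simulate(s) - observation) ** 2)
                          for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean, std

# Toy forward model: identity map from a 3-D pose guess to a measurement.
true_pose = np.array([0.1, -0.2, 0.3])
simulate = lambda pose: pose           # stand-in for the physics simulation
observation = simulate(true_pose)      # what the real sensors would report

mean, std = track_step(observation, simulate, np.zeros(3), np.ones(3))
```

In the actual system, `simulate` is a full GPU-accelerated robot-object simulation and the optimized vector includes simulation parameters (e.g., contact properties) as well as the object state, which is what lets the simulator stay matched to the real world during manipulation.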
