Paper Title

GP-net: Flexible Viewpoint Grasp Proposal

Paper Authors

Anna Konrad, John McDonald, Rudi Villing

Paper Abstract

We present the Grasp Proposal Network (GP-net), a Convolutional Neural Network model which can generate 6-DoF grasps from flexible viewpoints, e.g. as experienced by mobile manipulators. To train GP-net, we synthetically generate a dataset containing depth-images and ground-truth grasp information. In real-world experiments, we use the EGAD evaluation benchmark to evaluate GP-net against two commonly used algorithms, the Volumetric Grasping Network (VGN) and the Grasp Pose Detection package (GPD), on a PAL TIAGo mobile manipulator. In contrast to the state-of-the-art methods in robotic grasping, GP-net can be used for grasping objects from flexible, unknown viewpoints without the need to define the workspace and achieves a grasp success of 54.4% compared to 51.6% for VGN and 44.2% for GPD. We provide a ROS package along with our code and pre-trained models at https://aucoroboticsmu.github.io/GP-net/.
