Paper Title
Sim-to-real for high-resolution optical tactile sensing: From images to 3D contact force distributions
Paper Authors
Paper Abstract
The images captured by vision-based tactile sensors carry information about high-resolution tactile fields, such as the distribution of the contact forces applied to their soft sensing surface. However, extracting the information encoded in these images is challenging and is often addressed with learning-based approaches, which generally require large amounts of training data. This article proposes a strategy to generate tactile images in simulation for a vision-based tactile sensor built around an internal camera that tracks the motion of spherical particles within a soft material. The deformation of the material is simulated in a finite element environment under a diverse set of contact conditions, and the spherical particles are projected onto a simulated image. Features extracted from the images are mapped to the 3D contact force distribution, whose ground truth is also obtained via finite element simulations, by an artificial neural network that is therefore trained entirely on synthetic data, avoiding the need for real-world data collection. The resulting model exhibits high accuracy when evaluated on real-world tactile images, transfers across multiple tactile sensors without further training, and is suitable for efficient real-time inference.
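The projection step described in the abstract (mapping the simulated 3D positions of the spherical particles onto a synthetic camera image) can be sketched with a standard pinhole camera model. The function name and all intrinsic parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def project_particles(points_3d, fx=200.0, fy=200.0, cx=120.0, cy=120.0):
    """Project N x 3 particle positions (camera frame, z > 0) to pixel coords
    using a pinhole model with hypothetical intrinsics fx, fy, cx, cy."""
    points_3d = np.asarray(points_3d, dtype=float)
    z = points_3d[:, 2]
    u = fx * points_3d[:, 0] / z + cx  # horizontal pixel coordinate
    v = fy * points_3d[:, 1] / z + cy  # vertical pixel coordinate
    return np.stack([u, v], axis=1)

# A particle on the optical axis maps to the principal point (cx, cy).
pixels = project_particles([[0.0, 0.0, 5.0], [1.0, -1.0, 5.0]])
print(pixels)  # first row is [120., 120.]
```

In the paper's pipeline, such projected particle positions would form the simulated tactile image from which features are extracted and fed to the neural network; the actual rendering used by the authors may differ.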