Title
Visual Servoing for Pose Control of Soft Continuum Arm in a Structured Environment
Authors
Abstract
For soft continuum arms, visual servoing is a popular control strategy that relies on visual feedback to close the control loop. However, robust visual servoing is challenging because it requires reliable feature extraction from the image, an accurate control model, and sensors to perceive the shape of the arm, all of which can be hard to implement in a soft robot. This letter circumvents these challenges by presenting a deep neural network-based method to perform smooth and robust 3D positioning tasks on a soft arm via visual servoing, using a camera mounted at the distal end of the arm. A convolutional neural network is trained to predict the actuations required to achieve a desired pose in a structured environment. Integrated and modular approaches for estimating the actuations from the image are proposed and experimentally compared. A proportional control law is implemented to reduce the error between the desired and current image as seen by the camera. The model, together with the proportional feedback control, makes the described approach robust to several variations such as new targets, lighting, loads, and diminution of the soft arm. Furthermore, the model lends itself to being transferred to a new environment with minimal effort.
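The abstract describes a proportional control law driven by a CNN that maps camera images to actuations. The sketch below is a minimal, hypothetical illustration of such a loop, not the paper's implementation: `predict_actuation` is a stand-in stub for the trained network, and the gain `kp`, image sizes, and actuation dimension are all assumed for demonstration.

```python
import numpy as np

def predict_actuation(image):
    """Stand-in for the trained CNN that maps a camera image to the
    actuation estimated to produce that view (hypothetical stub)."""
    # A fixed linear map purely so the loop below runs; the paper
    # uses a convolutional neural network here.
    return 0.1 * image.mean(axis=(0, 1))

def servo_step(current_image, desired_image, u_current, kp=0.5):
    """One iteration of a proportional law: nudge the commanded
    actuation toward the estimate for the desired image."""
    a_desired = predict_actuation(desired_image)
    a_current = predict_actuation(current_image)
    return u_current + kp * (a_desired - a_current)

# Toy usage with random stand-in "images" (H x W x 3 arrays).
rng = np.random.default_rng(0)
desired = rng.random((64, 64, 3))
current = rng.random((64, 64, 3))
u = np.zeros(3)
u = servo_step(current, desired, u)  # updated actuation command
```

Iterating `servo_step` with fresh camera frames drives the image error, and hence the pose error, toward zero as long as the network's actuation estimates are consistent between the two views.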