Paper Title

Single-Shot Pose Estimation of Surgical Robot Instruments' Shafts from Monocular Endoscopic Images

Paper Authors

Masakazu Yoshimura, Murilo M. Marinho, Kanako Harada, Mamoru Mitsuishi

Paper Abstract

Surgical robots are used to perform minimally invasive surgery and alleviate much of the burden imposed on surgeons. Our group has developed a surgical robot to aid in the removal of tumors at the base of the skull via access through the nostrils. To avoid injuring the patient, a collision-avoidance algorithm is used that depends on an accurate model of the poses of the instruments' shafts. Given that the model's parameters can change over time owing to interactions between instruments and other disturbances, online estimation of the poses of the instruments' shafts is essential. In this work, we propose a new method to estimate the pose of a surgical instrument's shaft using a monocular endoscope. Our method is based on the use of an automatically annotated training dataset and an improved pose-estimation deep-learning architecture. In preliminary experiments on artificial images, we show that our method surpasses state-of-the-art vision-based markerless pose-estimation techniques, reducing the error by 55% in position estimation, 64% in pitch, and 69% in yaw.
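The abstract frames shaft pose estimation as a single-shot, learning-based regression from one monocular endoscopic frame. As a rough illustration of that idea only, the sketch below defines a small convolutional regressor in PyTorch that maps an image to a 5-dimensional output (3-D position plus pitch and yaw). The network layout, input resolution, and the class name ShaftPoseRegressor are assumptions made for illustration; this is not the architecture proposed in the paper.

```python
# Illustrative sketch only: a minimal single-shot pose regressor mapping a
# monocular endoscopic image to a shaft pose (3-D position plus pitch and yaw).
# Layer sizes and input resolution are assumptions, not the paper's design.
import torch
import torch.nn as nn


class ShaftPoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone (assumed 3x256x256 RGB input).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: x, y, z position and pitch, yaw angles (5 outputs).
        self.head = nn.Linear(64, 5)

    def forward(self, image):
        features = self.backbone(image).flatten(1)
        return self.head(features)


if __name__ == "__main__":
    model = ShaftPoseRegressor()
    dummy_frame = torch.randn(1, 3, 256, 256)  # stand-in for an endoscopic frame
    pose = model(dummy_frame)                  # tensor of shape (1, 5)
    position, pitch_yaw = pose[:, :3], pose[:, 3:]
    print(position.shape, pitch_yaw.shape)
```

Training such a regressor would additionally require a pose-labelled dataset (the paper uses automatic annotation) and a loss that treats the angular components appropriately; those details are beyond this sketch.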
