Paper Title
Weakly Supervised Airway Orifice Segmentation in Video Bronchoscopy
Paper Authors
Abstract
Video bronchoscopy is routinely conducted for biopsies of lung tissue suspected of cancer, for monitoring COPD patients, and for clarifying acute respiratory problems in intensive care units. Navigating the complex bronchial tree is particularly challenging and physically demanding, requiring long-term experience from physicians. This paper addresses the automatic segmentation of bronchial orifices in bronchoscopy videos. Deep learning-based approaches to this task are currently hampered by the lack of readily available ground-truth segmentation data. We therefore present a data-driven pipeline consisting of k-means clustering followed by a compact marker-based watershed algorithm, which generates airway instance segmentation maps from given depth images. In this way, these traditional algorithms serve as weak supervision for training a shallow CNN directly on RGB images, using only a phantom dataset. We evaluate the generalization capabilities of this model on two in-vivo datasets covering 250 frames from 21 different bronchoscopies. We demonstrate that its performance is comparable to that of models trained directly on in-vivo data, with an average error of 11 vs. 5 pixels for the detected centers of the airway segmentations at an image resolution of 128x128. Our quantitative and qualitative results indicate that, in the context of video bronchoscopy, phantom data and weak supervision using non-learning-based methods make it possible to gain a semantic understanding of airway structures.
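The weak-label generation step described in the abstract (k-means on depth values, followed by a marker-based watershed that splits the airway mask into orifice instances) could be sketched roughly as below. This is a minimal illustrative sketch, not the paper's exact implementation: the function name, the number of clusters, and the erosion depth used to derive watershed markers are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage
from skimage.segmentation import watershed

def weak_airway_labels(depth, n_clusters=3):
    """Generate weak airway instance labels from a depth image.

    Hypothetical sketch: k-means clusters the depth values, the deepest
    cluster is taken as airway lumen, and a marker-based watershed splits
    the lumen mask into separate orifice instances.
    """
    h, w = depth.shape
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(depth.reshape(-1, 1)).reshape(h, w)
    # The cluster with the largest mean depth is assumed to be airway lumen.
    deepest = np.argmax(km.cluster_centers_.ravel())
    mask = labels == deepest
    # Markers: connected components of the eroded mask seed the watershed,
    # so each orifice contributes one seed region (erosion depth is a guess).
    markers, _ = ndimage.label(ndimage.binary_erosion(mask, iterations=3))
    # Watershed on the inverted depth separates touching orifices into
    # instances; pixels outside the mask stay labeled 0.
    return watershed(-depth, markers=markers, mask=mask)
```

Maps produced this way would then serve as training targets for the shallow CNN operating on the corresponding RGB frames.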