Title

Label Refinement Network from Synthetic Error Augmentation for Medical Image Segmentation

Authors

Shuai Chen, Antonio Garcia-Uceda, Jiahang Su, Gijs van Tulder, Lennard Wolff, Theo van Walsum, Marleen de Bruijne

Abstract

Deep convolutional neural networks for image segmentation do not learn the label structure explicitly and may produce segmentations with an incorrect structure, e.g., with disconnected cylindrical structures in the segmentation of tree-like structures such as airways or blood vessels. In this paper, we propose a novel label refinement method to correct such errors from an initial segmentation, implicitly incorporating information about label structure. This method features two novel parts: 1) a model that generates synthetic structural errors, and 2) a label appearance simulation network that produces synthetic segmentations (with errors) that are similar in appearance to the real initial segmentations. Using these synthetic segmentations and the original images, the label refinement network is trained to correct errors and improve the initial segmentations. The proposed method is validated on two segmentation tasks: airway segmentation from chest computed tomography (CT) scans and brain vessel segmentation from 3D CT angiography (CTA) images of the brain. In both applications, our method significantly outperformed a standard 3D U-Net and other previous refinement approaches. Improvements are even larger when additional unlabeled data is used for model training. In an ablation study, we demonstrate the value of the different components of the proposed method.
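To illustrate the first novel component described above, the synthetic structural error model, here is a minimal NumPy sketch that corrupts a binary segmentation mask by zeroing small patches centered on foreground voxels, mimicking the disconnections in tubular structures that the label refinement network is trained to repair. The function name, patch-based error model, and toy volume are illustrative assumptions; the paper's actual error generation model and the label appearance simulation network are not reproduced here.

```python
import numpy as np

def synthesize_structural_errors(mask, n_errors=3, patch_size=5, seed=None):
    """Corrupt a binary mask by zeroing patches at random foreground
    locations, simulating disconnected-branch errors (illustrative
    sketch only, not the paper's actual error model)."""
    rng = np.random.default_rng(seed)
    corrupted = mask.copy()
    fg = np.argwhere(mask > 0)  # candidate error centers on the structure
    if len(fg) == 0:
        return corrupted
    half = patch_size // 2
    picks = rng.choice(len(fg), size=min(n_errors, len(fg)), replace=False)
    for center in fg[picks]:
        # Zero a small cube around the chosen voxel -> local disconnection
        sl = tuple(slice(max(c - half, 0), c + half + 1) for c in center)
        corrupted[sl] = 0
    return corrupted

# Toy "vessel": a thin line through a 3D volume
vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[16, 16, :] = 1
broken = synthesize_structural_errors(vol, n_errors=2, patch_size=5, seed=0)
```

Pairs of (image, corrupted mask) like this, after being passed through the label appearance simulation network, would serve as training input for the refinement network, with the clean mask as the target.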
