Paper Title
Cross-Modality Deep Feature Learning for Brain Tumor Segmentation
Paper Authors
Paper Abstract
Recent advances in machine learning and the prevalence of digital medical images have opened up an opportunity to address the challenging brain tumor segmentation (BTS) task using deep convolutional neural networks. However, unlike RGB image data, which are widely available, the medical image data used in brain tumor segmentation are relatively scarce in terms of data scale but contain richer information in terms of modality properties. To this end, this paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from multi-modality MRI data. The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale. The proposed framework consists of two learning processes: the cross-modality feature transition (CMFT) process and the cross-modality feature fusion (CMFF) process, which aim at learning rich feature representations by transferring knowledge across different modality data and by fusing knowledge from different modality data, respectively. Comprehensive experiments conducted on the BraTS benchmarks show that the proposed framework effectively improves brain tumor segmentation performance compared with both baseline methods and state-of-the-art methods.
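To make the fusion idea concrete, here is a minimal sketch of combining per-modality feature vectors into one representation. All names and the softmax-weighted-sum rule are illustrative assumptions for exposition only, not the paper's actual CMFF architecture, which operates on deep convolutional feature maps:

```python
# Illustrative sketch of cross-modality feature fusion (hypothetical names and
# fusion rule; NOT the paper's CMFF implementation). Each MRI modality
# (e.g. T1, T2, FLAIR) yields a feature vector; softmax-normalized weights
# blend them into a single fused representation.
import math

def softmax(scores):
    """Normalize raw importance scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_modalities(features, scores):
    """Weighted sum of per-modality feature vectors.

    features: dict modality -> feature vector (lists of equal length)
    scores:   dict modality -> raw (unnormalized) importance score
    """
    mods = list(features)
    weights = softmax([scores[m] for m in mods])
    dim = len(features[mods[0]])
    fused = [0.0] * dim
    for w, m in zip(weights, mods):
        for i in range(dim):
            fused[i] += w * features[m][i]
    return fused

feats = {"t1": [1.0, 0.0], "flair": [0.0, 1.0]}
fused = fuse_modalities(feats, {"t1": 0.0, "flair": 0.0})
# equal scores -> equal weights -> elementwise average: [0.5, 0.5]
```

In a real network the weights would be learned jointly with the segmentation loss, so that more informative modalities contribute more to the fused representation.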