Paper Title
ClamNet: Using contrastive learning with variable depth Unets for medical image segmentation
Paper Authors
Paper Abstract
Unets, along with fully convolutional networks (FCNs), have become the standard method for semantic segmentation of medical images. Unet++ was introduced as a variant of Unet to address some of the problems facing Unets and FCNs. Unet++ provides an ensemble of variable-depth Unets, eliminating the need for professionals to estimate the most suitable depth for a task. While Unet and all of its variants, including Unet++, aim to provide networks that train well without requiring large quantities of annotated data, none of them attempts to eliminate the need for pixel-wise annotated data altogether. Obtaining such data for each disease to be diagnosed comes at a high cost; hence such data is scarce. In this paper we use contrastive learning to train Unet++ for semantic segmentation of medical images from various sources, including magnetic resonance imaging (MRI) and computed tomography (CT), without the need for pixel-wise annotations. Here we describe the architecture of the proposed model and the training method used. This is still a work in progress, so we abstain from including results in this paper. The results and the trained model will be made available upon publication or in subsequent versions of this paper on arXiv.
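The abstract does not specify which contrastive objective is used, so as a point of reference only, the sketch below shows one common choice for contrastive pretraining: the NT-Xent (normalized temperature-scaled cross-entropy) loss, where two augmented views of the same image form a positive pair and all other samples in the batch act as negatives. The function name `nt_xent_loss`, the PyTorch setting, and the temperature value are illustrative assumptions, not the authors' stated method.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Illustrative NT-Xent contrastive loss (assumed, not taken from the paper).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Positive pairs are (z1[i], z2[i]); every other embedding in the batch is a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2N, D) stacked views
    sim = torch.mm(z, z.t()) / temperature            # (2N, 2N) scaled cosine similarities
    n = z1.size(0)
    # Mask self-similarity so a sample is never compared with itself.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))
    # For row i in [0, N) the positive sits at column i + N, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In a setup like the one the abstract describes, such a loss would typically be applied to embeddings produced by an encoder (e.g., the Unet++ encoder) from differently augmented MRI or CT slices, providing a training signal without pixel-wise labels.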