Paper Title

MixCL: Pixel label matters to contrastive learning

Paper Authors

Jun Li, Quan Quan, S. Kevin Zhou

Paper Abstract

Contrastive learning and self-supervised techniques have gained prevalence in computer vision over the past few years. They are essential for medical image analysis, which is notorious for its lack of annotations. Most existing self-supervised methods applied to natural imaging tasks focus on designing proxy tasks for unlabeled data. For example, contrastive learning is often based on the fact that an image and its transformed version share the same identity. However, pixel annotations contain much valuable information for medical image segmentation, which is largely ignored in contrastive learning. In this work, we propose a novel pre-training framework called Mixed Contrastive Learning (MixCL) that leverages both image identities and pixel labels for better modeling, by jointly maintaining identity consistency, label consistency, and reconstruction consistency. The model pre-trained in this way thus has more robust representations that characterize medical images. Extensive experiments demonstrate the effectiveness of the proposed method, improving the baseline Dice coefficient by 5.28% and 14.12% when 5% of the labeled Spleen data and 15% of the labeled BTCV data are used for fine-tuning, respectively.
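The abstract names three consistency objectives but does not give their exact formulation. Below is a minimal PyTorch-style sketch of how such a combined pre-training loss could look. Everything here is an illustrative assumption, not the authors' implementation: the function names (`info_nce`, `mixcl_style_loss`), the assumption that the model returns an embedding, segmentation logits, and a reconstruction, and the equal default weights.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss: two augmented views of the same image are
    positives (assumption: the identity-consistency term is of this
    common contrastive form). z1, z2: (B, D) embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def mixcl_style_loss(model, x1, x2, labels, w_id=1.0, w_lab=1.0, w_rec=1.0):
    """Hypothetical combination of the three consistencies named in the
    abstract. Assumes model(x) -> (embedding, seg_logits, reconstruction);
    the weights w_* are illustrative, not from the paper."""
    z1, seg1, rec1 = model(x1)                      # view 1 of the image
    z2, _, _ = model(x2)                            # view 2 (transformed version)
    loss_id = info_nce(z1, z2)                      # identity consistency
    loss_lab = F.cross_entropy(seg1, labels)        # label consistency (pixel labels)
    loss_rec = F.mse_loss(rec1, x1)                 # reconstruction consistency
    return w_id * loss_id + w_lab * loss_lab + w_rec * loss_rec
```

The point of the sketch is the structure: unlike identity-only contrastive pre-training, the pixel labels enter through a supervised segmentation term alongside the contrastive and reconstruction terms.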
