Paper Title

Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound

Paper Authors

Zeyu Fu, Jianbo Jiao, Robail Yasrab, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble

Paper Abstract

Self-supervised contrastive representation learning offers the advantage of learning meaningful visual representations from unlabeled medical datasets for transfer learning. However, applying current contrastive learning approaches to medical data without considering its domain-specific anatomical characteristics may lead to visual representations that are inconsistent in appearance and semantics. In this paper, we propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL), which incorporates anatomy information to augment the positive/negative pair sampling in a contrastive learning manner. The proposed approach is demonstrated for automated fetal ultrasound imaging tasks, enabling the positive pairs from the same or different ultrasound scans that are anatomically similar to be pulled together and thus improving the representation learning. We empirically investigate the effect of inclusion of anatomy information with coarse- and fine-grained granularity, for contrastive learning and find that learning with fine-grained anatomy information which preserves intra-class difference is more effective than its counterpart. We also analyze the impact of anatomy ratio on our AWCL framework and find that using more distinct but anatomically similar samples to compose positive pairs results in better quality representations. Experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective for learning representations that transfer well to three clinical downstream tasks, and achieves superior performance compared to ImageNet supervised and the current state-of-the-art contrastive learning methods. In particular, AWCL outperforms ImageNet supervised method by 13.8% and state-of-the-art contrastive-based method by 7.1% on a cross-domain segmentation task.
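The core idea described above — expanding the positive set so that samples from the same or different scans sharing the same (fine-grained) anatomy label are pulled together — can be sketched as a supervised-contrastive-style loss over anatomy labels. The snippet below is a minimal illustration under that assumption, not the authors' implementation; the temperature value, the normalization, and the SupCon-style averaging are choices made here for clarity.

```python
# Minimal sketch (not the authors' code) of anatomy-aware positive-pair
# selection for contrastive learning: embeddings that share the same
# fine-grained anatomy label -- even when they come from different
# ultrasound scans -- are treated as positives.
import torch
import torch.nn.functional as F


def anatomy_aware_contrastive_loss(embeddings: torch.Tensor,
                                   anatomy_labels: torch.Tensor,
                                   temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) projected features; anatomy_labels: (N,) class ids.

    The temperature and the per-anchor averaging are illustrative
    assumptions, not values taken from the paper.
    """
    z = F.normalize(embeddings, dim=1)          # compare in cosine space
    sim = z @ z.T / temperature                 # (N, N) similarity logits

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)      # exclude self-comparisons

    # Positives: every *other* sample carrying the same anatomy label.
    pos_mask = (anatomy_labels.unsqueeze(0) == anatomy_labels.unsqueeze(1)) & ~self_mask

    # Row-wise log-softmax, then average the log-probabilities of positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_count

    # Only anchors that actually have an anatomy-matched positive contribute.
    has_pos = pos_mask.any(dim=1)
    return per_anchor[has_pos].mean()
```

In practice the anatomy labels would come from the sonographer workflow annotations the paper builds on, and this term would be used alongside standard instance-discrimination augmentations; the coarse- vs. fine-grained comparison in the abstract corresponds to how finely `anatomy_labels` partitions the data.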
