Paper Title
Mutual Contrastive Low-rank Learning to Disentangle Whole Slide Image Representations for Glioma Grading
Paper Authors
Paper Abstract
Whole slide images (WSIs) provide valuable phenotypic information for the histological assessment and malignancy grading of tumors. WSI-based grading promises rapid diagnostic support and facilitates digital health. Currently, the most commonly used WSIs are derived from formalin-fixed paraffin-embedded (FFPE) and frozen sections. The majority of automatic tumor grading models are developed on FFPE sections, which can be affected by artifacts introduced during tissue processing. Frozen sections suffer from problems such as low quality, which may likewise hinder training within a single modality. To overcome these problems in single-modality training and achieve better multi-modal, discriminative representation disentanglement for brain tumors, we propose a mutual contrastive low-rank learning (MCL) scheme that integrates FFPE and frozen sections for glioma grading. We first design a mutual learning scheme to jointly optimize model training on FFPE and frozen sections. Within this scheme, we design a normalized modality contrastive loss (NMC-loss) that promotes disentangling the multi-modal complementary representations of FFPE and frozen sections from the same patient. To reduce intra-class variance and increase the inter-class margin at both intra- and inter-patient levels, we introduce a low-rank (LR) loss. Our experiments show that the proposed scheme achieves better performance than models trained on each single modality or on mixed modalities, and even improves the feature extraction of classical attention-based multiple instance learning (MIL) methods. The combination of NMC-loss and low-rank loss outperforms other typical contrastive loss functions.
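The abstract does not give the exact formulation of the two losses, but their described roles admit a minimal sketch. Below, the NMC-loss is assumed to take an InfoNCE-like form pairing the normalized FFPE and frozen-section embeddings of the same patient as positives, and the low-rank loss is assumed to be a nuclear-norm penalty on each class's feature matrix (a standard way to shrink intra-class variance). Function names, the temperature parameter, and both concrete forms are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

def _l2_normalize(x):
    """Row-wise L2 normalization of an (n, d) embedding matrix."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

def nmc_loss(ffpe_emb, frozen_emb, temperature=0.1):
    """Assumed InfoNCE-style form of the NMC-loss: for each patient,
    the FFPE/frozen pair from the same patient is the positive and
    all cross-patient pairs are negatives."""
    f = _l2_normalize(ffpe_emb)    # (n_patients, d)
    g = _l2_normalize(frozen_emb)  # (n_patients, d)
    logits = f @ g.T / temperature
    # log-softmax over each row; positives sit on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def low_rank_loss(features, labels):
    """Assumed form of the LR loss: mean nuclear norm (sum of singular
    values) of each class's feature matrix, penalizing the effective
    rank and hence the intra-class spread."""
    classes = np.unique(labels)
    total = sum(np.linalg.norm(features[labels == c], ord='nuc')
                for c in classes)
    return total / len(classes)
```

With matched FFPE/frozen embeddings the diagonal similarities dominate and `nmc_loss` is small; for unrelated embeddings it approaches `log(n_patients)`, which is the behavior the mutual learning scheme exploits.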