Paper Title


Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction based on Semi-Supervised Contrastive Learning

Paper Authors

Luyi Han, Yunzhi Huang, Tao Tan, Ritse Mann

Paper Abstract


Domain adaptation has been widely adopted to transfer styles across multi-vendor and multi-center data, as well as to complement missing modalities. In this challenge, we propose an unsupervised domain adaptation framework for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction. We learn a shared representation from both ceT1 and hrT2 images and recover the other modality from the latent representation, and we also use proxy tasks of VS segmentation and brain parcellation to enforce consistency of image structures during domain adaptation. After generating the missing modality, an nnU-Net model is used for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve model performance for Koos grade prediction. On the CrossMoDA validation-phase leaderboard, our method ranked 4th in Task 1 with a mean Dice score of 0.8394 and 2nd in Task 2 with a macro-averaged mean squared error of 0.3941. Our code is available at https://github.com/fiy2W/cmda2022.superpolymerization.
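The contrastive pre-training step mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that); it is a generic NT-Xent (SimCLR-style) contrastive loss in NumPy, where the function name, shapes, and temperature value are illustrative assumptions: two augmented views of each sample are embedded, and each embedding must identify its sibling view among all other embeddings in the batch.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Sketch of an NT-Xent (SimCLR-style) contrastive loss.

    z1, z2 : (N, D) arrays -- embeddings of the same N samples under
    two augmented views. Returns the mean cross-entropy of picking the
    positive (sibling-view) embedding among all 2N-1 candidates.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                 # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # L2-normalize rows
    sim = z @ z.T / temperature                          # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                       # exclude self-comparison
    # The positive for row i is its other view: i+n in the first half, i-n after.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Numerically stable log-softmax, then pick the positive's log-probability.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

In a semi-supervised setting, a loss of this form is typically combined with a supervised term on the labeled subset before fine-tuning the classifier, here for Koos grade prediction.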
