Paper Title
Continual Hippocampus Segmentation with Transformers
Paper Authors
Paper Abstract
In clinical settings, where acquisition conditions and patient populations change over time, continual learning is key for ensuring the safe use of deep neural networks. Yet most existing work focuses on convolutional architectures and image classification. Instead, radiologists prefer to work with segmentation models that outline specific regions of interest, for which Transformer-based architectures are gaining traction. The self-attention mechanism of Transformers could mitigate catastrophic forgetting, opening the way for more robust medical image segmentation. In this work, we explore how recently proposed Transformer mechanisms for semantic segmentation behave in sequential learning scenarios, and analyse how best to adapt continual learning strategies for this setting. Our evaluation on hippocampus segmentation shows that Transformer mechanisms mitigate catastrophic forgetting for medical image segmentation compared to purely convolutional architectures, and demonstrates that regularising ViT modules should be done with caution.
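The closing remark, that regularising ViT modules should be done with caution, suggests exempting the Transformer blocks from importance-based continual learning penalties. Below is a minimal sketch of that idea, assuming an EWC-style quadratic penalty in PyTorch; the function name, the fisher/old_params dictionaries, and the "vit." parameter prefix are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

def ewc_penalty(model: nn.Module,
                fisher: dict,       # parameter name -> importance estimate
                old_params: dict,   # parameter name -> value after previous task
                skip_prefixes: tuple = ("vit.",)) -> torch.Tensor:
    # EWC-style quadratic penalty: keeps parameters close to their values
    # after the previous task, weighted by a Fisher-information importance
    # estimate. Parameters whose names match `skip_prefixes` (here, a
    # hypothetical "vit." prefix for the Transformer blocks) are left
    # unregularised, reflecting the caution about regularising ViT modules.
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name.startswith(skip_prefixes):
            continue  # let the Transformer blocks adapt freely
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return penalty

In use, the penalty would simply be added to the segmentation objective, e.g. loss = dice_loss + lambda_ewc * ewc_penalty(model, fisher, old_params), where lambda_ewc trades off stability on previous tasks against plasticity on the current one.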