Paper Title
SynthStrip: Skull-Stripping for Any Brain Image
Paper Authors
Paper Abstract
The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines -- all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
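To make the synthesis idea concrete, below is a minimal NumPy sketch of how one might generate a synthetic training image from an anatomical label map: each label receives a randomly sampled intensity distribution, and the result is corrupted with a crude low-frequency bias field and noise. The function name and all parameter ranges are hypothetical choices for illustration, not SynthStrip's actual generator, which is described in the paper itself.

```python
import numpy as np

def synthesize_image(label_map, rng=None):
    """Sample a synthetic grayscale image from an anatomical label map.

    Each label gets a random mean/std intensity, so a network trained on
    such images sees contrasts far beyond any single real MRI protocol.
    Illustrative sketch only; not SynthStrip's implementation.
    """
    rng = rng or np.random.default_rng()
    image = np.zeros(label_map.shape, dtype=np.float32)

    # Assign every anatomical label its own random intensity distribution.
    for label in np.unique(label_map):
        mask = label_map == label
        mean = rng.uniform(0.0, 255.0)  # hypothetical intensity range
        std = rng.uniform(0.0, 25.0)
        image[mask] = rng.normal(mean, std, size=mask.sum())

    # Corrupt with a smooth multiplicative bias field plus additive noise,
    # loosely mimicking acquisition artifacts (a linear-ramp approximation
    # of a low-frequency field, chosen here for brevity).
    coords = [np.linspace(-1.0, 1.0, n) for n in label_map.shape]
    grids = np.meshgrid(*coords, indexing="ij")
    bias = 1.0 + 0.3 * sum(rng.uniform(-1.0, 1.0) * g for g in grids)
    image = image * bias + rng.normal(0.0, 5.0, size=image.shape)
    return np.clip(image, 0.0, 255.0)

# Example: a toy 3-D label map with three "structures".
labels = np.zeros((32, 32, 32), dtype=np.int32)
labels[8:24, 8:24, 8:24] = 1
labels[12:20, 12:20, 12:20] = 2
synthetic = synthesize_image(labels)
```

Because the per-label intensities are resampled on every call, repeated invocations yield the same anatomy under wildly different apparent "contrasts", which is the core mechanism the abstract credits for contrast-agnostic generalization.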