Paper Title


Unsupervised pre-training of graph transformers on patient population graphs

Authors

Chantal Pellegrini, Nassir Navab, Anees Kazi

Abstract


Pre-training has shown success in different areas of machine learning, such as Computer Vision, Natural Language Processing (NLP), and medical imaging. However, it has not been fully explored for clinical data analysis. Although an immense number of clinical records is collected, data and labels can still be scarce for data gathered in small hospitals or for rare diseases. In such scenarios, pre-training on a larger set of unlabelled clinical data could improve performance. In this paper, we propose novel unsupervised pre-training techniques, inspired by masked language modeling (MLM), designed for heterogeneous, multi-modal clinical data and patient outcome prediction, leveraging graph deep learning over population graphs. To this end, we further propose a graph-transformer-based network designed to handle heterogeneous clinical data. By combining masking-based pre-training with a transformer-based network, we translate the success of masking-based pre-training in other domains to heterogeneous clinical data. We show the benefit of our pre-training method in both a self-supervised and a transfer learning setting on three medical datasets: TADPOLE, MIMIC-III, and a Sepsis Prediction Dataset. We find that our proposed pre-training methods help in modeling the data at both the patient and population level and improve performance on different fine-tuning tasks across all datasets.
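To make the MLM-inspired idea concrete, the following is a minimal sketch of masking-based pre-training applied to tabular patient features: random feature entries are hidden and a model is trained to reconstruct them, with the loss computed only on the masked positions. This is an illustrative NumPy sketch, not the paper's implementation; the masking probability, mask value, and the use of mean squared error on continuous features are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_features(x, mask_prob=0.15, mask_value=0.0):
    """MLM-style masking for a (num_patients, num_features) matrix.

    A random subset of entries is replaced by `mask_value`; the boolean
    mask marks which entries the model must later reconstruct. The 15%
    rate and zero mask value are illustrative choices, not the paper's
    exact settings.
    """
    mask = rng.random(x.shape) < mask_prob
    x_masked = np.where(mask, mask_value, x)
    return x_masked, mask

def reconstruction_loss(pred, target, mask):
    """Mean squared error over masked entries only, mirroring how
    MLM-style objectives score only the hidden positions."""
    if not mask.any():
        return 0.0
    return float(np.mean((pred[mask] - target[mask]) ** 2))

# Toy population: 4 patients, 5 continuous features.
x = rng.normal(size=(4, 5))
x_masked, mask = mask_features(x)
# Unmasked entries are untouched; a perfect model recovering the
# original values would incur zero loss on the masked positions.
assert (x_masked[~mask] == x[~mask]).all()
assert reconstruction_loss(x, x, mask) == 0.0
```

In the paper's setting, the reconstruction would be produced by the proposed graph-transformer network operating on the population graph rather than read off directly; the sketch only shows the masking objective itself.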
