Paper Title


Learning structures of the French clinical language: development and validation of word embedding models using 21 million clinical reports from electronic health records

Authors

Basile Dura, Charline Jean, Xavier Tannier, Alice Calliger, Romain Bey, Antoine Neuraz, Rémi Flicoteaux

Abstract


Background: Clinical studies using real-world data may benefit from exploiting clinical reports, a particularly rich albeit unstructured medium. To that end, natural language processing can extract relevant information. Methods based on transfer learning using pre-trained language models have achieved state-of-the-art results in most NLP applications; however, publicly available models lack exposure to speciality languages, especially in the medical field.

Objective: We aimed to evaluate the impact of adapting a language model to French clinical reports on downstream medical NLP tasks.

Methods: We leveraged a corpus of 21M clinical reports collected from August 2017 to July 2021 at the Greater Paris University Hospitals (APHP) to produce two CamemBERT architectures on speciality language: one retrained from scratch and the other using CamemBERT as its initialisation. We used two French annotated medical datasets to compare our language models to the original CamemBERT network, evaluating the statistical significance of improvement with the Wilcoxon test.

Results: Our models pretrained on clinical reports increased the average F1-score on APMed (an APHP-specific task) by 3 percentage points to 91%, a statistically significant improvement. They also achieved performance comparable to the original CamemBERT on QUAERO. These results hold true for the fine-tuned and from-scratch versions alike, starting from very few pre-training samples.

Conclusions: We confirm previous literature showing that adapting generalist pre-trained language models such as CamemBERT on speciality corpora improves their performance for downstream clinical NLP tasks. Our results suggest that retraining from scratch does not induce a statistically significant performance gain compared to fine-tuning.
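The abstract reports significance via the Wilcoxon test on paired model scores. A minimal sketch of that kind of comparison, using `scipy.stats.wilcoxon` on hypothetical paired F1-scores (the values below are illustrative, not the paper's data):

```python
# Illustrative only: Wilcoxon signed-rank test on paired F1-scores from
# two models evaluated on the same runs. Values are made up for the sketch.
from scipy.stats import wilcoxon

# Hypothetical per-run F1-scores for the baseline and the adapted model
f1_baseline = [0.880, 0.872, 0.885, 0.878, 0.890, 0.876, 0.883, 0.879]
f1_adapted = [0.910, 0.903, 0.914, 0.910, 0.918, 0.909, 0.910, 0.913]

# Paired, non-parametric test: does the adapted model differ significantly?
stat, p_value = wilcoxon(f1_adapted, f1_baseline)
print(f"Wilcoxon statistic = {stat}, p-value = {p_value:.5f}")
```

Because the test is paired and rank-based, it makes no normality assumption about the score differences, which suits small numbers of evaluation runs.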
