Paper Title


AutoMeTS: The Autocomplete for Medical Text Simplification

Paper Authors

Hoang Van, David Kauchak, Gondy Leroy

Abstract


The goal of text simplification (TS) is to transform difficult text into a version that is easier to understand and more broadly accessible to a wide variety of readers. In some domains, such as healthcare, fully automated approaches cannot be used since information must be accurately preserved. Instead, semi-automated approaches can be used that assist a human writer in simplifying text faster and at a higher quality. In this paper, we examine the application of autocomplete to text simplification in the medical domain. We introduce a new parallel medical data set consisting of aligned English Wikipedia with Simple English Wikipedia sentences and examine the application of pretrained neural language models (PNLMs) on this dataset. We compare four PNLMs (BERT, RoBERTa, XLNet, and GPT-2), and show how the additional context of the sentence to be simplified can be incorporated to achieve better results (6.17% absolute improvement over the best individual model). We also introduce an ensemble model that combines the four PNLMs and outperforms the best individual model by 2.1%, resulting in an overall word prediction accuracy of 64.52%.
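The abstract describes an ensemble that combines the word predictions of the four PNLMs. A minimal sketch of one simple combination strategy, majority voting over each model's top candidate (the paper does not specify its exact ensembling method, and the prediction lists below are hypothetical stand-ins for real model outputs):

```python
from collections import Counter

def ensemble_predict(model_predictions):
    """Combine the top-1 next-word predictions of several language models
    by majority vote; ties are broken by the order the models are listed."""
    votes = Counter(model_predictions)
    return votes.most_common(1)[0][0]

# Hypothetical top-1 predictions from BERT, RoBERTa, XLNet, and GPT-2
# for the next word of a partially typed simplification.
predictions = ["medicine", "medication", "medicine", "drug"]
print(ensemble_predict(predictions))  # "medicine" wins with two votes
```

In practice each model would score a full vocabulary distribution over the next word given both the partially typed simplification and the original difficult sentence as context; voting over top-1 candidates is only the simplest way to aggregate them.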
