Paper Title
Sample Efficient Approaches for Idiomaticity Detection
Paper Authors
Paper Abstract
Deep neural models, in particular Transformer-based pre-trained language models, require a significant amount of data to train. This need for data tends to lead to problems when dealing with idiomatic multiword expressions (MWEs), which are inherently less frequent in natural text. As such, this work explores sample-efficient methods of idiomaticity detection. In particular, we study the impact of Pattern-Exploiting Training (PET), a few-shot method of classification, and BERTRAM, an efficient method of creating contextual embeddings, on the task of idiomaticity detection. In addition, to further explore generalisability, we focus on the identification of MWEs not present in the training data. Our experiments show that while these methods improve performance on English, they are much less effective on Portuguese and Galician, leading to an overall performance roughly on par with vanilla mBERT. Regardless, we believe sample-efficient methods for both identifying and representing potentially idiomatic MWEs are very encouraging and hold significant potential for future exploration.
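For readers unfamiliar with PET, the minimal sketch below illustrates its core idea of recasting binary idiomaticity detection as a cloze task that a masked language model can score; in full PET the model is additionally fine-tuned on the few labelled examples through this cloze objective. The pattern and verbalizer here are hypothetical illustrations, not the prompts used in this paper, and mBERT is chosen only because it is the paper's baseline model.

```python
# Illustrative sketch of a PET-style cloze reformulation for idiomaticity
# detection (not the authors' code). The sentence containing a potentially
# idiomatic MWE is wrapped in a pattern with a [MASK] slot, and the masked
# LM's scores for the verbalizer tokens decide the label.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-multilingual-cased"  # mBERT, the paper's baseline
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)

def pattern(sentence: str, mwe: str) -> str:
    # Hypothetical pattern; the choice of prompt wording is a design decision.
    return (f'{sentence} Question: is "{mwe}" used literally here? '
            f'Answer: {tokenizer.mask_token}.')

# Hypothetical verbalizer; assumes each word is a single token in the
# model's vocabulary.
VERBALIZER = {"literal": "Yes", "idiomatic": "No"}

def classify(sentence: str, mwe: str) -> str:
    inputs = tokenizer(pattern(sentence, mwe), return_tensors="pt")
    # Position of the single [MASK] token in the input sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Compare only the verbalizer tokens and return the higher-scoring label.
    scores = {
        label: logits[tokenizer.convert_tokens_to_ids(tok)].item()
        for label, tok in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

print(classify("After the third pay cut, half the team decided to jump ship.",
               "jump ship"))
```

Because the classification head is replaced by the LM's own vocabulary predictions, this formulation needs no new parameters and can exploit the pre-trained model even with very few labelled MWE examples, which is what makes it sample efficient.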