Paper Title

Explicit Alignment Objectives for Multilingual Bidirectional Encoders

Authors

Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, Graham Neubig

Abstract

Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and XLMR (Conneau et al., 2020) have proven to be impressively effective at enabling transfer learning of NLP systems from high-resource languages to low-resource languages. This success comes despite the fact that there is no explicit objective to align the contextual embeddings of words/sentences with similar meanings across languages together in the same space. In this paper, we present a new method for learning multilingual encoders, AMBER (Aligned Multilingual Bidirectional EncodeR). AMBER is trained on additional parallel data using two explicit alignment objectives that align the multilingual representations at different granularities. We conduct experiments on zero-shot cross-lingual transfer learning for different tasks, including sequence tagging, sentence retrieval, and sentence classification. Experimental results show that AMBER obtains gains of up to 1.1 average F1 score on sequence tagging and up to 27.3 average accuracy on retrieval over the XLMR-large model, which has 3.2x the parameters of AMBER. Our code and models are available at http://github.com/junjiehu/amber.
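The abstract states that AMBER is trained on parallel data with explicit alignment objectives at different granularities (word level and sentence level). As a rough illustration of what a sentence-level alignment objective over parallel data can look like, the sketch below mean-pools contextual embeddings of translation pairs from a shared multilingual encoder and applies a contrastive loss. The model name (bert-base-multilingual-cased), the mean pooling, and the temperature-scaled contrastive loss are illustrative assumptions for this sketch, not AMBER's exact training objectives.

```python
# Minimal sketch of a sentence-level alignment objective on parallel data.
# Assumptions (not AMBER's exact recipe): an mBERT-style shared encoder,
# mean pooling over tokens, and an in-batch contrastive loss.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

def sentence_alignment_loss(src_sentences, tgt_sentences, temperature=0.05):
    # Encode both sides of the parallel batch with the same encoder.
    src = tokenizer(src_sentences, padding=True, truncation=True, return_tensors="pt")
    tgt = tokenizer(tgt_sentences, padding=True, truncation=True, return_tensors="pt")
    src_emb = mean_pool(encoder(**src).last_hidden_state, src["attention_mask"])
    tgt_emb = mean_pool(encoder(**tgt).last_hidden_state, tgt["attention_mask"])
    # Contrastive objective: each source sentence should be closest to its own translation,
    # with the other sentences in the batch acting as negatives.
    sim = F.normalize(src_emb, dim=-1) @ F.normalize(tgt_emb, dim=-1).T
    labels = torch.arange(sim.size(0))
    return F.cross_entropy(sim / temperature, labels)

# Toy usage with a single English-French pair.
loss = sentence_alignment_loss(["The cat sleeps."], ["Le chat dort."])
print(loss.item())
```

In this sketch, minimizing the loss pulls the pooled embeddings of parallel sentences together in the shared space, which is the intuition behind adding an explicit alignment signal on top of standard masked-language-model pre-training.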
