Paper Title
No Language Left Behind: Scaling Human-Centered Machine Translation
Paper Authors
Paper Abstract
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe, high-quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.
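The conditional compute model mentioned above routes each token through only a small subset of expert feed-forward networks, so model capacity grows with the number of experts while per-token compute stays roughly constant. Below is a minimal sketch of a top-2 sparsely gated mixture-of-experts layer in PyTorch; the class name, dimensions, and routing details are illustrative assumptions for exposition, not the NLLB-200 implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparselyGatedMoE(nn.Module):
    """Illustrative top-2 sparsely gated mixture-of-experts feed-forward layer."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Learned gate scores each token against every expert.
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an ordinary position-wise feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)            # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)      # keep top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize kept weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                       # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k : k + 1] * expert(x[mask])
        return out

# Usage: route a batch of 16 token representations through 8 experts.
layer = SparselyGatedMoE(d_model=512, d_ff=2048, num_experts=8)
y = layer(torch.randn(16, 512))  # y has shape (16, 512)
```

The design choice sketched here (dispatch to the top-k experts, then mix their outputs by renormalized gate weights) is the standard sparsely gated MoE pattern; production systems add load-balancing losses and capacity limits per expert, which are omitted for brevity.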