Paper Title


Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

Paper Authors

Wu, Hongqiu, Ding, Ruixue, Zhao, Hai, Chen, Boli, Xie, Pengjun, Huang, Fei, Zhang, Min

Paper Abstract


Multiple pre-training objectives fill the gap in understanding capability left by single-objective language modeling, serving the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios. However, learning multiple training objectives in a single model is challenging due to their unknown relative significance as well as potential conflicts between them. Empirical studies show that current objective sampling in an ad-hoc manual setting makes the learned language representation barely converge to the desired optimum. Thus, we propose MOMETAS, a novel adaptive sampler based on meta-learning, which learns the latent sampling pattern over arbitrary pre-training objectives. The design is lightweight, with negligible additional training overhead. To validate our approach, we adopt five objectives and conduct continual pre-training with BERT-base and BERT-large models, where MOMETAS demonstrates universal performance gains over rule-based sampling strategies on 14 natural language processing tasks.
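To make the idea of an adaptive objective sampler concrete, here is a minimal, generic bandit-style sketch. This is an illustrative EXP3-like scheme, not the paper's actual MOMETAS algorithm; the objective names and the reward signal (e.g. validation-loss improvement) are hypothetical placeholders:

```python
import math
import random

def make_adaptive_sampler(objectives, lr=0.1):
    """Keep one weight per training objective, sample objectives
    proportionally to softmax(weights), and update the chosen
    objective's weight from an observed reward signal."""
    weights = {name: 0.0 for name in objectives}

    def sample():
        total = sum(math.exp(w) for w in weights.values())
        probs = {n: math.exp(w) / total for n, w in weights.items()}
        r, acc = random.random(), 0.0
        for name, p in probs.items():
            acc += p
            if r < acc:
                return name, probs
        return name, probs  # guard against floating-point edge case

    def update(name, reward, probs):
        # Importance-weighted update (EXP3-style) so rarely sampled
        # objectives are not unfairly penalized.
        weights[name] += lr * reward / probs[name]

    return sample, update

# Usage: pick an objective per training step, then feed back a reward.
sample, update = make_adaptive_sampler(["MLM", "SOP", "NSP"])
name, probs = sample()
update(name, reward=0.5, probs=probs)
```

A real meta-learned sampler would replace the hand-written update rule with a learned one, but the control flow (sample an objective, train a step, observe feedback, adjust the sampling distribution) is the same.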
