Paper Title
Learning Better Masking for Better Language Model Pre-training
Paper Authors
Paper Abstract
Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs). Existing PrLMs commonly adopt a Random-Token Masking strategy in which a fixed masking ratio is applied and different contents are masked with equal probability throughout the entire training run. However, the model may be affected in complex ways by its pre-training status, which changes as training proceeds. In this paper, we show that such time-invariant MLM settings for the masking ratio and masked content are unlikely to deliver an optimal outcome, which motivates us to explore the influence of time-variant MLM settings. We propose two scheduled masking approaches that adaptively tune the masking ratio and masked content at different training stages, improving pre-training efficiency and effectiveness as verified on downstream tasks. Our work is a pioneering study of time-variant masking strategies for both ratio and content, and it offers a better understanding of how the masking ratio and masked content influence MLM pre-training.
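To make the idea of time-variant masking concrete, below is a minimal, illustrative sketch of a scheduled masking ratio for MLM data preparation. The linear decay shape, the 30%-to-15% ratio range, and the helper names (`scheduled_masking_ratio`, `mask_tokens`) are assumptions for illustration only; the abstract does not specify the paper's actual schedules or how masked content is selected.

```python
import random

def scheduled_masking_ratio(step, total_steps, start_ratio=0.30, end_ratio=0.15):
    """Linearly anneal the masking ratio over training.

    Illustrative schedule only: the schedules proposed in the paper are not
    described in the abstract.
    """
    progress = min(max(step / total_steps, 0.0), 1.0)
    return start_ratio + (end_ratio - start_ratio) * progress

def mask_tokens(token_ids, ratio, mask_id, special_ids=frozenset()):
    """Randomly corrupt `ratio` of the maskable tokens with `mask_id`.

    Returns the corrupted sequence and MLM labels, where -100 marks positions
    excluded from the loss, mirroring the common MLM setup.
    """
    maskable = [i for i, t in enumerate(token_ids) if t not in special_ids]
    n_mask = max(1, round(len(maskable) * ratio))
    chosen = set(random.sample(maskable, min(n_mask, len(maskable))))
    corrupted = [mask_id if i in chosen else t for i, t in enumerate(token_ids)]
    labels = [t if i in chosen else -100 for i, t in enumerate(token_ids)]
    return corrupted, labels

# The masking ratio shrinks as training proceeds (time-variant, not fixed).
for step in (0, 50_000, 100_000):
    ratio = scheduled_masking_ratio(step, total_steps=100_000)
    corrupted, labels = mask_tokens([101, 7, 8, 9, 10, 102], ratio,
                                    mask_id=103, special_ids={101, 102})
    print(step, round(ratio, 3), corrupted)
```

A content-aware variant would replace the uniform `random.sample` with a sampling distribution that changes with the training stage, in the spirit of the scheduled masked-content approach described in the abstract.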