Paper Title

PipeFisher: Efficient Training of Large Language Models Using Pipelining and Fisher Information Matrices

Paper Authors

Kazuki Osawa, Shigang Li, Torsten Hoefler

Paper Abstract

Pipeline parallelism enables efficient training of Large Language Models (LLMs) on large-scale distributed accelerator clusters. Yet, pipeline bubbles during startup and tear-down reduce the utilization of accelerators. Although efficient pipeline schemes with micro-batching and bidirectional pipelines have been proposed to maximize utilization, a significant number of bubbles cannot be filled using synchronous forward and backward passes. To address this problem, we suggest that extra work be assigned to the bubbles to gain auxiliary benefits in LLM training. As an example in this direction, we propose PipeFisher, which assigns the work of K-FAC, a second-order optimization method based on the Fisher information matrix, to the bubbles to accelerate convergence. In Phase 1 pretraining of BERT-Base and -Large models, PipeFisher reduces the (simulated) training time to 50-75% compared to training with a first-order optimizer by greatly improving the accelerator utilization and benefiting from the improved convergence by K-FAC.
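To make the kind of extra work concrete, the sketch below shows a single K-FAC preconditioning step for one linear layer: the Fisher information matrix of the layer is approximated by a Kronecker product of an activation covariance A and an output-gradient covariance G, and the gradient is preconditioned with their inverses. This is a minimal illustration assuming only NumPy; the function name `kfac_precondition`, the shapes, and the damping value are hypothetical and are not taken from the paper's implementation.

```python
# Minimal, illustrative K-FAC preconditioning for one linear layer (not the
# paper's code). K-FAC approximates the layer's Fisher information matrix as
# the Kronecker product A ⊗ G of two small covariance matrices.
import numpy as np

def kfac_precondition(grad_W, acts, grad_out, damping=1e-3):
    """Precondition the weight gradient of a single linear layer with K-FAC.

    grad_W   : (out_dim, in_dim) gradient of the loss w.r.t. the weights
    acts     : (batch, in_dim)   inputs to the layer for this mini-batch
    grad_out : (batch, out_dim)  gradients w.r.t. the layer outputs
    """
    batch = acts.shape[0]
    # Kronecker factors of the approximate Fisher information matrix.
    A = acts.T @ acts / batch          # (in_dim, in_dim) activation covariance
    G = grad_out.T @ grad_out / batch  # (out_dim, out_dim) gradient covariance
    # Tikhonov damping keeps the factors well conditioned and invertible.
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
    # (A ⊗ G)^-1 applied to the gradient corresponds to G^-1 · grad_W · A^-1.
    return G_inv @ grad_W @ A_inv
```

Building and inverting these per-layer Kronecker factors is the curvature work that, per the abstract, PipeFisher schedules into pipeline bubbles that synchronous forward and backward passes cannot fill.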
