Paper Title

SSD-SGD: Communication sparsification for distributed deep learning training

Authors

Yemao Xu, Dezun Dong, Yawei Zhao, Weixia Xu, Xiangke Liao

Abstract

Intensive communication and synchronization costs for gradients and parameters are the well-known bottleneck of distributed deep learning training. Based on the observation that synchronous SGD (SSGD) obtains good convergence accuracy while asynchronous SGD (ASGD) delivers faster raw training speed, we propose Several Steps Delay SGD (SSD-SGD) to combine their merits, aiming to tackle the communication bottleneck via communication sparsification. SSD-SGD performs both global synchronous updates on the parameter servers and asynchronous local updates on the workers in each periodic iteration. This periodic and flexible synchronization lets SSD-SGD achieve good convergence accuracy and fast training speed. To the best of our knowledge, we strike a new balance between synchronization quality and communication sparsification, and improve the trade-off between accuracy and training speed. Specifically, the core components of SSD-SGD include a proper warm-up stage, a steps-delay stage, and our novel Global gradient for Local Update (GLU) algorithm. GLU is critical for the local update operations, effectively compensating for the delayed local weights. Furthermore, we implement SSD-SGD on the MXNet framework and comprehensively evaluate its performance with the CIFAR-10 and ImageNet datasets. Experimental results show that SSD-SGD accelerates distributed training by up to 110% under different experimental configurations while achieving good convergence accuracy.
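
The abstract describes the training schedule only at a high level. The snippet below is a rough, non-authoritative single-process sketch in Python that simulates one worker and a parameter "server" on a toy least-squares problem, just to make the warm-up / steps-delay structure concrete. It is not the authors' MXNet implementation: the push/pull round-trips are simulated in-process, the hyperparameters (`warmup_steps`, `delay_steps`, the 0.5 blending coefficient) are illustrative, and the GLU compensation rule here is an assumed stand-in for the formula defined in the paper.

```python
"""Minimal single-process sketch of the SSD-SGD training schedule.

NOT the paper's MXNet implementation. One worker and a parameter
"server" are simulated in-process on a toy least-squares problem,
and the GLU rule below is an illustrative stand-in that mixes the
last observed global update direction into each local step.
"""
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 8))   # toy regression inputs
b = rng.normal(size=64)        # toy regression targets

def grad(w):
    """Gradient of the least-squares loss 0.5 * ||A w - b||^2."""
    return A.T @ (A @ w - b)

lr = 1e-3
warmup_steps = 20   # stage 1: plain synchronous SGD (assumed length)
delay_steps = 4     # stage 2: local steps between syncs (the "several steps delay")
total_steps = 200

server_w = np.zeros(8)        # weights held by the parameter server
local_w = server_w.copy()     # worker's (possibly delayed) local copy
global_dir = np.zeros(8)      # last observed global update direction, for GLU

for t in range(total_steps):
    if t < warmup_steps:
        # Warm-up stage: every step is a synchronous push/pull round-trip.
        server_w -= lr * grad(local_w)   # "push": server applies the gradient
        local_w = server_w.copy()        # "pull": worker gets fresh weights
    else:
        # Steps-delay stage: several cheap local updates, then one global sync,
        # so communication happens once per (delay_steps + 1) updates.
        for _ in range(delay_steps):
            # GLU-style compensation (assumed form): blend the stale global
            # direction with the fresh local gradient.
            local_w -= lr * (grad(local_w) + 0.5 * global_dir)
        prev = server_w.copy()
        server_w -= lr * grad(local_w)       # global synchronous update
        global_dir = (prev - server_w) / lr  # effective global gradient direction
        local_w = server_w.copy()            # pull synchronized weights

print("final loss:", 0.5 * np.linalg.norm(A @ server_w - b) ** 2)
```

The point the sketch tries to surface is the sparsification itself: after warm-up, the worker communicates only once per `delay_steps + 1` weight updates, and the (assumed) GLU term is what keeps those delayed local updates from drifting too far from the globally synchronized weights.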
