Paper Title
OD-SGD: One-step Delay Stochastic Gradient Descent for Distributed Training
Paper Authors
Paper Abstract
The training of modern deep neural networks calls for large amounts of computation, which is often provided by GPUs or other specialized accelerators. To scale out and achieve faster training, two update algorithms are mainly applied in distributed training: the Synchronous SGD algorithm (SSGD) and the Asynchronous SGD algorithm (ASGD). SSGD reaches a good convergence point, but the synchronization barrier slows down its training speed. ASGD trains faster, but its convergence point is worse than that of SSGD. To exploit the advantages of both, we propose a novel technique named One-step Delay SGD (OD-SGD), which combines their strengths in the training process and thereby achieves a convergence point similar to that of SSGD and a training speed similar to that of ASGD. To the best of our knowledge, this is the first attempt to combine the features of SSGD and ASGD to improve distributed training performance. Each iteration of OD-SGD consists of a global update on the parameter server node and local updates on the worker nodes; the local update is introduced to refresh and compensate the one-step-delayed local weights. We evaluate the proposed algorithm on the MNIST, CIFAR-10 and ImageNet datasets. Experimental results show that OD-SGD obtains similar or even slightly better accuracy than SSGD, while its training speed is much faster and even exceeds that of ASGD.
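The abstract only outlines the global/local update structure; the sketch below is a minimal single-process illustration of that idea, not the paper's actual algorithm. All names (`global_update`, `local_update`, `od_sgd_iteration`) and the plain-SGD update rules are assumptions for illustration; the paper's real compensation rule and hyper-parameters may differ.

```python
import numpy as np

def global_update(server_weights, grad, lr):
    # Global SGD step on the parameter server, using the gradient pushed by a worker.
    return server_weights - lr * grad

def local_update(local_weights, grad, lr):
    # Local compensation step on the worker: the globally updated weights it pulls
    # are one step stale by the time the next iteration starts, so the worker applies
    # its own latest gradient to compensate the delayed local copy.
    return local_weights - lr * grad

def od_sgd_iteration(server_weights, local_weights, compute_grad, lr=0.1):
    # 1. Worker computes a gradient on its (one-step-delayed) local weights.
    grad = compute_grad(local_weights)
    # 2. Worker pushes the gradient; the parameter server performs the global update.
    server_weights = global_update(server_weights, grad, lr)
    # 3. Worker pulls the new global weights and compensates them locally,
    #    so the next forward/backward pass does not wait on a synchronization barrier.
    local_weights = local_update(server_weights, grad, lr)
    return server_weights, local_weights

if __name__ == "__main__":
    # Usage: minimize f(w) = ||w||^2 as a stand-in for a training loss.
    w_server = np.ones(4)
    w_local = w_server.copy()
    for _ in range(50):
        w_server, w_local = od_sgd_iteration(w_server, w_local,
                                             compute_grad=lambda w: 2 * w)
    print(w_server)  # approaches the zero vector
```

In a real deployment the push, global update, and pull would run asynchronously across worker and server processes; the single-process loop above only shows how the local compensation step interleaves with the global update within one iteration.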