Paper Title

Age-Based Coded Computation for Bias Reduction in Distributed Learning

Authors

Emre Ozfatura, Baturalp Buyukates, Deniz Gunduz, Sennur Ulukus

Abstract

Coded computation can be used to speed up distributed learning in the presence of straggling workers. Partial recovery of the gradient vector can further reduce the computation time at each iteration; however, this can result in biased estimators, which may slow down convergence, or even cause divergence. Estimator bias will be particularly prevalent when the straggling behavior is correlated over time, which results in the gradient estimators being dominated by a few fast servers. To mitigate biased estimators, we design a timely dynamic encoding framework for partial recovery that includes an ordering operator that changes the codewords and computation orders at workers over time. To regulate the recovery frequencies, we adopt an age metric in the design of the dynamic encoding scheme. We show through numerical results that the proposed dynamic encoding strategy increases the timeliness of the recovered computations, which, as a result, reduces the bias in model updates and accelerates convergence compared to conventional static partial recovery schemes.
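To make the mechanism described in the abstract concrete, the following Python sketch simulates the age-driven ordering idea under assumed parameters: each data partition keeps an age counting iterations since its computation was last recovered, and an ordering operator (here a simple stalest-first rotation, a hypothetical stand-in for the paper's operator) reshuffles the computation order across workers at every iteration, so that partial recovery under time-correlated straggling does not keep starving the same partitions. The worker counts, straggling model, and recovery rule below are illustrative assumptions, not the paper's exact encoding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WORKERS = 10        # number of workers
N_PARTITIONS = 10     # data partitions whose gradients must be recovered
TASKS_PER_WORKER = 3  # redundancy: each worker is assigned 3 partitions
TASKS_COMPLETED = 2   # a non-straggler only finishes its first 2 tasks in time
P_STRAGGLE = 0.3      # extra per-iteration straggling probability
N_ITERATIONS = 50

ages = np.zeros(N_PARTITIONS)                          # iterations since last recovery
persistent_stragglers = rng.random(N_WORKERS) < 0.3    # time-correlated slow workers

for it in range(N_ITERATIONS):
    # Ordering operator (assumed): stalest partitions are placed first, and the
    # assignment is rotated across workers so no partition is permanently tied
    # to a persistently slow worker.
    order = np.argsort(-ages)
    assignment = [
        [order[(w + j) % N_PARTITIONS] for j in range(TASKS_PER_WORKER)]
        for w in range(N_WORKERS)
    ]

    # Straggling this iteration: persistent stragglers plus random slowdowns.
    straggling = persistent_stragglers | (rng.random(N_WORKERS) < P_STRAGGLE)

    # Partial recovery: keep whatever the non-stragglers finish in time.
    recovered = set()
    for w in range(N_WORKERS):
        if not straggling[w]:
            recovered.update(int(p) for p in assignment[w][:TASKS_COMPLETED])

    # Age update: recovered partitions reset to zero, the rest grow stale.
    ages += 1
    if recovered:
        ages[list(recovered)] = 0

print("max / mean partition age:", ages.max(), ages.mean())
```

In this toy simulation, replacing the age-driven rotation with a fixed assignment leaves the partitions held mainly by persistent stragglers unrecovered for long stretches, which is a simple illustration of the estimator bias the abstract describes.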
