Paper Title
Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
Paper Authors
Paper Abstract
异步学习方案最近引起了关注,尤其是在联合学习(FL)设置中,较慢的客户可以严重阻碍学习过程。本文中,我们提出了\ texttt {asyncdrop},这是一种新型异步FL框架,利用辍学的正则化来处理分布式设置中的设备异质性。总体而言,与最先行的方法相比,\ texttt {asyncdrop}的性能更好,同时导致沟通和培训时间较小。关键思想围绕着全球模型创建``子模型'',并根据设备异质性将培训分发给工人。我们严格地证明,这种方法可以是理论上可以表征的。我们实施了我们的方法并将其与其他异步基线进行比较,无论是设计还是通过将现有同步的FL算法调整为异步方案。从经验上讲,\ texttt {asyncdrop}减少了通信成本和培训时间,同时匹配或提高了不同的非i.i.d的最终测试准确性。 FL场景。
Asynchronous learning protocols have regained attention lately, especially in the Federated Learning (FL) setup, where slower clients can severely impede the learning process. Herein, we propose \texttt{AsyncDrop}, a novel asynchronous FL framework that utilizes dropout regularization to handle device heterogeneity in distributed settings. Overall, \texttt{AsyncDrop} achieves better performance compared to state-of-the-art asynchronous methodologies, while incurring lower communication and training-time overheads. The key idea revolves around creating ``submodels'' out of the global model and distributing their training to workers, based on device heterogeneity. We rigorously justify that such an approach can be theoretically characterized. We implement our approach and compare it against other asynchronous baselines, both designed as such and obtained by adapting existing synchronous FL algorithms to asynchronous scenarios. Empirically, \texttt{AsyncDrop} reduces communication cost and training time, while matching or improving the final test accuracy in diverse non-i.i.d. FL scenarios.
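To make the key idea concrete, below is a minimal, hypothetical sketch of how a server might carve channel-wise ``submodels'' out of a global model via structured dropout and merge asynchronous, possibly stale, client updates back in. This is not the paper's actual algorithm: the two-layer model, the `make_submodel` and `merge_update` helpers, and the `1 / (1 + staleness)` down-weighting rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical global model: two dense layers, weights stored per layer.
global_w = {"fc1": rng.normal(size=(32, 16)), "fc2": rng.normal(size=(16, 10))}

def make_submodel(global_w, keep_prob):
    """Sample a structured dropout mask over hidden units and extract the
    corresponding slice of the global weights for one client (illustrative)."""
    hidden = global_w["fc1"].shape[1]
    kept = rng.choice(hidden, size=max(1, int(keep_prob * hidden)), replace=False)
    sub = {"fc1": global_w["fc1"][:, kept], "fc2": global_w["fc2"][kept, :]}
    return sub, kept

def merge_update(global_w, sub_update, kept, lr=1.0, staleness=0):
    """Asynchronously apply a client's submodel update back into the global
    model, down-weighting stale updates (hypothetical staleness rule)."""
    scale = lr / (1 + staleness)
    global_w["fc1"][:, kept] += scale * sub_update["fc1"]
    global_w["fc2"][kept, :] += scale * sub_update["fc2"]

# A slower client would receive a smaller submodel (lower keep probability).
sub, kept = make_submodel(global_w, keep_prob=0.5)
fake_update = {k: 0.01 * rng.normal(size=v.shape) for k, v in sub.items()}
merge_update(global_w, fake_update, kept, staleness=3)
```

Because each client trains only its slice of the global weights, slower devices can be assigned smaller submodels, and their updates can arrive and be merged at any time without blocking faster workers.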