Paper Title

Adaptive Control of Client Selection and Gradient Compression for Efficient Federated Learning

Paper Authors

Zhida Jiang, Yang Xu, Hongli Xu, Zhiyuan Wang, Chen Qian

Paper Abstract

Federated learning (FL) allows multiple clients to cooperatively train models without disclosing their local data. However, existing works fail to jointly address several practical concerns in FL: limited communication resources, dynamic network conditions, and heterogeneous client properties, all of which slow down the convergence of FL. To tackle these challenges, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, the parameter server (PS) selects a representative client subset, taking statistical heterogeneity into account, and sends the global model to these clients. After local training, the selected clients upload compressed model updates matched to their capabilities to the PS for aggregation, which significantly alleviates the communication load and mitigates the straggler effect. We theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the derived convergence rate, we develop an iteration-based algorithm to jointly optimize client selection and compression ratio decisions using submodular maximization and linear programming. Extensive experiments on both real-world prototypes and simulations show that FedCG can provide up to a 5.3$\times$ speedup compared to other methods.
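The abstract describes one FedCG round: the PS selects a representative client subset, each selected client trains locally, uploads a compressed update matched to its capability, and the PS aggregates. Below is a minimal sketch of such a round in Python/NumPy, assuming top-k sparsification as the compression operator and a toy least-squares objective; the function names (`topk_compress`, `fedcg_round`) and the fixed per-client ratios are illustrative assumptions and do not reproduce the paper's actual selection and compression-ratio optimization via submodular maximization and linear programming.

```python
import numpy as np

def topk_compress(update, ratio):
    """Keep only the largest-magnitude `ratio` fraction of entries.

    Top-k sparsification is a common gradient-compression operator;
    it stands in here for FedCG's capability-matched compression.
    """
    k = max(1, int(ratio * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

def local_update(global_model, data, lr=0.1):
    """Hypothetical local training: one gradient step on least squares."""
    X, y = data
    grad = X.T @ (X @ global_model - y) / len(y)
    return -lr * grad  # model delta to be uploaded

def fedcg_round(global_model, selected_clients, ratios):
    """One FedCG-style round: selected clients train locally, upload
    compressed deltas at their assigned ratios, and the PS averages them."""
    deltas = [topk_compress(local_update(global_model, c), r)
              for c, r in zip(selected_clients, ratios)]
    return global_model + np.mean(deltas, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, true_w = 20, np.random.default_rng(0).normal(size=20)
    # Each "selected" client holds a small local dataset (heterogeneous in size/noise).
    clients = []
    for _ in range(4):
        X = rng.normal(size=(50, d))
        clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
    w = np.zeros(d)
    ratios = [0.1, 0.3, 0.5, 1.0]  # per-client compression ratios (assumed, not optimized)
    for _ in range(200):
        w = fedcg_round(w, clients, ratios)
    print("parameter error:", np.linalg.norm(w - true_w))
```

In this sketch the compression ratios are fixed by hand; in FedCG they would instead be chosen jointly with the client subset by the iteration-based optimization the abstract mentions.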
