Paper Title
GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning
Paper Authors
Paper Abstract
Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy. However, the substantial communication overhead of FL makes training challenging when edge devices have limited network bandwidth. Existing work to optimize FL bandwidth overlooks downstream transmission and does not account for FL client sampling. In this paper, we propose GlueFL, a framework that incorporates new client sampling and model compression algorithms to mitigate the low download bandwidths of FL clients. GlueFL prioritizes recently used clients and bounds the number of changed positions in compression masks in each round. Across three popular FL datasets and three state-of-the-art strategies, GlueFL reduces downstream client bandwidth by 27% on average and reduces training time by 29% on average.
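The two mechanisms named in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering of those ideas, not the paper's actual algorithm: the function names `sticky_sample` and `shifted_topk_mask`, the parameters `n_sticky`, `k`, and `max_new`, and the NumPy toy setup are all assumptions made for exposition; GlueFL's real design is specified in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sticky_sample(all_clients, prev_round, n_total, n_sticky):
    """Hypothetical sketch of "prioritizes recently used clients":
    reuse n_sticky clients from the previous round's cohort and fill
    the remaining slots uniformly from the rest of the population."""
    sticky = list(rng.choice(prev_round,
                             size=min(n_sticky, len(prev_round)),
                             replace=False))
    sticky_set = set(sticky)
    pool = [c for c in all_clients if c not in sticky_set]
    fresh = list(rng.choice(pool, size=n_total - len(sticky), replace=False))
    return sticky + fresh

def shifted_topk_mask(update, prev_mask, k, max_new):
    """Hypothetical sketch of "bounds the number of changed positions in
    compression masks": build a top-k sparsification mask that differs
    from the previous round's mask in at most max_new coordinates, so a
    returning client only downloads the newly masked positions.
    Assumes prev_mask marks exactly k coordinates and 0 < max_new < k."""
    magnitudes = np.abs(update)
    # Keep the (k - max_new) strongest coordinates already in the old mask.
    kept_scores = np.where(prev_mask, magnitudes, -np.inf)
    keep_idx = np.argsort(kept_scores)[-(k - max_new):]
    # Admit at most max_new of the strongest coordinates outside the old mask.
    new_scores = np.where(prev_mask, -np.inf, magnitudes)
    new_idx = np.argsort(new_scores)[-max_new:]
    mask = np.zeros_like(prev_mask)
    mask[keep_idx] = True
    mask[new_idx] = True
    return mask

# Toy usage: 50 clients, a 1000-dim model, 10% mask density,
# and at most 20 mask positions allowed to change per round.
clients = list(range(50))
cohort = sticky_sample(clients, prev_round=list(range(10)),
                       n_total=10, n_sticky=6)

d, k, max_new = 1000, 100, 20
prev_mask = np.zeros(d, dtype=bool)
prev_mask[rng.choice(d, size=k, replace=False)] = True
update = rng.normal(size=d)
mask = shifted_topk_mask(update, prev_mask, k, max_new)
assert mask.sum() == k and (mask & ~prev_mask).sum() <= max_new
```

Under these assumptions, the bandwidth saving comes from the final assertion: a sticky client that already holds the previous mask needs only the at-most-`max_new` newly masked coordinates downstream, rather than a full fresh top-k mask each round.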