Paper Title
Federated Learning with Flexible Control
Paper Authors
Paper Abstract
Federated learning (FL) enables distributed model training from local data collected by users. In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem. Existing works have separately considered different configurations for making FL more efficient, such as infrequent transmission of model updates, client subsampling, and compression of update vectors. However, an important open problem is how to jointly apply and tune these control knobs in a single FL algorithm, achieving the best performance by allowing a high degree of freedom in the control decisions. In this paper, we address this problem and propose FlexFL, an FL algorithm with multiple options that can be adjusted flexibly. FlexFL allows both an arbitrary rate of local computation at each client and an arbitrary amount of communication between clients and the server, making both computation and communication resource consumption adjustable. We prove a convergence upper bound for this algorithm. Based on this result, we further propose a stochastic optimization formulation and an algorithm for determining the control decisions that (approximately) minimize the convergence bound while conforming to constraints on resource consumption. The advantages of our approach are also verified experimentally.
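To make the three control knobs in the abstract concrete, the snippet below is a rough, hypothetical sketch (not the paper's actual FlexFL algorithm) of one FL round that exposes all of them: `local_steps` for the local computation rate, `sample_frac` for client subsampling, and `topk_frac` for update compression (top-k sparsification is used here as one possible compressor). The `Client` class, its `grad` method, and all parameter names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class Client:
    """Toy client whose local objective is 0.5 * ||w - target||^2 (illustrative only)."""
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)

    def grad(self, w):
        # Gradient of the quadratic local loss.
        return w - self.target

def topk_compress(vec, frac):
    """Keep the largest-magnitude fraction of entries, zero the rest (one possible compressor)."""
    k = max(1, int(frac * vec.size))
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out = np.zeros_like(vec)
    out[idx] = vec[idx]
    return out

def fl_round(global_w, clients, local_steps, sample_frac, topk_frac, lr=0.1, rng=None):
    """One round with three adjustable knobs: local computation, sampling, compression."""
    rng = rng or np.random.default_rng()
    m = max(1, int(sample_frac * len(clients)))                # client subsampling
    sampled = rng.choice(len(clients), size=m, replace=False)
    updates = []
    for i in sampled:
        w = global_w.copy()
        for _ in range(local_steps):                           # adjustable local computation rate
            w = w - lr * clients[i].grad(w)                    # one local SGD step
        updates.append(topk_compress(w - global_w, topk_frac)) # compressed communication
    return global_w + np.mean(updates, axis=0)                 # server-side averaging

# Usage: three clients pulling the model toward different targets.
clients = [Client([1.0, 0.0]), Client([0.0, 1.0]), Client([1.0, 1.0])]
w = np.zeros(2)
rng = np.random.default_rng(0)
for _ in range(50):
    w = fl_round(w, clients, local_steps=2, sample_frac=0.67, topk_frac=0.5, rng=rng)
```

In the paper's setting, these knobs would not be fixed constants as above; they would be chosen by the proposed stochastic optimization so as to (approximately) minimize the convergence bound subject to resource-consumption constraints.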