Paper Title

Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression

Paper Authors

Laurent Condat, Ivan Agarský, Peter Richtárik

Paper Abstract

In federated learning, a large number of users are involved in a global learning task, in a collaborative way. They alternate local computations and two-way communication with a distant orchestrating server. Communication, which can be slow and costly, is the main bottleneck in this setting. To reduce the communication load and therefore accelerate distributed gradient descent, two strategies are popular: 1) communicate less frequently; that is, perform several iterations of local computations between the communication rounds; and 2) communicate compressed information instead of full-dimensional vectors. We propose the first algorithm for distributed optimization and federated learning, which harnesses these two strategies jointly and converges linearly to an exact solution in the strongly convex setting, with a doubly accelerated rate: our algorithm benefits from the two acceleration mechanisms provided by local training and compression, namely a better dependency on the condition number of the functions and on the dimension of the model, respectively.
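To make the two strategies concrete, here is a minimal toy sketch of naively combining them in distributed gradient descent: each client runs several local gradient steps between rounds (strategy 1) and uploads only a rand-k sparsified update (strategy 2). This is not the authors' algorithm; all names and parameters (`local_grad`, `rand_k`, `K`, `lr`, the quadratic objectives) are illustrative assumptions, and this naive combination lacks the drift/compression-error corrections that give the proposed method its linear convergence to the exact solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: each client i holds a strongly convex objective
# f_i(x) = 0.5 * ||A_i x - b_i||^2 + 0.5 * mu * ||x||^2.
n_clients, d, m, mu = 10, 50, 20, 0.1
A = [rng.normal(size=(m, d)) for _ in range(n_clients)]
b = [rng.normal(size=m) for _ in range(n_clients)]

def local_grad(i, x):
    """Gradient of client i's local objective at x."""
    return A[i].T @ (A[i] @ x - b[i]) + mu * x

def rand_k(v, k):
    """Unbiased rand-k sparsification: keep k random coordinates, rescale by d/k."""
    mask = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    mask[idx] = v.size / k
    return v * mask

x = np.zeros(d)
lr, K, k, rounds = 1e-3, 5, 10, 200
for r in range(rounds):
    updates = []
    for i in range(n_clients):
        x_loc = x.copy()
        for _ in range(K):                     # strategy 1: local steps, no communication
            x_loc -= lr * local_grad(i, x_loc)
        updates.append(rand_k(x_loc - x, k))   # strategy 2: compressed uplink message
    x += np.mean(updates, axis=0)              # server aggregates compressed updates

avg_grad = sum(local_grad(i, x) for i in range(n_clients)) / n_clients
print(f"final average gradient norm: {np.linalg.norm(avg_grad):.4f}")
```

The rand-k compressor is scaled by d/k so that it is unbiased in expectation, the standard assumption in analyses of compressed communication; even so, the residual compression variance and client drift keep this naive scheme from converging exactly, which is precisely the gap the paper's algorithm closes.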
