Paper Title

Federated Composite Optimization

Paper Authors

Honglin Yuan, Manzil Zaheer, Sashank Reddi

Paper Abstract

Federated Learning (FL) is a distributed learning paradigm that scales on-device learning collaboratively and privately. Standard FL algorithms such as FedAvg are primarily geared towards smooth unconstrained settings. In this paper, we study the Federated Composite Optimization (FCO) problem, in which the loss function contains a non-smooth regularizer. Such problems arise naturally in FL applications that involve sparsity, low-rank, monotonicity, or more general constraints. We first show that straightforward extensions of primal algorithms such as FedAvg are not well-suited for FCO since they suffer from the "curse of primal averaging," resulting in poor convergence. As a solution, we propose a new primal-dual algorithm, Federated Dual Averaging (FedDualAvg), which by employing a novel server dual averaging procedure circumvents the curse of primal averaging. Our theoretical analysis and empirical experiments demonstrate that FedDualAvg outperforms the other baselines.
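
FCO minimizes a composite objective min_w f(w) + ψ(w), where f averages the clients' smooth losses and ψ is a non-smooth regularizer such as λ‖w‖₁ for sparsity. The sketch below illustrates the server dual averaging idea described in the abstract under simplifying assumptions: an ℓ1 regularizer, full client participation, a single constant step size, and exact gradients. The function names (feddualavg_round, soft_threshold) and the step-counter bookkeeping are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def feddualavg_round(z_server, client_grads, local_steps, eta, lam, t):
    """One communication round of a simplified FedDualAvg (l1 regularizer).

    z_server     -- server dual state (running sum of scaled gradients)
    client_grads -- per-client gradient oracles, grad(w) -> ndarray
    local_steps  -- number of local steps per client per round
    eta          -- client step size
    lam          -- l1 regularization weight
    t            -- global step counter (scales the composite prox term)
    """
    new_duals = []
    for grad in client_grads:
        z, t_local = z_server.copy(), t
        for _ in range(local_steps):
            t_local += 1
            # Primal retrieval via the composite mirror map:
            #   w = argmin_w <z, w> + (1/2)||w||^2 + eta * t * lam * ||w||_1,
            # which for the l1 case is soft-thresholding of -z.
            w = soft_threshold(-z, eta * t_local * lam)
            # Dual update: accumulate the scaled local gradient.
            z = z + eta * grad(w)
        new_duals.append(z)
    # The server averages *dual* states rather than primal iterates,
    # which is the step that sidesteps the "curse of primal averaging".
    return np.mean(new_duals, axis=0), t + local_steps
```

The design point the sketch tries to make concrete: in a naive FedAvg extension that runs local proximal gradient steps and averages primal iterates, the server average of sparse client models is generally dense. Keeping the aggregation in the dual (gradient) space and applying the proximal map only at primal retrieval preserves the structure ψ is meant to induce.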
