Paper Title
Federated Block Coordinate Descent Scheme for Learning Global and Personalized Models
Paper Authors
Paper Abstract
In federated learning, models are learned from users' data, which are held private on their edge devices, by aggregating them in the service provider's "cloud" to obtain a global model. Such a global model is of great commercial value in, e.g., improving the customers' experience. In this paper we focus on two possible areas of improvement over the state of the art. First, we take the differences between user habits into account and propose a quadratic penalty-based formulation for efficient learning of a global model that allows local models to be personalized. Second, we address the latency issue associated with heterogeneous training times on edge devices by exploiting a hierarchical structure that models communication not only between the cloud and edge devices, but also within the cloud. Specifically, we devise a tailored block coordinate descent-based computation scheme, accompanied by communication protocols for both synchronous and asynchronous cloud settings. We characterize the theoretical convergence rate of the algorithm and provide a variant that performs better empirically. We also prove that the asynchronous protocol, inspired by multi-agent consensus techniques, has the potential for large latency gains over a synchronous setting when the edge-device updates are intermittent. Finally, we provide experimental results that not only corroborate the theory, but also show that the system leads to faster convergence of personalized models on the edge devices compared to the state of the art.
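To make the abstract's key ingredients concrete, the sketch below illustrates a generic quadratic-penalty personalization objective of the form min over (w, w_1, ..., w_n) of sum_i [ f_i(w_i) + (lambda/2) ||w_i - w||^2 ], where w is the global model and each w_i is a device's personalized model, optimized by block coordinate descent (device blocks, then a cloud aggregation block). This is a hypothetical toy illustration under stated assumptions, not the paper's exact formulation, penalty, update order, or communication protocol; the loss f_i, the data, and all variable names here are invented for the example.

```python
import numpy as np

# Hypothetical sketch: block coordinate descent (BCD) on a quadratic-penalty
# personalization objective. NOT the paper's exact method; f_i is assumed to
# be a local least-squares loss on synthetic per-device data.
rng = np.random.default_rng(0)
n_devices, dim, lam = 5, 3, 1.0

# Synthetic per-device problems: f_i(w) = 0.5 * ||A_i w - b_i||^2.
A = [rng.normal(size=(20, dim)) for _ in range(n_devices)]
b = [rng.normal(size=20) for _ in range(n_devices)]

w_global = np.zeros(dim)                      # global ("cloud") model w
w_local = [np.zeros(dim) for _ in range(n_devices)]  # personalized models w_i

def objective():
    """Full penalized objective: sum_i f_i(w_i) + (lam/2)||w_i - w||^2."""
    return sum(
        0.5 * np.sum((A[i] @ w_local[i] - b[i]) ** 2)
        + 0.5 * lam * np.sum((w_local[i] - w_global) ** 2)
        for i in range(n_devices)
    )

obj_start = objective()
for _ in range(50):
    # Device block: each w_i exactly minimizes its own block, solving
    # (A_i^T A_i + lam I) w_i = A_i^T b_i + lam * w_global.
    for i in range(n_devices):
        lhs = A[i].T @ A[i] + lam * np.eye(dim)
        rhs = A[i].T @ b[i] + lam * w_global
        w_local[i] = np.linalg.solve(lhs, rhs)
    # Cloud block: w minimizes the penalty term alone, i.e. the average of
    # the w_i (standing in for a synchronous cloud aggregation step).
    w_global = np.mean(w_local, axis=0)

obj_end = objective()
```

Because each block update exactly minimizes the (convex) objective over its own variables, the objective value is non-increasing across iterations; the penalty weight `lam` trades off personalization (small `lam`, local models fit local data) against consensus (large `lam`, local models pulled toward the global model).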