Paper Title

Personalizing or Not: Dynamically Personalized Federated Learning with Incentives

Paper Authors

Zichen Ma, Yu Lu, Wenye Li, Shuguang Cui

Paper Abstract

Personalized federated learning (FL) facilitates collaborations between multiple clients to learn personalized models without sharing private data. The mechanism mitigates the statistical heterogeneity commonly encountered in the system, i.e., non-IID data over different clients. Existing personalized algorithms generally assume all clients volunteer for personalization. However, potential participants might still be reluctant to personalize models since personalized models might not work well. In this case, clients choose to use the global model instead. To avoid making unrealistic assumptions, we introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL. This dynamically personalized FL technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better. We show that the algorithmic pipeline in DyPFL guarantees good convergence performance, allowing it to outperform alternative personalized methods in a broad range of conditions, including variations in heterogeneity, number of clients, local epochs, and batch sizes.
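A minimal sketch of the per-client choice the abstract describes, assuming each client compares the validation performance of its personalized model against the shared global model and keeps whichever is better, with the personalization rate being the fraction that choose personalization. This is not the paper's actual DyPFL algorithm or incentive scheme; all names below (choose_models, personalization_rate, ClientDecision) are hypothetical.

```python
# Sketch only (not DyPFL itself): each client adopts its personalized
# model only when it beats the shared global model on held-out data;
# the personalization rate is the fraction of clients that do so.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ClientDecision:
    client_id: int
    use_personalized: bool  # True if the personalized model wins


def choose_models(
    client_ids: List[int],
    eval_global: Callable[[int], float],        # hypothetical: global-model validation accuracy on client i
    eval_personalized: Callable[[int], float],  # hypothetical: personalized-model validation accuracy on client i
) -> List[ClientDecision]:
    """Each client keeps whichever model performs better on its own data."""
    return [ClientDecision(i, eval_personalized(i) > eval_global(i))
            for i in client_ids]


def personalization_rate(decisions: List[ClientDecision]) -> float:
    """Fraction of clients willing to train/use personalized models."""
    return sum(d.use_personalized for d in decisions) / len(decisions)


# Toy usage: three clients with fixed validation accuracies.
if __name__ == "__main__":
    global_acc = {0: 0.80, 1: 0.75, 2: 0.90}
    personal_acc = {0: 0.85, 1: 0.70, 2: 0.95}
    decisions = choose_models([0, 1, 2], global_acc.get, personal_acc.get)
    print(personalization_rate(decisions))  # clients 0 and 2 personalize -> 0.666...
```

In this toy run, client 1's personalized model underperforms the global one, so it falls back to the global model, which is the dynamic opt-in/opt-out behavior the abstract highlights.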
