Paper Title
Personalized Federated Learning with First Order Model Optimization
Paper Authors
Paper Abstract
While federated learning traditionally aims to train a single global model across decentralized local datasets, one model may not always be ideal for all participating clients. Here we propose an alternative, where each client only federates with other relevant clients to obtain a stronger model per client-specific objectives. To achieve this personalization, rather than computing a single model average with constant weights for the entire federation as in traditional FL, we efficiently calculate optimal weighted model combinations for each client, based on figuring out how much a client can benefit from another's model. We do not assume knowledge of any underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest, enabling greater flexibility for personalization. We evaluate and characterize our method on a variety of federated settings, datasets, and degrees of local data heterogeneity. Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
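The abstract's core idea of per-client weighted model combinations can be made concrete with a small sketch. The snippet below is an illustrative, first-order-style scheme under assumptions not spelled out in the abstract: models are represented as flat NumPy parameter vectors, `val_loss` is a hypothetical callable evaluating a model on the client's own target distribution, and each received model gets a weight proportional to how much it lowers that client's validation loss, normalized by parameter distance. It is a minimal sketch of the weighting idea, not the paper's exact algorithm.

```python
import numpy as np

def personalized_update(theta_i, received, val_loss, eps=1e-12):
    """Combine other clients' models with per-client weights based on
    how much each received model improves this client's validation loss.

    theta_i   : this client's current parameter vector (np.ndarray)
    received  : list of parameter vectors from other clients
    val_loss  : hypothetical callable, parameters -> scalar validation loss
    """
    base = val_loss(theta_i)
    weights = []
    for theta_j in received:
        # Positive weight only if theta_j lowers client i's validation loss;
        # dividing by the parameter distance gives a first-order "benefit rate".
        gain = (base - val_loss(theta_j)) / (np.linalg.norm(theta_j - theta_i) + eps)
        weights.append(max(gain, 0.0))
    total = sum(weights)
    if total == 0.0:
        # No received model helps this client: keep the local model unchanged.
        return theta_i
    w = np.array(weights) / total
    # Move toward the helpful models, in proportion to their estimated benefit.
    return theta_i + sum(wj * (theta_j - theta_i)
                         for wj, theta_j in zip(w, received))
```

Because the weights are computed against each client's own validation objective, two clients in the same federation can assign very different weights to the same uploaded model, which is what allows per-client personalization without assuming any known client similarity.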