Paper Title

Multi-Model Federated Learning

Authors

Neelkamal Bhuyan, Sharayu Moharir

Abstract

Federated learning is a form of distributed learning whose key challenge is the non-identically distributed nature of the data across participating clients. In this paper, we extend federated learning to the setting where multiple unrelated models are trained simultaneously. Specifically, each client can train any one of M models at a time, and the server maintains, for each of the M models, a global version that is typically a suitably averaged aggregate of the versions computed by the clients. We propose multiple policies for assigning learning tasks to clients over time. In the first policy, we extend the widely studied FedAvg to multi-model learning by allotting models to clients in an i.i.d. stochastic manner. In addition, we propose two new policies for client selection in the multi-model federated setting that make decisions based on the current local loss for each client-model pair. We compare the performance of the policies on tasks involving synthetic and real-world data and characterize the performance of the proposed policies. The key takeaway from our work is that the proposed multi-model policies perform better than, or at least as well as, single-model training using FedAvg.
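The assignment policies described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: each model is treated as a flat NumPy parameter vector, `local_update` and `local_loss` are placeholders standing in for real client-side training and evaluation, and `greedy_loss_assignment` is a hypothetical loss-based rule (the abstract says only that the proposed policies use the current local loss of each client-model pair, so the exact decision rule here is an assumption).

```python
# Minimal sketch of multi-model federated averaging. Assumptions (not from
# the paper): each model is a flat NumPy vector, `local_update` stands in
# for real client-side training, and `local_loss` for a client's loss on
# its private data under a given model.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, client_id):
    # Placeholder for local SGD on the client's private data.
    return weights + 0.01 * rng.standard_normal(weights.shape)

def local_loss(weights, client_id):
    # Placeholder loss; a real client would evaluate its own data.
    return float(np.sum(weights ** 2)) + 0.1 * client_id

def iid_assignment(M, num_clients):
    # First policy from the abstract: each client independently draws
    # one of the M models uniformly at random each round.
    return rng.integers(0, M, size=num_clients)

def greedy_loss_assignment(models, num_clients):
    # Hypothetical loss-based rule: each client takes the model on which
    # it currently has the highest local loss.
    losses = np.array([[local_loss(w, c) for w in models]
                       for c in range(num_clients)])
    return losses.argmax(axis=1)

def multi_model_round(models, assignment):
    # FedAvg step per model: replace each global model with the average
    # of the updates from the clients that trained it this round.
    for m in range(len(models)):
        clients_m = np.flatnonzero(assignment == m)
        if clients_m.size:
            models[m] = np.mean(
                [local_update(models[m], c) for c in clients_m], axis=0)
    return models

models = [rng.standard_normal(10) for _ in range(3)]  # M = 3 unrelated models
for _ in range(5):
    models = multi_model_round(models, iid_assignment(3, num_clients=20))
```

Swapping `iid_assignment` for `greedy_loss_assignment(models, 20)` in the loop switches to the loss-driven variant; this greedy rule is just one plausible way to use per-pair local losses, and the paper's actual policies may differ.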
