Paper Title

FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing

Paper Authors

Kundu, Amit Kumar; Jaja, Joseph

Paper Abstract

Federated learning (FL) is a recently developed area of machine learning in which the private data of a large number of distributed clients is used to develop a global model under the coordination of a central server, without explicitly exposing the data. The standard FL strategy has a number of significant bottlenecks, including large communication requirements and a high impact on clients' resources. Several strategies have been described in the literature that try to address these issues. In this paper, a novel scheme based on the notion of "model growing" is proposed. Initially, the server deploys a small model of low complexity, which is trained to capture the data complexity during the initial set of rounds. When the performance of this model saturates, the server switches to a larger model with the help of function-preserving transformations. The model complexity increases as more data is processed by the clients, and the overall process continues until the desired performance is achieved. Therefore, the most complex model is broadcast only at the final stage of our approach, resulting in a substantial reduction in communication cost and client computational requirements. The proposed approach is tested extensively on three standard benchmarks and is shown to substantially reduce communication and client computation while achieving accuracy comparable to the current most effective strategies.
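The key step in this scheme is growing the model without discarding what the smaller model has already learned. The abstract does not spell out the exact transformations used, so the following is only a minimal sketch of one plausible instance: a Net2WiderNet-style, function-preserving widening of a single dense hidden layer in NumPy. The function name net2wider and all shapes here are illustrative assumptions, not taken from the paper. New hidden units replicate existing ones, and the outgoing weights are split among the replicas, so the widened model initially computes exactly the same function as the smaller one.

```python
import numpy as np

def net2wider(W1, b1, W2, new_width, rng=None):
    """Net2WiderNet-style function-preserving widening of one hidden layer.

    W1: (in_dim, h)   weights into the hidden layer
    b1: (h,)          hidden-layer bias
    W2: (h, out_dim)  weights out of the hidden layer
    new_width:        target hidden width, must exceed h

    Returns (W1', b1', W2') for the wider layer, which computes the same
    function for any elementwise activation such as ReLU.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    h = W1.shape[1]
    assert new_width > h, "new_width must exceed the current width"

    # g maps each new hidden unit to an existing one: identity for the
    # first h units, randomly chosen replicas for the extra units.
    g = np.concatenate([np.arange(h), rng.integers(0, h, new_width - h)])
    counts = np.bincount(g, minlength=h)       # replication count of each old unit

    W1_new = W1[:, g]                          # copy incoming weights of replicas
    b1_new = b1[g]
    W2_new = W2[g, :] / counts[g][:, None]     # split outgoing weights among replicas
    return W1_new, b1_new, W2_new


# Quick check that the widened model matches the original on random inputs.
rng = np.random.default_rng(42)
W1, b1, W2 = rng.normal(size=(8, 16)), rng.normal(size=16), rng.normal(size=(16, 4))
x = rng.normal(size=(5, 8))
y_small = np.maximum(x @ W1 + b1, 0.0) @ W2

W1w, b1w, W2w = net2wider(W1, b1, W2, new_width=24, rng=rng)
y_big = np.maximum(x @ W1w + b1w, 0.0) @ W2w
print(np.allclose(y_small, y_big))             # True: the function is preserved
```

In the federated setting described above, the server would apply a transformation of this kind once the current global model's accuracy saturates, and then broadcast the widened weights to the clients as the new, larger global model for subsequent rounds.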
