Paper Title


Tighter Regret Analysis and Optimization of Online Federated Learning

Authors

Dohyeok Kwon, Jonghwan Park, Songnam Hong

Abstract


In federated learning (FL), it is commonly assumed that all data are available at the clients at the beginning of machine learning (ML) optimization (i.e., offline learning). In many real-world applications, however, learning is expected to proceed in an online fashion. To this end, online FL (OFL) has been introduced, which aims at learning a sequence of global models from decentralized streaming data such that the so-called cumulative regret is minimized. In this framework, FedOGD, which combines online gradient descent and model averaging, is constructed as the counterpart of FedSGD in FL. While it enjoys an optimal sublinear regret, FedOGD suffers from heavy communication costs. In this paper, we present a communication-efficient method (named OFedIQ) by means of intermittent transmission (enabled by client subsampling and periodic transmission) and quantization. For the first time, we derive a regret bound that captures the impact of data heterogeneity and of the communication-efficient techniques. Through this bound, we efficiently optimize the parameters of OFedIQ, such as the sampling rate, transmission period, and quantization levels. Moreover, it is proved that the optimized OFedIQ can asymptotically achieve the performance of FedOGD while reducing communication costs by 99%. Via experiments with real datasets, we demonstrate the effectiveness of the optimized OFedIQ.
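The abstract describes OFedIQ only at a high level. The following is a minimal, illustrative sketch of an OFedIQ-style training loop, not the authors' exact algorithm: it assumes a squared-loss linear model as each client's online learner and a simple unbiased stochastic quantizer, and the parameter names (p for the client sampling rate, period for the transmission period, levels for the number of quantization levels) and all numeric values are placeholders chosen for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup: K clients, streaming linear-regression data ---
K, T, dim = 20, 200, 10          # clients, online rounds, model dimension
p, period, levels = 0.3, 5, 8    # sampling rate, transmission period, quantization levels
lr = 0.05                        # online gradient-descent step size

def stochastic_quantize(v, levels):
    """Unbiased stochastic quantization of each coordinate onto a uniform grid."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * levels
    lower = np.floor(scaled)
    q = lower + (rng.random(v.shape) < (scaled - lower))  # round up with prob. = fractional part
    return np.sign(v) * q * norm / levels

global_model = np.zeros(dim)
local_models = np.tile(global_model, (K, 1))
accumulated = np.zeros((K, dim))   # local updates accumulated between transmissions

for t in range(1, T + 1):
    for k in range(K):
        # each client receives one streaming sample (x, y) and takes an OGD step
        x = rng.normal(size=dim)
        y = x @ np.ones(dim) + 0.1 * rng.normal()     # synthetic target: all-ones weights
        grad = (local_models[k] @ x - y) * x           # squared-loss gradient
        local_models[k] -= lr * grad
        accumulated[k] += -lr * grad

    if t % period == 0:                                # periodic transmission
        sampled = rng.random(K) < p                    # client subsampling
        if sampled.any():
            # sampled clients upload quantized accumulated updates; server averages them
            updates = np.array([stochastic_quantize(accumulated[k], levels)
                                for k in range(K) if sampled[k]])
            global_model += updates.mean(axis=0)
        # server broadcasts the global model; all clients restart from it
        local_models[:] = global_model
        accumulated[:] = 0.0

print("final global model (approaches the all-ones target):", np.round(global_model, 2))
```

Under these (hypothetical) settings, each client uploads a quantized update only when sampled and only every `period` rounds, so the expected upload frequency is roughly p/period times that of FedOGD, which would transmit full-precision updates from every client in every round.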
