Paper Title
Addressing Heterogeneity in Federated Learning via Distributional Transformation
Paper Authors
Paper Abstract
Federated learning (FL) allows multiple clients to collaboratively train a deep learning model. One major challenge in FL is heterogeneous data distributions, i.e., the data distribution differs from one client to another. Existing personalized FL algorithms are only applicable to narrow cases, e.g., one or two data classes per client, and therefore do not satisfactorily address FL under varying levels of data heterogeneity. In this paper, we propose a novel framework, called DisTrans, that improves FL performance (i.e., model accuracy) via train- and test-time distributional transformations along with a double-input-channel model structure. DisTrans works by optimizing a distributional offset and a model for each FL client to shift that client's data distribution, and it aggregates these offsets at the FL server to further improve performance in the presence of distributional heterogeneity. Our evaluation on multiple benchmark datasets shows that DisTrans outperforms state-of-the-art FL methods and data augmentation methods under various settings and different degrees of client distributional heterogeneity.
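
To make the workflow in the abstract concrete, below is a minimal, illustrative sketch of how per-client offset optimization, a double-input-channel model, and server-side offset aggregation could fit together. The model architecture, the function names (DoubleChannelNet, local_step, aggregate_offsets), and the exact way the offset feeds the two channels are assumptions made for illustration, not the authors' released implementation.

```python
# A hedged sketch of the DisTrans-style workflow described in the abstract.
# Assumptions (not from the paper): the double-input-channel model fuses an
# offset-shifted view (x + offset) with the original input x, and the server
# averages the clients' offsets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleChannelNet(nn.Module):
    """Toy double-input-channel model: one branch per input channel,
    features concatenated before the classification head."""
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.branch_shifted = nn.Linear(in_dim, 64)
        self.branch_orig = nn.Linear(in_dim, 64)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x_shifted, x_orig):
        h = torch.cat([F.relu(self.branch_shifted(x_shifted)),
                       F.relu(self.branch_orig(x_orig))], dim=-1)
        return self.head(h)

def local_step(model, offset, x, y, lr=1e-2):
    """One client update: jointly optimize the model weights and the
    client's distributional offset (train-time transformation)."""
    opt = torch.optim.SGD(list(model.parameters()) + [offset], lr=lr)
    loss = F.cross_entropy(model(x + offset, x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def aggregate_offsets(offsets):
    """Server side: combine per-client offsets (simple mean as a stand-in
    for whatever aggregation rule the paper actually uses)."""
    return torch.stack(offsets).mean(dim=0)

# Toy usage: two clients whose data distributions differ by a mean shift.
torch.manual_seed(0)
model = DoubleChannelNet()
offsets = [torch.zeros(32, requires_grad=True) for _ in range(2)]
for client_id, offset in enumerate(offsets):
    x = torch.randn(8, 32) + client_id   # heterogeneous client data
    y = torch.randint(0, 10, (8,))
    local_step(model, offset, x, y)
server_offset = aggregate_offsets([o.detach() for o in offsets])
# At test time, the offset would likewise be applied to shift test inputs,
# matching the train- and test-time transformations the abstract describes.
```

In a full FL round, each client would run several such local steps and the server would aggregate both the model weights (e.g., FedAvg-style) and the offsets; the sketch compresses this to a single step per client purely for readability.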