Paper Title

Edge Bias in Federated Learning and its Solution by Buffered Knowledge Distillation

Paper Authors

Sangho Lee, Kiyoon Yoo, Nojun Kwak

Paper Abstract

Federated learning (FL), which uses communication between the server (core) and local devices (edges) to indirectly learn from more data, is an emerging field in deep learning research. Recently, knowledge distillation-based FL methods with notable performance and high applicability have been proposed. In this paper, we choose a knowledge distillation-based FL method as our baseline and tackle a challenging problem that ensues from using such methods. In particular, we focus on the problem incurred in the server model, which tries to mimic different datasets, each unique to an individual edge device. We dub this problem 'edge bias'; it occurs when multiple teacher models trained on different datasets are used individually to distill knowledge. We introduce this nuisance, which arises in certain FL scenarios, and to alleviate it we propose a simple yet effective distillation scheme named 'buffered distillation'. In addition, we experimentally show that this scheme is also effective in mitigating the straggler problem caused by delayed edges.
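
The abstract describes a server model that distills knowledge from multiple teacher models, each trained on a different edge's data. The sketch below is one possible illustration (not the authors' implementation) contrasting per-teacher sequential distillation, where the server can drift toward the edge it saw most recently (one reading of 'edge bias'), with a variant that buffers all available teachers' soft targets and distills from them jointly. The model architecture, the `kd_loss` helper, the temperature, the shared unlabeled data, and averaging as the buffering step are all illustrative assumptions rather than the paper's actual 'buffered distillation' procedure.

```python
# Minimal sketch of distilling a server model from edge-trained teachers.
# Everything here (models, loss, buffering-by-averaging) is an assumption
# for illustration, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, T=2.0):
    # Standard soft-target distillation loss: KL divergence at temperature T.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


def sequential_distillation(server, teachers, public_x, steps=100, lr=1e-3):
    # Distill from each edge teacher one after another; the server can be
    # pulled toward whichever teacher it saw last (illustrating the kind of
    # per-edge bias the abstract calls 'edge bias').
    opt = torch.optim.SGD(server.parameters(), lr=lr)
    for teacher in teachers:
        teacher.eval()
        with torch.no_grad():
            t_logits = teacher(public_x)
        for _ in range(steps):
            opt.zero_grad()
            kd_loss(server(public_x), t_logits).backward()
            opt.step()


def buffered_distillation(server, teachers, public_x, steps=100, lr=1e-3):
    # Buffer the soft targets of all currently available teachers and distill
    # from their average at every update, so no single edge dominates.
    # (A plausible reading of 'buffered distillation'; details are assumed.)
    opt = torch.optim.SGD(server.parameters(), lr=lr)
    with torch.no_grad():
        buffer = torch.stack([t(public_x) for t in teachers])  # [edges, N, C]
    avg_logits = buffer.mean(dim=0)
    for _ in range(steps):
        opt.zero_grad()
        kd_loss(server(public_x), avg_logits).backward()
        opt.step()


if __name__ == "__main__":
    torch.manual_seed(0)
    make_model = lambda: nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    teachers = [make_model() for _ in range(3)]  # stand-ins for edge models
    public_x = torch.randn(64, 16)               # shared unlabeled data on the server
    buffered_distillation(make_model(), teachers, public_x)
```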
