Paper Title


Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning

Authors

Ashish Gupta, Tie Luo, Mao V. Ngo, Sajal K. Das

Abstract


Federated learning offers a framework for training a machine learning model in a distributed fashion while preserving the privacy of the participants. Since the server cannot govern the clients' actions, nefarious clients may attack the global model by sending malicious local gradients. Meanwhile, there may also be unreliable clients that are benign but each holds a portion of low-quality training data (e.g., blurred or low-resolution images), and thus may appear similar to malicious clients. Therefore, a defense mechanism needs to perform a three-fold differentiation, which is much more challenging than the conventional (two-fold) case. This paper introduces MUD-HoG, a novel defense algorithm that addresses this challenge in federated learning using a long-short history of gradients, and treats the detected malicious and unreliable clients differently. Moreover, MUD-HoG can distinguish between targeted and untargeted attacks among malicious clients, unlike most prior works which consider only one type of attack. Specifically, we take into account sign-flipping, additive-noise, label-flipping, and multi-label-flipping attacks, under a non-IID setting. We evaluate MUD-HoG against six state-of-the-art methods on two datasets. The results show that MUD-HoG outperforms all of them in terms of accuracy, precision, and recall, in the presence of a mixture of multiple (four) types of attackers as well as unreliable clients. Moreover, unlike most prior works, which can only tolerate a low population of harmful users, MUD-HoG can work with and successfully detect a wide range of malicious and unreliable clients: up to 47.5% and 10%, respectively, of the total population. Our code is open-sourced at https://github.com/LabSAINT/MUD-HoG_Federated_Learning.
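To make the "long-short history of gradients" idea concrete, below is a minimal, self-contained sketch of how a server might maintain per-client short-term and cumulative (long-term) gradient histories and flag clients whose long-term direction opposes a robust (coordinate-wise median) reference. This is an illustrative assumption about the general technique, not the actual MUD-HoG implementation; the class and function names (`GradientHistory`, `flag_sign_flippers`), the window size, and the median-based reference are all hypothetical choices made for this example (see the authors' repository for the real algorithm).

```python
import math


def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


class GradientHistory:
    """Tracks a short-term window and a long-term cumulative sum of one
    client's gradient updates (hypothetical stand-in for a 'HoG')."""

    def __init__(self, dim, window=3):
        self.window = window
        self.recent = []               # last `window` gradients (short history)
        self.total = [0.0] * dim       # cumulative sum (long history)

    def update(self, grad):
        self.recent.append(grad)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        self.total = [t + g for t, g in zip(self.total, grad)]

    def short_hog(self):
        """Mean of the most recent gradients."""
        d = len(self.total)
        n = len(self.recent)
        return [sum(g[i] for g in self.recent) / n for i in range(d)]


def flag_sign_flippers(histories, threshold=0.0):
    """Flag clients whose long-term history points against the
    coordinate-wise median long-term history of the population."""
    dims = len(next(iter(histories.values())).total)
    median = []
    for i in range(dims):
        vals = sorted(h.total[i] for h in histories.values())
        n = len(vals)
        mid = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
        median.append(mid)
    return {cid for cid, h in histories.items()
            if cosine(h.total, median) < threshold}


if __name__ == "__main__":
    # Three honest clients push gradients near [1, 1]; one sign-flipper
    # pushes the negated direction for three rounds.
    histories = {cid: GradientHistory(dim=2) for cid in ("a", "b", "c", "m")}
    for _ in range(3):
        for cid in ("a", "b", "c"):
            histories[cid].update([1.0, 1.0])
        histories["m"].update([-1.0, -1.0])
    print(flag_sign_flippers(histories))  # the sign-flipper "m" is flagged
```

The long-term history smooths out round-to-round noise, so a client that consistently opposes the consensus direction (a sign-flipping attacker) separates cleanly, while the short-term window could be used to catch clients whose behavior changes abruptly.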
