Paper Title
FLCert: Provably Secure Federated Learning against Poisoning Attacks
Paper Authors
Paper Abstract
Due to its distributed nature, federated learning is vulnerable to poisoning attacks, in which malicious clients poison the training process by manipulating their local training data and/or the local model updates sent to the cloud server, such that the poisoned global model misclassifies many indiscriminate test inputs or attacker-chosen ones. Existing defenses mainly leverage Byzantine-robust federated learning methods or detect malicious clients. However, these defenses do not have provable security guarantees against poisoning attacks and may be vulnerable to more advanced attacks. In this work, we aim to bridge the gap by proposing FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks with a bounded number of malicious clients. Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models to classify a test input. Specifically, we consider two methods to group the clients and propose two variants of FLCert correspondingly, i.e., FLCert-P, which randomly samples the clients in each group, and FLCert-D, which deterministically divides the clients into disjoint groups. Our extensive experiments on multiple datasets show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients, no matter what poisoning attacks they use.
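The group-and-vote idea described in the abstract can be illustrated with a small sketch. The Python snippet below is a minimal, hypothetical illustration of the FLCert-D variant, not the authors' implementation: clients are split into disjoint groups, one global model is trained per group with any federated learning method, and a test input is classified by majority vote over the per-group models. The helpers `train_global_model` and `Model.predict` are assumed placeholders.

```python
# Minimal sketch of the FLCert-D idea (group clients, train one global
# model per group, classify by majority vote). Placeholders are assumed.
import random
from collections import Counter

def split_into_disjoint_groups(client_ids, num_groups, seed=0):
    """Deterministically assign clients to disjoint groups (FLCert-D)."""
    rng = random.Random(seed)
    shuffled = list(client_ids)
    rng.shuffle(shuffled)
    return [shuffled[i::num_groups] for i in range(num_groups)]

def flcert_predict(models, x):
    """Majority vote over the per-group global models for a test input x."""
    votes = Counter(model.predict(x) for model in models)
    return votes.most_common(1)[0][0]

# Usage sketch, assuming a hypothetical train_global_model(clients) helper:
# groups = split_into_disjoint_groups(range(100), num_groups=10)
# models = [train_global_model(g) for g in groups]
# label = flcert_predict(models, test_input)
```

Because each malicious client can corrupt at most the groups it belongs to, a bounded number of malicious clients can flip only a bounded number of votes, which is what underlies the provable guarantee claimed in the abstract.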