Paper Title
Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy
Paper Authors
Paper Abstract
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model while keeping client training data private, even from an untrusted server. Prior works do not provide efficient solutions that protect against collusion attacks in which parties collaborate to expose an honest client's model parameters. We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion, including the "Sybil" attack in which a server preferentially selects compromised devices or simulates fake devices. We leverage the novel privacy mechanism to construct a secure federated learning protocol and prove the security of that protocol. We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets within a realistic simulation of 5,000 distributed network clients.
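The core idea behind distributed differential privacy in this setting can be illustrated with a minimal sketch: rather than trusting the server to add privacy noise, each client adds a small share of noise to its own model update, calibrated so that only the *aggregate* of all contributions carries the full noise required for the privacy guarantee. The sketch below is illustrative only and is not the paper's protocol (it omits secure aggregation, discretization, and collusion-resistance); the names `Client`, `aggregate`, `SIGMA`, and `DIM` are hypothetical.

```python
# Illustrative sketch (NOT the paper's protocol): distributed differential
# privacy for federated averaging. Each client perturbs its update with
# Gaussian noise of variance sigma^2 / n, so the sum over n clients carries
# noise of variance sigma^2, as required by the aggregate-level guarantee.
import random

SIGMA = 1.0  # target std-dev of the noise on the summed update (assumed)
DIM = 4      # toy model dimension (assumed)

class Client:
    def __init__(self, update, n_clients):
        self.update = update
        # each client contributes only a 1/n share of the noise variance
        self.noise_std = SIGMA / n_clients ** 0.5

    def noisy_update(self):
        # the server never sees the raw update, only this noised version
        return [u + random.gauss(0.0, self.noise_std) for u in self.update]

def aggregate(clients):
    # untrusted server sums the noised contributions and averages them
    total = [0.0] * DIM
    for c in clients:
        for i, v in enumerate(c.noisy_update()):
            total[i] += v
    n = len(clients)
    return [t / n for t in total]

clients = [Client([1.0] * DIM, 3) for _ in range(3)]
avg = aggregate(clients)
print(avg)  # close to [1.0, 1.0, 1.0, 1.0], perturbed by the DP noise
```

Note that in this honest-but-curious sketch a colluding server could subtract out the noise of corrupted clients; preventing exactly that failure mode (including Sybil attacks) is what the paper's oblivious distributed mechanism addresses.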