Paper Title

On the (In)security of Peer-to-Peer Decentralized Machine Learning

Authors

Dario Pasquini, Mathilde Raynal, Carmela Troncoso

Abstract

In this work, we carry out the first, in-depth, privacy analysis of Decentralized Learning -- a collaborative machine learning framework aimed at addressing the main limitations of federated learning. We introduce a suite of novel attacks for both passive and active decentralized adversaries. We demonstrate that, contrary to what is claimed by decentralized learning proposers, decentralized learning does not offer any security advantage over federated learning. Rather, it increases the attack surface enabling any user in the system to perform privacy attacks such as gradient inversion, and even gain full control over honest users' local model. We also show that, given the state of the art in protections, privacy-preserving configurations of decentralized learning require fully connected networks, losing any practical advantage over the federated setup and therefore completely defeating the objective of the decentralized approach.
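To make the abstract's mention of gradient inversion concrete: in collaborative learning, peers exchange gradient updates, and an observer can sometimes reconstruct the private training input from a shared gradient alone. The following is a minimal illustrative sketch (not the paper's actual attack) for a single linear neuron with squared loss, where recovery is exact in closed form; the weights, input, and label are all made-up values.

```python
import numpy as np

# A minimal sketch of gradient inversion against a single linear neuron with
# squared loss L = 0.5*(w.x - y)^2; all values here are illustrative, not
# taken from the paper.
w = np.array([1.0, 0.4, -0.5, 0.2, 0.9])        # shared model weights (known to all peers)
x_true = np.array([0.5, -1.2, 0.3, 2.0, -0.7])  # victim's private training input
y = 1.0                                          # label (assumed known to the attacker)

# The victim shares its gradient update: dL/dw = (w.x - y) * x.
g = (w @ x_true - y) * x_true

# Attack: let s = w.x. Taking the dot product of g with w gives
# w.g = s^2 - y*s, a quadratic in s; each root yields a candidate input
# x = g / (s - y).
disc = np.sqrt(y**2 + 4 * (w @ g))
candidates = [(y + disc) / 2, (y - disc) / 2]
recovered = [g / (s - y) for s in candidates if abs(s - y) > 1e-9]

for x_rec in recovered:
    # Every candidate reproduces the observed gradient exactly, illustrating
    # that recovery from a single neuron's gradient can be ambiguous.
    assert np.allclose((w @ x_rec - y) * x_rec, g)

# One of the candidates is exactly the victim's private input.
print(any(np.allclose(x_rec, x_true) for x_rec in recovered))  # True
```

For real multi-layer models there is no closed form, so gradient inversion is typically run as an optimization that searches for an input whose gradient matches the observed one; the paper's point is that in decentralized learning any peer, not just a central server, is positioned to observe the updates needed for such attacks.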
