Paper Title

Unleashing the Tiger: Inference Attacks on Split Learning

Authors

Dario Pasquini, Giuseppe Ateniese, Massimo Bernaschi

Abstract


We investigate the security of Split Learning -- a novel collaborative machine learning framework that enables peak performance while requiring minimal resource consumption. In the present paper, we expose vulnerabilities of the protocol and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. More prominently, we show that a malicious server can actively hijack the learning process of the distributed model and bring it into an insecure state that enables inference attacks on clients' data. We implement different adaptations of the attack and test them on various datasets as well as within realistic threat scenarios. We demonstrate that our attack is able to overcome recently proposed defensive techniques aimed at enhancing the security of the split learning protocol. Finally, we also illustrate the protocol's insecurity against malicious clients by extending previously devised attacks for Federated Learning. To make our results reproducible, we made our code available at https://github.com/pasquini-dario/SplitNN_FSHA.
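For context, the sketch below illustrates the basic split learning protocol that the attack targets: the client keeps the early layers of the model and only ever sends the intermediate activations ("smashed data") to the server, which holds the remaining layers. This is a minimal, simplified sketch in PyTorch under placeholder assumptions (model names such as `client_net`/`server_net`, the MNIST-like input shape, and the assumption that labels are shared with the server); it is not the authors' implementation, which is available at the repository linked in the abstract.

```python
# Minimal sketch of one split-learning training step (illustrative, not the paper's code).
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())  # held by the client
server_net = nn.Sequential(nn.Linear(256, 10))                                 # held by the server

opt_client = torch.optim.Adam(client_net.parameters(), lr=1e-3)
opt_server = torch.optim.Adam(server_net.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def split_training_step(x, y):
    """One training step: only the intermediate activation ("smashed data")
    crosses the client/server boundary, never the raw input x."""
    opt_client.zero_grad()
    opt_server.zero_grad()
    smashed = client_net(x)        # computed on the client side
    logits = server_net(smashed)   # computed on the server side
    loss = criterion(logits, y)    # label y assumed available to the server in this simplified variant
    loss.backward()                # gradients flow back through the cut layer to the client
    opt_server.step()
    opt_client.step()
    return loss.item()
```

The attack described in the abstract exploits the fact that the server controls the loss signal sent back through the cut layer: a malicious server can substitute its own objective for the honest one above, steering the client's network into a state from which the private inputs behind the smashed data can be inferred.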
