Paper Title

Private Federated Submodel Learning with Sparsification

Authors

Vithana, Sajani, Ulukus, Sennur

Abstract

We investigate the problem of private read update write (PRUW) in federated submodel learning (FSL) with sparsification. In FSL, a machine learning model is divided into multiple submodels, where each user updates only the submodel that is relevant to the user's local data. PRUW is the process of privately performing FSL by reading from and writing to the required submodel without revealing the submodel index or the values of updates to the databases. Sparsification is a widely used concept in learning, where the users update only a small fraction of parameters to reduce the communication cost. Revealing the coordinates of these selected (sparse) updates leaks the user's privacy. We show how PRUW in FSL can be performed with sparsification. We propose a novel scheme that privately reads from and writes to arbitrary parameters of any given submodel without revealing the submodel index, the values of updates, or the coordinates of the sparse updates to the databases. The proposed scheme achieves significantly lower reading and writing costs compared to what is achieved without sparsification.
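To make concrete the sparsification step the abstract refers to, the sketch below shows top-k (fraction-based) sparsification of a user's update vector. The function name, the fraction parameter `r`, and the top-k selection rule are illustrative assumptions for exposition, not the paper's scheme; the point is that the resulting coordinate list is exactly the information that PRUW with sparsification must keep hidden from the databases.

```python
def sparsify_topk(update, r):
    """Keep only the r-fraction of coordinates with largest magnitude.

    Returns (coords, values): the selected coordinates and their update
    values -- both of which the proposed scheme hides from the databases.
    (Illustrative helper; not part of the paper's protocol.)
    """
    k = max(1, int(len(update) * r))
    # Indices of the k largest-magnitude entries, reported in ascending order.
    coords = sorted(range(len(update)),
                    key=lambda i: abs(update[i]),
                    reverse=True)[:k]
    coords.sort()
    return coords, [update[i] for i in coords]

# A user with a 4-parameter update sends only half of the coordinates:
coords, values = sparsify_topk([0.1, -2.0, 0.05, 0.7], 0.5)
# coords -> [1, 3], values -> [-2.0, 0.7]
```

Revealing `coords` in the clear would tell the databases which parameters the user's local data affected most, which is the leakage the proposed scheme avoids.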
