Paper Title
Graph-Homomorphic Perturbations for Private Decentralized Learning
Paper Authors
Abstract
Decentralized algorithms for stochastic optimization and learning rely on the diffusion of information as a result of repeated local exchanges of intermediate estimates. Such structures are particularly appealing in situations where agents may be hesitant to share raw data due to privacy concerns. Nevertheless, in the absence of additional privacy-preserving mechanisms, the exchange of local estimates, which are generated based on private data, can allow for the inference of the data itself. The most common mechanism for guaranteeing privacy is the addition of perturbations to local estimates before broadcasting. These perturbations are generally chosen independently at every agent, resulting in a significant performance loss. We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible (to first order in the step-size) to the network centroid, while preserving privacy guarantees. The analysis allows for general nonconvex loss functions and is hence applicable to a large number of machine learning and signal processing problems, including deep learning.
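To make the centroid-invisibility idea concrete, the following minimal Python sketch (an illustration of the abstract's description, not the authors' exact construction) draws Gaussian perturbations at each agent and projects them onto the nullspace of the averaging weights pi (e.g., the Perron vector of the combination matrix), so that the pi-weighted network centroid is unchanged while each individual broadcast estimate is still masked. The function name, the Gaussian noise model, and the specific projection used are assumptions made for illustration only.

# Illustrative sketch only: perturbations whose pi-weighted sum is zero,
# so they are invisible to the network centroid sum_k pi[k] * w_k.
import numpy as np

def centroid_invisible_perturbations(num_agents, dim, pi, sigma=1.0, rng=None):
    """Return perturbations n_k (one row per agent) with sum_k pi[k] * n_k = 0.

    pi : nonnegative averaging weights, e.g. the Perron vector of the
         combination matrix defining the network centroid.
    """
    rng = np.random.default_rng(rng)
    noise = sigma * rng.standard_normal((num_agents, dim))
    # Subtract the pi-weighted mean from every agent's perturbation: this is a
    # projection onto the nullspace of pi^T, removing the component that the
    # centroid would otherwise see.
    weighted_mean = (pi[:, None] * noise).sum(axis=0) / pi.sum()
    return noise - weighted_mean[None, :]

# Example: 5 agents, 3-dimensional estimates, uniform averaging weights.
pi = np.ones(5) / 5
perturbations = centroid_invisible_perturbations(5, 3, pi, sigma=0.5, rng=0)
assert np.allclose((pi[:, None] * perturbations).sum(axis=0), 0.0)  # centroid unaffected

In this sketch each agent's broadcast is still randomized, while the weighted combination across the network cancels exactly; the paper's statement concerns cancellation at the centroid to first order in the step-size under its own perturbation construction.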