Paper Title
Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging
Paper Authors
Paper Abstract
Decentralized optimization is increasingly popular in machine learning for its scalability and efficiency. Intuitively, it should also provide better privacy guarantees, as nodes only observe the messages sent by their neighbors in the network graph. But formalizing and quantifying this gain is challenging: existing results are typically limited to Local Differential Privacy (LDP) guarantees that overlook the advantages of decentralization. In this work, we introduce pairwise network differential privacy, a relaxation of LDP that captures the fact that the privacy leakage from a node $u$ to a node $v$ may depend on their relative position in the graph. We then analyze the combination of local noise injection with (simple or randomized) gossip averaging protocols on fixed and random communication graphs. We also derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging. Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph, matching the privacy-utility trade-off of the trusted curator, up to factors that explicitly depend on the graph topology. Finally, we illustrate our privacy gains with experiments on synthetic and real-world datasets.
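The core mechanism the abstract describes, local noise injection followed by gossip averaging on a communication graph, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the graph (a ring), the Metropolis-Hastings gossip weights, the noise scale, and the number of rounds are all illustrative assumptions.

```python
import numpy as np

def gossip_matrix(adj):
    """Build a symmetric, doubly stochastic gossip matrix W from an
    adjacency matrix using Metropolis-Hastings weights (an assumption;
    any doubly stochastic W supported on the graph would do)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for u in range(n):
        for v in range(n):
            if adj[u, v]:
                W[u, v] = 1.0 / (1 + max(deg[u], deg[v]))
        W[u, u] = 1.0 - W[u].sum()
    return W

def noisy_gossip(values, W, sigma, steps, rng):
    """Each node perturbs its private value with Gaussian noise
    (local noise injection), then the network runs `steps` rounds
    of synchronous gossip: x <- W x. Since W is doubly stochastic,
    all nodes converge to the mean of the noised values."""
    x = values + rng.normal(0.0, sigma, size=values.shape)
    for _ in range(steps):
        x = W @ x
    return x

rng = np.random.default_rng(0)
n = 8
# Fixed communication graph: a ring over n nodes.
adj = np.zeros((n, n), dtype=int)
for u in range(n):
    adj[u, (u + 1) % n] = adj[(u + 1) % n, u] = 1

W = gossip_matrix(adj)
values = rng.normal(size=n)          # each node's private scalar
est = noisy_gossip(values, W, sigma=0.1, steps=50, rng=rng)
# After enough rounds, every node holds (approximately) the noisy mean.
```

In the pairwise view the abstract introduces, a node `v` only ever sees mixtures of already-averaged messages from its neighbors, so what it learns about a distant node `u`'s value decays with the graph distance between them; the sketch above only demonstrates the averaging dynamics, not the privacy accounting.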