Paper Title

Contraction of $E_γ$-Divergence and Its Applications to Privacy

Paper Authors

Shahab Asoodeh, Mario Diaz, Flavio P. Calmon

Paper Abstract

We investigate the contraction coefficients derived from strong data processing inequalities for the $E_γ$-divergence. By generalizing the celebrated Dobrushin's coefficient from total variation distance to $E_γ$-divergence, we derive a closed-form expression for the contraction of $E_γ$-divergence. This result has fundamental consequences in two privacy settings. First, it implies that local differential privacy can be equivalently expressed in terms of the contraction of $E_γ$-divergence. This equivalent formula can be used to precisely quantify the impact of local privacy in (Bayesian and minimax) estimation and hypothesis testing problems in terms of the reduction of effective sample size. Second, it leads to a new information-theoretic technique for analyzing privacy guarantees of online algorithms. In this technique, we view such algorithms as a composition of amplitude-constrained Gaussian channels and then relate their contraction coefficients under $E_γ$-divergence to the overall differential privacy guarantees. As an example, we apply our technique to derive the differential privacy parameters of gradient descent. Moreover, we also show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
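
For readers skimming the abstract, the objects it refers to can be sketched as follows. This is a hedged summary assuming the standard definitions of the $E_γ$-divergence (hockey-stick divergence) and its contraction coefficient; the paper's precise statements may differ in details.

```latex
% E_gamma-divergence (hockey-stick divergence) between distributions P and Q,
% for gamma >= 1; at gamma = 1 it reduces to total variation distance:
\[
  E_\gamma(P \,\|\, Q) \;=\; \sup_{A} \bigl[\, P(A) - \gamma\, Q(A) \,\bigr],
  \qquad E_1(P \,\|\, Q) = \mathrm{TV}(P, Q).
\]
% Contraction coefficient of a channel K under E_gamma:
\[
  \eta_\gamma(K) \;=\; \sup_{P \neq Q}
    \frac{E_\gamma(P K \,\|\, Q K)}{E_\gamma(P \,\|\, Q)} .
\]
% Dobrushin's classical result is the gamma = 1 case,
%   eta_1(K) = sup_{x, x'} TV( K(. | x), K(. | x') );
% the abstract's closed form generalizes this picture to gamma >= 1.
```

The link to differential privacy mentioned in the abstract rests on a standard reformulation: a randomized mechanism satisfies $(ε, δ)$-indistinguishability on a pair of inputs exactly when the $E_{e^ε}$-divergence between the two output distributions is at most $δ$ (in both directions). Below is a minimal numerical sketch of this identity, using a hypothetical `e_gamma` helper and $ε$-randomized response as the mechanism; it is an illustration, not the paper's construction.

```python
import numpy as np

def e_gamma(p, q, gamma):
    """E_gamma (hockey-stick) divergence between discrete distributions:
    E_gamma(P || Q) = sum_x max(P(x) - gamma * Q(x), 0)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.maximum(p - gamma * q, 0.0).sum())

# eps-randomized response on one bit: report the true bit w.p. e^eps / (1 + e^eps).
eps = 1.0
a = np.exp(eps) / (1.0 + np.exp(eps))
P = np.array([a, 1.0 - a])   # output distribution given input bit 0
Q = np.array([1.0 - a, a])   # output distribution given input bit 1

# (eps, delta)-indistinguishability <=> E_{e^eps}(P || Q) <= delta;
# pure eps-DP randomized response gives delta = 0 at gamma = e^eps.
print(e_gamma(P, Q, np.exp(eps)))   # 0.0
print(e_gamma(P, Q, 1.0))           # gamma = 1: total variation = (e^eps - 1) / (e^eps + 1)
```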
