Paper Title

Variational Bayesian Unlearning

Paper Authors

Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

Paper Abstract

This paper studies the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased. We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of model parameters after directly unlearning from erased data vs. the exact posterior belief from retraining with remaining data. Using the variational inference (VI) framework, we show that it is equivalent to minimizing an evidence upper bound which trades off between fully unlearning from erased data vs. not entirely forgetting the posterior belief given the full data (i.e., including the remaining data); the latter prevents catastrophic unlearning that can render the model useless. In model training with VI, only an approximate (instead of exact) posterior belief given the full data can be obtained, which makes unlearning even more challenging. We propose two novel tricks to tackle this challenge. We empirically demonstrate our unlearning methods on Bayesian models such as sparse Gaussian process and logistic regression using synthetic and real-world datasets.
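The trade-off described in the abstract follows from Bayes' rule alone. Below is a minimal derivation, assuming the erased data D_e and the remaining data D_r are conditionally independent given the model parameters θ, and writing q_u(θ) for the approximate posterior belief after unlearning; the notation here is ours, not necessarily the paper's.

```latex
% Bayes' rule with conditional independence of D_e and D_r given theta:
%   p(\theta \mid \mathcal{D}_r) = \frac{p(\theta \mid \mathcal{D})\, p(\mathcal{D}_e \mid \mathcal{D}_r)}{p(\mathcal{D}_e \mid \theta)} .
% Substituting into the KL objective for unlearning:
\mathrm{KL}\!\left[q_u(\theta) \,\|\, p(\theta \mid \mathcal{D}_r)\right]
  = \underbrace{\mathbb{E}_{q_u}\!\left[\log p(\mathcal{D}_e \mid \theta)\right]}_{\text{fully unlearn } \mathcal{D}_e}
  + \underbrace{\mathrm{KL}\!\left[q_u(\theta) \,\|\, p(\theta \mid \mathcal{D})\right]}_{\text{do not forget the full-data belief}}
  - \log p(\mathcal{D}_e \mid \mathcal{D}_r).
```

The last term does not depend on q_u, so minimizing the KL to the retrained posterior is equivalent to minimizing the sum of the first two terms; since the KL divergence is non-negative, that sum is an upper bound on the log evidence log p(D_e | D_r), which is the "evidence upper bound" the abstract refers to. The first term pushes belief away from parameters that explain the erased data, while the second tethers q_u to the posterior given the full data, preventing catastrophic unlearning. In practice only a VI approximation q(θ | D) of p(θ | D) is available from training, which is the added difficulty the abstract notes.

As a purely illustrative sketch (not the paper's implementation), this objective can be minimized with reparameterized gradients when both the full-data approximation q(θ | D) and q_u(θ) are diagonal Gaussians, e.g. for Bayesian logistic regression in PyTorch. All names below (`unlearn_eubo`, `mu_full`, etc.) are our own, and the toy full-data posterior is random rather than obtained by actual VI training.

```python
import torch

def gaussian_kl(mu_q, log_sig_q, mu_p, log_sig_p):
    """KL[N(mu_q, sig_q^2) || N(mu_p, sig_p^2)] for diagonal Gaussians."""
    var_q, var_p = (2 * log_sig_q).exp(), (2 * log_sig_p).exp()
    return 0.5 * ((var_q + (mu_q - mu_p) ** 2) / var_p
                  - 1.0 + 2 * (log_sig_p - log_sig_q)).sum()

def unlearn_eubo(mu, log_sig, mu_full, log_sig_full, X_e, y_e, n_samples=8):
    """Monte Carlo estimate of E_{q_u}[log p(D_e | theta)] + KL[q_u || q(theta | D)]."""
    eps = torch.randn(n_samples, mu.shape[0])
    theta = mu + log_sig.exp() * eps                      # reparameterized samples from q_u
    logits = theta @ X_e.T                                 # logistic-regression logits on erased points
    log_lik = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, y_e.expand_as(logits), reduction="none").sum(dim=1)
    return log_lik.mean() + gaussian_kl(mu, log_sig, mu_full, log_sig_full)

# Toy example: mu_full / log_sig_full stand in for the VI posterior from training on the full data.
torch.manual_seed(0)
d = 3
mu_full, log_sig_full = torch.randn(d), torch.full((d,), -1.0)
X_e = torch.randn(5, d)                                    # erased data points
y_e = torch.randint(0, 2, (5,)).float()

mu = mu_full.clone().requires_grad_(True)                  # initialize q_u at the full-data belief
log_sig = log_sig_full.clone().requires_grad_(True)
opt = torch.optim.Adam([mu, log_sig], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = unlearn_eubo(mu, log_sig, mu_full, log_sig_full, X_e, y_e)
    loss.backward()
    opt.step()
```

Initializing q_u at the full-data belief and relying on the KL term to keep it nearby reflects the trade-off stated in the abstract; how the paper's two tricks refine this when only an approximate full-data posterior is available is detailed in the paper itself.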
