Paper Title

Zero-Shot Machine Unlearning

Paper Authors

Vikram S. Chundawat, Ayush K. Tarun, Murari Mandal, Mohan Kankanhalli

Paper Abstract

Modern privacy regulations grant citizens the right to be forgotten by products, services and companies. In the case of machine learning (ML) applications, this necessitates deletion of data not only from storage archives but also from ML models. Due to an increasing need for regulatory compliance in ML applications, machine unlearning is becoming an emerging research problem. Right-to-be-forgotten requests come in the form of removal of a certain set or class of data from an already trained ML model. Practical considerations preclude retraining of the model from scratch after discarding the deleted data. The few existing studies use either the whole training data, a subset of the training data, or some metadata stored during training to update the model weights for unlearning. However, in many cases, no data related to the training process or training samples may be accessible for the unlearning purpose. We therefore ask the question: is it possible to achieve unlearning with zero training samples? In this paper, we introduce the novel problem of zero-shot machine unlearning, which caters to the extreme but practical scenario where zero original data samples are available for use. We then propose two novel solutions for zero-shot machine unlearning based on (a) error minimizing-maximizing noise and (b) gated knowledge transfer. These methods remove the information of the forget data from the model while maintaining the model efficacy on the retain data. The zero-shot approach offers good protection against model inversion attacks and membership inference attacks. We introduce a new evaluation metric, the Anamnesis Index (AIN), to effectively measure the quality of the unlearning method. The experiments show promising results for unlearning in deep learning models on benchmark vision datasets. The source code is available here: https://github.com/ayu987/zero-shot-unlearning
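The abstract names two data-free mechanisms but gives no implementation detail, so the sketches below are illustrative reconstructions only, not the authors' code (the actual implementation is in the linked repository). All function and parameter names here (`learn_class_noise`, `gated_distillation_step`, `threshold`, and so on) are hypothetical.

A minimal sketch of idea (a), error minimizing-maximizing noise, assuming a trained PyTorch classifier: with no training samples available, optimized noise batches stand in for data. Error-maximizing noise is learned for the forget class and error-minimizing noise for the remaining classes, and the model is then fine-tuned on this noise set to impair the forget class while preserving the rest (the fine-tuning step is omitted below).

```python
import torch
import torch.nn.functional as F

def learn_class_noise(model, target_class, maximize_error, shape,
                      steps=200, lr=0.1, device="cpu"):
    """Optimize a noise batch that serves as a data-free proxy for one class.

    maximize_error=True drives the model's error on `target_class` up
    (forget-class noise); False drives it down (retain-class proxy noise).
    """
    noise = torch.randn(shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)
    labels = torch.full((shape[0],), target_class, device=device)
    for _ in range(steps):
        loss = F.cross_entropy(model(noise), labels)
        if maximize_error:
            loss = -loss          # gradient ascent on the error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return noise.detach()
```

And a sketch of idea (b), gated knowledge transfer: a generator produces pseudo-samples and a student model is distilled from the original (teacher) model, while a band-stop gate drops any pseudo-sample on which the teacher places non-negligible probability mass on a forget class, so knowledge of the forget data never reaches the student. The generator's own adversarial update, which the full method would alternate with this step, is omitted here.

```python
def gated_distillation_step(teacher, student, generator, student_opt,
                            forget_classes, threshold=0.01,
                            z_dim=100, batch_size=64, device="cpu"):
    """One student update on gated, generator-produced pseudo-samples."""
    z = torch.randn(batch_size, z_dim, device=device)
    x = generator(z).detach()                 # synthetic inputs, no real data
    with torch.no_grad():
        t_probs = F.softmax(teacher(x), dim=1)
    # Band-stop gate: keep only samples with negligible forget-class mass.
    keep = t_probs[:, forget_classes].max(dim=1).values < threshold
    if keep.sum() == 0:
        return None                           # whole batch was gated out
    s_log_probs = F.log_softmax(student(x[keep]), dim=1)
    loss = F.kl_div(s_log_probs, t_probs[keep], reduction="batchmean")
    student_opt.zero_grad()
    loss.backward()
    student_opt.step()
    return loss.item()
```

Both sketches assume class-level unlearning on a softmax classifier, which matches the class-removal scenario the abstract describes.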
