Paper Title


On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses

Authors

Anshuman Chhabra, Ashwin Sekhari, Prasant Mohapatra

Abstract


Clustering models constitute a class of unsupervised machine learning methods which are used in a number of application pipelines, and play a vital role in modern data science. With recent advancements in deep learning -- deep clustering models have emerged as the current state-of-the-art over traditional clustering approaches, especially for high-dimensional image datasets. While traditional clustering approaches have been analyzed from a robustness perspective, no prior work has investigated adversarial attacks and robustness for deep clustering models in a principled manner. To bridge this gap, we propose a blackbox attack using Generative Adversarial Networks (GANs) where the adversary does not know which deep clustering model is being used, but can query it for outputs. We analyze our attack against multiple state-of-the-art deep clustering models and real-world datasets, and find that it is highly successful. We then employ some natural unsupervised defense approaches, but find that these are unable to mitigate our attack. Finally, we attack Face++, a production-level face clustering API service, and find that we can significantly reduce its performance as well. Through this work, we thus aim to motivate the need for truly robust deep clustering models.
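The threat model in the abstract — an adversary that cannot see the deep clustering model's internals but can query it for cluster assignments — can be illustrated with a minimal sketch. This is not the paper's method: the authors train a GAN to generate perturbations, whereas the toy below uses simple bounded random-search perturbations against a hypothetical nearest-centroid "black box" (`cluster_api`, `CENTROIDS`, and `attack` are all made-up names for illustration). The point it demonstrates is the query-only access pattern and the success metric: the fraction of cluster assignments that flip under a bounded perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep clustering model: assigns each sample to the
# nearest of two fixed centroids. The attacker treats this as a black box
# and may only query it for cluster labels (the abstract's threat model).
CENTROIDS = np.array([[0.0, 0.0], [4.0, 4.0]])

def cluster_api(x):
    """Black-box query: return cluster indices for samples x."""
    d = np.linalg.norm(x[:, None, :] - CENTROIDS[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Samples drawn near each centroid, so the clean clustering is unambiguous.
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(4.0, 0.3, (20, 2))])
clean = cluster_api(X)

def attack(x, labels, eps=3.0, tries=200):
    """Query-only attack sketch: the paper trains a GAN generator to
    produce perturbations; here we substitute bounded random search,
    keeping the perturbed batch that flips the most assignments."""
    best = x.copy()
    for _ in range(tries):
        cand = x + rng.uniform(-eps, eps, x.shape)
        if (cluster_api(cand) != labels).sum() > (cluster_api(best) != labels).sum():
            best = cand
    return best

X_adv = attack(X, clean)
flipped = (cluster_api(X_adv) != clean).mean()
print("fraction of cluster assignments flipped:", round(flipped, 2))
```

Even this crude search flips a noticeable fraction of assignments, which is the attack-success notion the abstract refers to; the paper's GAN generator learns such perturbations far more effectively and transfers them across unknown models.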
