Paper Title


Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning

Paper Authors

Yiran Huang, Yexu Zhou, Michael Hefenbrock, Till Riedel, Likun Fang, Michael Beigl

Paper Abstract


The vulnerability of high-performance machine learning models implies a security risk in applications with real-world consequences. Research on adversarial attacks is beneficial for guiding the development of machine learning models on the one hand and finding targeted defenses on the other. However, most adversarial attacks today leverage gradient or logit information from the model to generate adversarial perturbations. Work in the more realistic setting of decision-based attacks, which generate adversarial perturbations solely by observing the output label of the targeted model, is still relatively rare and mostly relies on gradient-estimation strategies. In this work, we propose a pixel-wise decision-based attack algorithm that finds a distribution of adversarial perturbations through a reinforcement learning algorithm. We call this method Decision-based Black-box Attack with Reinforcement learning (DBAR). Experiments show that the proposed approach outperforms state-of-the-art decision-based attacks with a higher attack success rate and greater transferability.
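The abstract's core idea, learning a *distribution* of perturbations while observing only the model's output label, can be illustrated with a minimal sketch. The code below is not the authors' DBAR algorithm; it is a hypothetical toy in which a Gaussian perturbation distribution is trained with a REINFORCE-style update, the reward being whether the sampled perturbation flips the label of a stand-in black-box classifier (`toy_classifier`, an assumed placeholder). The norm penalty weight and learning rates are illustrative choices.

```python
import numpy as np

def toy_classifier(x):
    # Stand-in "black box": exposes only a hard label (no gradients, no logits).
    return int(x.sum() > 0)

def dbar_sketch(x, true_label, query_model, steps=300, pop=16, lr=0.05, seed=0):
    """Hedged sketch of a decision-based attack via a learned perturbation
    distribution: fit the mean and log-std of a Gaussian over perturbations
    using only output labels, with a REINFORCE-style gradient estimate."""
    rng = np.random.default_rng(seed)
    mu = np.zeros_like(x)                 # mean of perturbation distribution
    log_sigma = np.full_like(x, -1.0)     # per-pixel log standard deviation
    for _ in range(steps):
        eps = rng.standard_normal((pop,) + x.shape)
        delta = mu + np.exp(log_sigma) * eps      # sampled perturbations
        # Reward: 1 if the label flips, minus a small penalty on perturbation size.
        rewards = np.array([
            float(query_model(x + d) != true_label) - 0.01 * np.linalg.norm(d)
            for d in delta
        ])
        adv = rewards - rewards.mean()            # baseline-subtracted advantage
        # Score-function (REINFORCE) gradients of the Gaussian log-density.
        mu += lr * (adv[:, None] * eps).mean(0) / np.exp(log_sigma)
        log_sigma += lr * (adv[:, None] * (eps ** 2 - 1)).mean(0)
    return mu  # mean of the learned perturbation distribution
```

Because the attack only compares output labels, the same loop applies to any query-only model; the distributional view means a whole family of adversarial perturbations is learned rather than a single point, which is one intuition behind the transferability claim in the abstract.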
