Paper Title
Fast Local Attack: Generating Local Adversarial Examples for Object Detectors
Paper Authors
Paper Abstract
Deep neural networks are vulnerable to adversarial examples: adding imperceptible adversarial perturbations to an image is enough to make them fail. Most existing research focuses on attacking image classifiers or anchor-based object detectors, but these methods generate global perturbations over the whole image, which is unnecessary. In our work, we leverage higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors. As a result, our method is less computationally intensive and achieves better black-box attack and transfer attack performance. The adversarial examples generated by our method are not only capable of attacking anchor-free object detectors, but can also be transferred to attack anchor-based object detectors.
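The abstract does not spell out the attack procedure, but a local adversarial attack of this kind is commonly implemented by restricting a gradient-based perturbation to a spatial mask over object regions. Below is a minimal, hypothetical PGD-style sketch of that idea; the `model`, `loss_fn`, and `mask` inputs are placeholders, and this illustrates masked local perturbation in general, not the paper's actual Fast Local Attack.

```python
import torch

def masked_pgd_attack(model, image, mask, loss_fn,
                      eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style L-infinity attack whose perturbation is confined to `mask`.

    A generic sketch of a *local* adversarial attack: only pixels where
    mask == 1 (e.g. predicted object regions) are perturbed, so the rest
    of the image stays untouched.

    model   -- a detector whose output is consumed by `loss_fn` (assumed)
    image   -- input tensor of shape (1, 3, H, W), values in [0, 1]
    mask    -- binary tensor broadcastable to `image` marking the attack region
    loss_fn -- objective to *maximize* (e.g. a detection-confidence loss)
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv))
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Ascend the loss, but only inside the masked (local) region.
            adv = adv + alpha * grad.sign() * mask
            # Keep the perturbation within the eps ball and valid pixel range.
            adv = image + (adv - image).clamp(-eps, eps)
            adv = adv.clamp(0.0, 1.0)
        adv = adv.detach()
    return adv
```

In practice the mask would presumably be derived from the detector's own high-confidence predictions (the higher-level semantic information the abstract mentions), so the perturbation stays confined to the image regions that actually drive detection.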