Paper Title
Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors
Paper Authors
Abstract
Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years. Most of the attacks proposed have targeted the model's integrity (i.e., caused the model to make incorrect predictions), while adversarial attacks targeting the model's availability, a critical aspect in safety-critical domains such as autonomous driving, have not yet been explored by the machine learning research community. In this paper, we propose a novel attack that negatively affects the decision latency of an end-to-end object detection pipeline. We craft a universal adversarial perturbation (UAP) that targets a widely used technique integrated into many object detector pipelines -- non-maximum suppression (NMS). Our experiments demonstrate the proposed UAP's ability to increase the processing time of individual frames by adding "phantom" objects that overload the NMS algorithm while preserving the detection of the original objects, which allows the attack to go undetected for a longer period of time.
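To illustrate why flooding NMS with candidates inflates latency, the following is a minimal greedy-NMS sketch (not the paper's implementation, and not any specific detector's code): each kept box must be compared against all remaining candidates, so a large set of high-scoring, mutually non-overlapping "phantom" boxes pushes the algorithm toward its quadratic worst case.

```python
import numpy as np

def iou(box, boxes):
    # IoU of one box [x1, y1, x2, y2] against an (N, 4) array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box and discard
    # all remaining boxes that overlap it above the IoU threshold.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # One IoU pass over all survivors per kept box: if phantom boxes
        # rarely overlap, nothing is suppressed and cost grows as O(n^2).
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the second box overlaps the first and is suppressed
```

Under this sketch, an attacker who makes the detector emit many confident, well-separated candidate boxes ensures almost no suppression occurs, so nearly every candidate triggers a full comparison pass over the rest.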