Paper Title
Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises
Paper Authors
Paper Abstract
Adversarial attacks on CNNs aim to deceive models into misbehaving by adding imperceptible perturbations to images. This property helps to understand neural networks more deeply and to improve the robustness of deep learning models. Although several works have focused on attacking image classifiers and object detectors, an effective and efficient method for attacking single object trackers of any target in a model-free way is still lacking. In this paper, a cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers. An effective and efficient perturbation generator is trained with a carefully designed adversarial loss that simultaneously cools the hot regions on the heatmaps where the target exists and forces the predicted bounding box to shrink, making the tracked target invisible to the tracker. Extensive experiments on the OTB100, VOT2018, and LaSOT datasets show that our method can effectively fool the state-of-the-art SiameseRPN++ tracker by adding small perturbations to the template or the search regions. Moreover, our method has good transferability and is able to deceive other top-performing trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP. The source code is available at https://github.com/MasterBin-IIAU/CSA.
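To make the adversarial objective concrete, the sketch below illustrates a cooling-shrinking style loss in PyTorch. It is a minimal illustration only: the function name, margin values, weighting, and tensor layout are assumptions for exposition, not the paper's exact formulation (the reference implementation is in the linked repository).

```python
import torch

def cooling_shrinking_loss(cls_scores, reg_deltas, target_mask,
                           cool_margin=-5.0, shrink_margin=-1.0,
                           lambda_shrink=1.0):
    """Illustrative adversarial loss in the spirit of the cooling-shrinking attack.

    cls_scores:  (N, 2) tracker classification logits per proposal
                 (background, target).
    reg_deltas:  (N, 4) predicted box offsets (dx, dy, dw, dh).
    target_mask: (N,) boolean mask of proposals covering the target
                 (the "hot" regions on the heatmap).
    Margins and weighting here are illustrative assumptions.
    """
    hot = target_mask

    # Cooling term: push the target score below the background score
    # by a margin on hot proposals, so the heatmap turns "cold".
    score_gap = cls_scores[hot, 1] - cls_scores[hot, 0]
    cooling = torch.clamp(score_gap - cool_margin, min=0).mean()

    # Shrinking term: drive the predicted width/height offsets downward
    # so the regressed bounding box collapses around the target.
    dw, dh = reg_deltas[hot, 2], reg_deltas[hot, 3]
    shrinking = (torch.clamp(dw - shrink_margin, min=0)
                 + torch.clamp(dh - shrink_margin, min=0)).mean()

    return cooling + lambda_shrink * shrinking
```

In this sketch, the cooling term lowers the foreground (target) score relative to the background on proposals that cover the target, while the shrinking term pushes the predicted width/height offsets negative so the regressed box contracts; the perturbation generator would be trained to minimize this loss on the perturbed template or search region.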