Paper Title
Adversarial Attack Against Image-Based Localization Neural Networks
Paper Authors
Paper Abstract
In this paper, we present a proof of concept for an adversarial attack against the image-based localization module of an autonomous vehicle. The attack aims to cause the vehicle to make wrong navigational decisions and prevent it from reaching a desired predefined destination in a simulated urban environment. A database of rendered images allowed us to train a deep neural network that performs the localization task, and to develop, implement, and evaluate the adversarial pattern. Our tests show that, using this adversarial attack, we can prevent the vehicle from turning at a given intersection. This is achieved by manipulating the vehicle's navigation module into falsely estimating its current position, so that it fails to initiate the turning procedure until the vehicle has missed its last opportunity to perform a safe turn at the intersection.
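To make the idea in the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how an adversarial pattern could be optimized against an image-based localization network. It assumes PyTorch, a hypothetical pose-regression model mapping an image to a planar pose (x, y, heading), and illustrative values for the perturbation budget and step count; the attack simply maximizes the predicted-position error with projected gradient ascent.

# Minimal sketch (assumptions: PyTorch, a pose-regression model, illustrative
# epsilon/step values). Optimizes a perturbation that pushes the localization
# network's position estimate away from the true pose.
import torch
import torch.nn as nn

def adversarial_pattern(model: nn.Module,
                        image: torch.Tensor,       # (1, 3, H, W), values in [0, 1]
                        true_pose: torch.Tensor,   # (1, 3): x, y, heading
                        epsilon: float = 8 / 255,  # L-inf budget (assumed)
                        steps: int = 40,
                        step_size: float = 1 / 255) -> torch.Tensor:
    """Projected gradient ascent that maximizes the predicted-position error."""
    model.eval()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        pred_pose = model(image + delta)
        # A larger position error means a stronger attack, so we ascend on this loss.
        loss = torch.norm(pred_pose[:, :2] - true_pose[:, :2], dim=1).mean()
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)                   # stay within the budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep the image valid
        delta.grad.zero_()
    return delta.detach()

In this sketch the perturbation covers the whole input frame; the paper's setting of a physical pattern placed in the scene would additionally require rendering the pattern into the camera view, which is omitted here.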