Paper Title

Adversarial Attacks on Monocular Pose Estimation

Paper Authors

Hemang Chawla, Arnav Varma, Elahe Arani, Bahram Zonooz

Paper Abstract

Advances in deep learning have resulted in steady progress in computer vision with improved accuracy on tasks such as object detection and semantic segmentation. Nevertheless, deep neural networks are vulnerable to adversarial attacks, thus presenting a challenge in reliable deployment. Two of the prominent tasks in 3D scene-understanding for robotics and advanced driver assistance systems are monocular depth and pose estimation, often learned together in an unsupervised manner. While studies evaluating the impact of adversarial attacks on monocular depth estimation exist, a systematic demonstration and analysis of adversarial perturbations against pose estimation are lacking. We show how additive imperceptible perturbations can not only change predictions to increase the trajectory drift but also catastrophically alter its geometry. We also study the relation between adversarial perturbations targeting monocular depth and pose estimation networks, as well as the transferability of perturbations to other networks with different architectures and losses. Our experiments show how the generated perturbations lead to notable errors in relative rotation and translation predictions and elucidate vulnerabilities of the networks.
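
The "additive imperceptible perturbations" described in the abstract can be reproduced in spirit with a projected-gradient-style attack. Below is a minimal PyTorch sketch, not the authors' implementation: `PoseNet` is a hypothetical stand-in for a pose regressor that maps two consecutive frames to a 6-DoF relative pose (3 translation + 3 rotation components), and `pgd_pose_attack` crafts an L-infinity-bounded perturbation that pushes the predicted pose away from the clean estimate.

```python
# Hedged sketch of an L_inf-bounded PGD attack on a monocular pose network.
# `PoseNet` is a hypothetical placeholder, NOT the paper's architecture.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Placeholder pose regressor: two RGB frames -> 6-DoF relative pose."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)  # stack frames channel-wise
        return self.head(self.encoder(x).flatten(1))

def pgd_pose_attack(model, frame_t, frame_t1, eps=2/255, alpha=0.5/255, steps=10):
    """Craft an additive perturbation that maximizes the deviation of the
    predicted relative pose from the clean prediction (gradient ascent)."""
    with torch.no_grad():
        clean_pose = model(frame_t, frame_t1)
    delta = torch.zeros_like(frame_t1, requires_grad=True)
    for _ in range(steps):
        adv_pose = model(frame_t, frame_t1 + delta)
        # Drive rotation and translation away from the clean estimate.
        loss = (adv_pose - clean_pose).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascent step on the loss
            delta.clamp_(-eps, eps)              # keep perturbation imperceptible
            # keep the perturbed frame a valid image in [0, 1]
            delta.copy_((frame_t1 + delta).clamp(0, 1) - frame_t1)
        delta.grad.zero_()
    return delta.detach()

if __name__ == "__main__":
    net = PoseNet().eval().requires_grad_(False)
    f_t, f_t1 = torch.rand(1, 3, 128, 416), torch.rand(1, 3, 128, 416)
    delta = pgd_pose_attack(net, f_t, f_t1)
    print("max |delta| =", delta.abs().max().item())
```

Clamping `delta` to `eps` (here 2/255) keeps the perturbation visually imperceptible, while the final projection keeps the perturbed frame a valid image; the trajectory drift the abstract refers to then arises as these per-frame relative-pose errors compose over a sequence.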
