Paper Title
Downscaling Attack and Defense: Turning What You See Back Into What You Get
Paper Authors
Paper Abstract
The resizing of images, typically a required preprocessing step in computer vision systems, is vulnerable to attack. Images can be crafted so that they look completely different at machine-vision scales than at other scales, and the default settings of some common computer vision and machine learning systems are vulnerable to this. We show that defenses exist and are trivial to administer, provided defenders are aware of the threat. These attacks and defenses help establish the role of input sanitization in machine learning.
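To illustrate the class of attack the abstract describes, the sketch below crafts an image that looks uniformly gray at full resolution but reveals a hidden checkerboard after naive nearest-neighbor downscaling. The indexing formula `i * H // out_h` is an assumption about one specific naive resizer, not the paper's exact method; the attack simply places payload values only at the pixels that resizer will sample.

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Naive nearest-neighbor downscaling: sample one source pixel per output pixel."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h  # source rows the resizer reads
    cols = np.arange(out_w) * w // out_w  # source columns the resizer reads
    return img[np.ix_(rows, cols)]

H, W, oh, ow = 64, 64, 8, 8

# What a human sees at full size: an almost uniformly gray image.
attack = np.full((H, W), 128, dtype=np.uint8)

# Hidden payload: an 8x8 black/white checkerboard.
hidden = ((np.add.outer(np.arange(oh), np.arange(ow)) % 2) * 255).astype(np.uint8)

# Overwrite only the 64 pixels (of 4096) that the resizer will sample.
rows = np.arange(oh) * H // oh
cols = np.arange(ow) * W // ow
attack[np.ix_(rows, cols)] = hidden

small = nearest_downscale(attack, oh, ow)
# `small` is the checkerboard, while `attack` remains ~98.4% unmodified gray.
```

A defense in the same spirit as the abstract's claim would be equally simple here: averaging over each source block (area interpolation) instead of sampling single pixels would return a near-gray result, since only 1 in 64 pixels per block carries the payload.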