Paper Title

Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics

Authors

Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin

Abstract

Universal adversarial perturbation attacks are widely used to analyze image classifiers that employ convolutional neural networks. Nowadays, some attacks can deceive image- and video-quality metrics, so stability analysis of these metrics is important. Indeed, if an attack can confuse a metric, an attacker can easily inflate quality scores. When developers of image- and video-processing algorithms can boost their scores through such detached processing, algorithm comparisons are no longer fair. Inspired by the idea of universal adversarial perturbations for classifiers, we propose a new method that attacks differentiable no-reference quality metrics through a universal perturbation. We applied this method to seven no-reference image- and video-quality metrics (PaQ-2-PiQ, Linearity, VSFA, MDTVSFA, KonCept512, NIMA, and SPAQ). For each one, we trained a universal perturbation that increases the respective scores. We also propose a method for assessing metric stability and identify the metrics that are the most vulnerable and the most resistant to our attack. The existence of successful universal perturbations appears to diminish a metric's ability to provide reliable scores. We therefore recommend our proposed method as an additional verification of metric reliability to complement traditional subjective tests and benchmarks.
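The core idea described in the abstract, training one shared perturbation by gradient ascent so that it raises a differentiable metric's score on many images at once, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_metric` is a hypothetical stand-in for a real no-reference metric (e.g. PaQ-2-PiQ) with an analytic gradient, and the L-infinity clipping bound `eps` is an assumed constraint to keep the perturbation small.

```python
import numpy as np

def toy_metric(img):
    # Hypothetical differentiable "quality" score (stand-in for a real
    # no-reference metric): mean of a smooth nonlinearity of the pixels.
    return np.tanh(img).mean()

def toy_metric_grad(img):
    # Analytic gradient of toy_metric with respect to the image.
    return (1.0 - np.tanh(img) ** 2) / img.size

def train_universal_perturbation(images, steps=200, lr=0.1, eps=0.1):
    """Gradient ascent on one perturbation shared across a training set,
    clipped to an L-infinity ball of radius eps (assumed constraint)."""
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        # Average the score gradient over the training images.
        grad = np.mean([toy_metric_grad(im + delta) for im in images], axis=0)
        delta += lr * grad                 # ascend the metric score
        delta = np.clip(delta, -eps, eps)  # keep the perturbation small
    return delta

rng = np.random.default_rng(0)
imgs = [rng.uniform(-1, 1, size=(8, 8)) for _ in range(4)]
delta = train_universal_perturbation(imgs)
before = np.mean([toy_metric(im) for im in imgs])
after = np.mean([toy_metric(im + delta) for im in imgs])
```

After training, `after` exceeds `before`: the single perturbation raises the toy metric's score on every image, which is the failure mode the paper uses to flag a metric as vulnerable.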
