Paper Title

Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop

Authors

Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma

Abstract

No-reference image quality assessment (NR-IQA) aims to quantify how humans perceive visual distortions of digital images without access to their undistorted references. NR-IQA models are extensively studied in computational vision, and are widely used for performance evaluation and perceptual optimization of man-made vision systems. Here we make one of the first attempts to examine the perceptual robustness of NR-IQA models. Under a Lagrangian formulation, we identify insightful connections of the proposed perceptual attack to previous beautiful ideas in computer vision and machine learning. We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models (as approximations to human perception of just-noticeable differences). Through carefully designed psychophysical experiments, we find that all four NR-IQA models are vulnerable to the proposed perceptual attack. More interestingly, we observe that the generated counterexamples are not transferable, manifesting themselves as distinct design flaws of respective NR-IQA methods.
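The attack described in the abstract can be read as a Lagrangian optimization: perturb an image to change the NR-IQA model's predicted score while a full-reference IQA model keeps the perturbation perceptually negligible. The sketch below is a minimal NumPy illustration of that idea, assuming a toy linear quality predictor and plain MSE as stand-ins for the NR-IQA and full-reference IQA models (both stand-ins are hypothetical, not the models used in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a toy linear NR-IQA predictor f(x) = w . x.
w = rng.normal(size=64)

def quality(x):
    # Predicted quality score of (flattened) image x.
    return w @ x

def perceptual_dist(x, x0):
    # MSE as a crude stand-in for a full-reference IQA model.
    return np.mean((x - x0) ** 2)

x0 = rng.normal(size=64)  # "original" image, flattened
lam = 10.0                # Lagrange multiplier: weight on perceptual fidelity
step = 0.01
x = x0.copy()

for _ in range(200):
    # Gradient of the Lagrangian L(x) = f(x) - lam * d(x, x0):
    # raise the predicted score while staying close to the original.
    grad = w - lam * 2.0 * (x - x0) / x.size
    x += step * grad

print("score gain:", quality(x) - quality(x0))
print("distance:  ", perceptual_dist(x, x0))
```

In the paper's setting, the same loop would run over image pixels with the NR-IQA score and full-reference distance both computed by learned models and differentiated via backpropagation; the multiplier `lam` trades off score change against perceptual visibility.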
