Paper Title
Content-Diverse Comparisons improve IQA
Paper Authors
Paper Abstract
Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains highly challenging. Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM. However, current comparisons ignore the fact that image content affects quality assessment, as comparisons only occur between images of similar content. This restricts the diversity and number of image pairs that the model is exposed to during training. In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax comparison constraints and compare pairs of images with differing content. This increases the variety of available comparisons. Secondly, we introduce listwise comparisons to provide the model with a holistic view. By including differentiable regularizers derived from correlation coefficients, models can better adjust predicted scores relative to one another. Evaluation on multiple benchmarks, covering a wide range of distortions and image content, shows the effectiveness of our learning scheme for training image quality assessment models.
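To make the two ingredients concrete, the following is a minimal PyTorch-style sketch of what such a training objective could look like, assuming 1-D tensors of predicted scores and mean opinion scores (MOS) per batch. The abstract does not give the exact formulation, so the margin-ranking form of the cross-content pairwise term, the random in-batch pairing, and the names margin and alpha are illustrative assumptions; only the idea of a correlation-derived differentiable regularizer comes from the text.

    # Illustrative sketch only: exact losses and hyper-parameters are not
    # specified in the abstract; `margin`, `alpha`, and the random pairing
    # are assumptions for the example.
    import torch
    import torch.nn.functional as F

    def pairwise_ranking_loss(pred_a, pred_b, mos_a, mos_b, margin=0.1):
        # Cross-content pairwise term: images a and b may depict different
        # content; the model is only asked to rank them by quality (MOS).
        # target = +1 if a should score higher than b, else -1
        # (ties are rare with continuous MOS values).
        target = torch.sign(mos_a - mos_b)
        return F.margin_ranking_loss(pred_a, pred_b, target, margin=margin)

    def plcc_regularizer(pred, mos, eps=1e-8):
        # Listwise term: differentiable Pearson correlation between predicted
        # scores and ground-truth MOS over the whole batch (the "list").
        pred_c = pred - pred.mean()
        mos_c = mos - mos.mean()
        r = (pred_c * mos_c).sum() / (pred_c.norm() * mos_c.norm() + eps)
        return 1.0 - r  # maximize correlation by minimizing 1 - r

    def total_loss(pred, mos, alpha=1.0):
        # Randomly pair images within the batch, regardless of content.
        idx = torch.randperm(pred.numel())
        half = pred.numel() // 2
        a, b = idx[:half], idx[half:2 * half]
        return (pairwise_ranking_loss(pred[a], pred[b], mos[a], mos[b])
                + alpha * plcc_regularizer(pred, mos))

Because both terms operate on batch-level score vectors rather than on co-content pairs, every image in the batch can be compared with every other, which is what increases the number and diversity of available comparisons.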