Paper Title
MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
Paper Authors
Paper Abstract
No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception. Unfortunately, existing NR-IQA methods are far from meeting the need for accurate quality-score prediction on GAN-based distorted images. To this end, we propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortion. We first extract features via ViT; then, to strengthen global and local interactions, we propose the Transposed Attention Block (TAB) and the Scale Swin Transformer Block (SSTB). These two modules apply attention mechanisms across the channel and spatial dimensions, respectively. In this multi-dimensional manner, the modules cooperatively increase the interaction among different regions of images, both globally and locally. Finally, a dual-branch structure for patch-weighted quality prediction is applied to predict the final score, weighting each patch's score by its predicted importance. Experimental results demonstrate that MANIQA outperforms state-of-the-art methods on four standard datasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. Besides, our method ranked first in the final testing phase of the NTIRE 2022 Perceptual Image Quality Assessment Challenge Track 2: No-Reference. Codes and models are available at https://github.com/IIGROUP/MANIQA.
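The dual-branch patch-weighted prediction described above can be sketched as a weighted average: one branch outputs a quality score per patch, the other a weight (importance) per patch, and the final image score is the weight-normalized sum. This is a minimal illustrative sketch, assuming a simple normalized weighting; the function name and the exact normalization are hypothetical, not MANIQA's actual implementation.

```python
def patch_weighted_score(patch_scores, patch_weights):
    """Combine per-patch quality scores into a single image score.

    patch_scores:  per-patch quality predictions (scoring branch).
    patch_weights: per-patch importance values (weighting branch).
    The final score is the importance-weighted average of patch scores,
    so salient patches contribute more to the overall quality estimate.
    """
    total_weight = sum(patch_weights)
    return sum(w * s for w, s in zip(patch_weights, patch_scores)) / total_weight

# Example: three patches; the first patch is judged twice as important.
score = patch_weighted_score([0.8, 0.5, 0.9], [2.0, 1.0, 1.0])  # -> 0.75
```

In practice the weighting branch lets the network down-weight uninformative patches (e.g. flat background regions), which matters for GAN-based distortions whose artifacts are often spatially localized.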