Paper Title
Rethinking Attention-Model Explainability through Faithfulness Violation Test
Paper Authors
Paper Abstract
Attention mechanisms are dominating the explainability of deep models. They produce probability distributions over the input, which are widely deemed as feature-importance indicators. However, in this paper, we find one critical limitation in attention explanations: the weakness in identifying the polarity of feature impact. This can be misleading -- features with higher attention weights may not faithfully contribute to model predictions; instead, they can impose suppression effects. With this finding, we reflect on the explainability of current attention-based techniques, such as Attention$\odot$Gradient and LRP-based attention explanations. We first propose an actionable diagnostic methodology (henceforth, the faithfulness violation test) to measure the consistency between explanation weights and the impact polarity. Through extensive experiments, we then show that most tested explanation methods are unexpectedly hindered by the faithfulness violation issue, especially raw attention. Empirical analyses of the factors affecting violation issues further provide useful observations for adopting explanation methods in attention models.
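A minimal sketch of how such a violation check could be set up for a single example, assuming impact polarity is estimated by single-feature erasure (the names `faithfulness_violation`, `model_prob`, and `erased_probs` below are illustrative, not the paper's API):

```python
import numpy as np

def faithfulness_violation(attn_weights, model_prob, erased_probs):
    """Flag a faithfulness violation for one example (illustrative sketch).

    attn_weights : (n,) explanation weights over the n input features.
    model_prob   : model probability of the predicted class on the full input.
    erased_probs : (n,) probabilities of the same class after erasing
                   (e.g., masking) each feature one at a time.
    """
    # Impact polarity via erasure: a positive drop means the feature
    # supports the prediction; a negative drop means it suppresses it.
    impact = model_prob - np.asarray(erased_probs)
    # Violation: the top-weighted ("most important") feature is suppressive.
    top = int(np.argmax(attn_weights))
    return impact[top] < 0
```

Averaging this indicator over a dataset would give a violation rate per explanation method; under this reading of the test, a faithful method should rarely assign its highest weight to a feature whose erasure increases the predicted probability.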