Paper Title
The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Paper Authors
Paper Abstract
There is a recent surge of interest in using attention as explanation of model predictions, with mixed evidence on whether attention can be used as such. While attention conveniently gives us one weight per input token and is easily extracted, it is often unclear toward what goal it is used as explanation. We find that often that goal, whether explicitly stated or not, is to find out what input tokens are the most relevant to a prediction, and that the implied user for the explanation is a model developer. For this goal and user, we argue that input saliency methods are better suited, and that there are no compelling reasons to use attention, despite the coincidence that it provides a weight for each input. With this position paper, we hope to shift some of the recent focus on attention to saliency methods, and for authors to clearly state the goal and user for their explanations.