Paper Title

Ask to Know More: Generating Counterfactual Explanations for Fake Claims

Paper Authors

Shih-Chieh Dai, Yi-Li Hsu, Aiping Xiong, Lun-Wei Ku

Paper Abstract

Automated fact checking systems have been proposed that quickly provide veracity prediction at scale to mitigate the negative influence of fake news on people and on public opinion. However, most studies focus on veracity classifiers of those systems, which merely predict the truthfulness of news articles. We posit that effective fact checking also relies on people's understanding of the predictions. In this paper, we propose elucidating fact checking predictions using counterfactual explanations to help people understand why a specific piece of news was identified as fake. In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately. We frame this research question as contradicted entailment reasoning through question answering (QA). We first ask questions towards the false claim and retrieve potential answers from the relevant evidence documents. Then, we identify the most contradictory answer to the false claim by use of an entailment classifier. Finally, a counterfactual explanation is created using a matched QA pair with three different counterfactual explanation forms. Experiments are conducted on the FEVER dataset for both system and human evaluations. Results suggest that the proposed approach generates the most helpful explanations compared to state-of-the-art methods.
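The three-step pipeline the abstract describes (ask questions about the claim, retrieve candidate answers from evidence, pick the most contradictory answer, then wrap it in an explanation template) can be sketched as below. This is an illustrative toy, not the authors' implementation: the question generation is given as input, and `toy_score` is a keyword-based stand-in for a real entailment (NLI) classifier's contradiction probability.

```python
from typing import Callable, List, Tuple

def select_most_contradictory(
    claim: str,
    qa_pairs: List[Tuple[str, str]],
    contradiction_score: Callable[[str, str], float],
) -> Tuple[str, str]:
    """Step 2-3: return the (question, answer) pair whose retrieved
    answer most contradicts the claim, per the supplied scorer."""
    return max(qa_pairs, key=lambda qa: contradiction_score(claim, qa[1]))

def counterfactual_explanation(claim: str, question: str, answer: str) -> str:
    """One possible explanation template (the paper evaluates three forms;
    this template is a hypothetical example, not one of theirs)."""
    return (f"The claim '{claim}' is refuted: regarding '{question}', "
            f"the evidence instead states '{answer}'")

# Toy contradiction scorer: counts negation cues in the answer.
# A real system would use an NLI model's contradiction score here.
def toy_score(claim: str, answer: str) -> float:
    cues = ("not", "never", "no", "false")
    return float(sum(answer.lower().split().count(c) for c in cues))

claim = "The Eiffel Tower is in Berlin."
qa_pairs = [
    ("Where is the Eiffel Tower?",
     "The Eiffel Tower is in Paris, not Berlin."),
    ("When was the Eiffel Tower built?",
     "It was completed in 1889."),
]
q, a = select_most_contradictory(claim, qa_pairs, toy_score)
print(counterfactual_explanation(claim, q, a))
```

Swapping `toy_score` for a pretrained NLI model (scoring claim-answer pairs and taking the contradiction probability) turns the sketch into the entailment-based selection step the abstract outlines.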
