Paper Title

QA2Explanation: Generating and Evaluating Explanations for Question Answering Systems over Knowledge Graph

Authors

Saeedeh Shekarpour, Abhishek Nadgeri, Kuldeep Singh

Abstract

In the era of Big Knowledge Graphs, Question Answering (QA) systems have reached a milestone in their performance and feasibility. However, their applicability, particularly in specific domains such as the biomedical domain, has not gained wide acceptance due to their "black box" nature, which hinders the transparency, fairness, and accountability of QA systems. As a result, users cannot understand how and why particular questions are answered while others fail. To address this challenge, in this paper we develop an automatic approach for generating explanations during various stages of a pipeline-based QA system. It is a supervised, automatic approach that considers three classes (i.e., success, no answer, and wrong answer) for annotating the output of the involved QA components. Based on the prediction, a template explanation is chosen and integrated into the output of the corresponding component. To measure the effectiveness of the approach, we conducted a user survey on how non-expert users perceive the generated explanations. The results of our study show a significant increase along four human-factor dimensions studied in the Human-Computer Interaction community.
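The explanation step the abstract describes, classifying a QA component's output into one of three classes and filling in a matching template explanation, can be sketched as follows. This is a minimal illustration assuming a classifier has already produced the class label; all function names, component names, and template wordings here are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the template-based explanation step: each QA
# component's output is annotated with one of three classes ("success",
# "no_answer", "wrong_answer"), and an explanation template for that
# class is instantiated. Templates and names are illustrative only.

TEMPLATES = {
    "success": "The {component} component succeeded and produced '{output}'.",
    "no_answer": "The {component} component could not produce any output for your question.",
    "wrong_answer": "The {component} component produced '{output}', which is likely incorrect.",
}

def explain(component: str, output: str, predicted_class: str) -> str:
    """Select the template for the predicted class and fill in the details."""
    template = TEMPLATES[predicted_class]
    return template.format(component=component, output=output)

# Example: an entity-linking component's output labeled as a success.
print(explain("entity linking", "dbr:Barack_Obama", "success"))
```

The generated sentence would then be attached to the corresponding component's output so that a non-expert user sees, at each pipeline stage, why an answer was produced or why it failed.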
