Paper Title
Explainable Knowledge Graph Embedding: Inference Reconciliation for Knowledge Inferences Supporting Robot Actions
Paper Authors
Paper Abstract
Learned knowledge graph representations supporting robots contain a wealth of domain knowledge that drives robot behavior. However, there does not exist an inference reconciliation framework that expresses how a knowledge graph representation affects a robot's sequential decision making. We use a pedagogical approach to explain the inferences of a learned, black-box knowledge graph representation, a knowledge graph embedding. Our interpretable model uses a decision tree classifier to locally approximate the predictions of the black-box model, and it provides natural language explanations interpretable by non-experts. Results from our algorithmic evaluation affirm our model design choices, and results from our user studies with non-experts support the need for the proposed inference reconciliation framework. Critically, results from our simulated robot evaluation indicate that our explanations enable non-experts to correct erratic robot behaviors arising from nonsensical beliefs within the black box.
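To make the pedagogical approach concrete, the sketch below shows one plausible way to locally approximate a black-box model's inferences with a decision tree surrogate and print its rules for verbalization. It is an illustrative assumption, not the paper's implementation: the stand-in black box (`kge_score`), the symbolic features, and the neighborhood sampling scheme are all hypothetical.

```python
# Hypothetical sketch: fit an interpretable decision tree surrogate to the
# local predictions of a black-box knowledge graph model, then print its
# rules as a basis for natural language explanations. All names and data
# here are illustrative assumptions, not the paper's actual system.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Symbolic features describing candidate facts about an object; a real
# system would derive these from the knowledge graph around the queried
# inference rather than hand-pick them.
FEATURES = ["isInRoom(kitchen)", "hasAffordance(graspable)", "near(sink)"]

def kge_score(x: np.ndarray) -> int:
    """Stand-in for the black-box knowledge graph embedding.

    Returns 1 if the model believes the inferred fact holds. A real KGE
    would score (head, relation, tail) embeddings instead of this opaque
    toy rule.
    """
    return int(bool(x[0]) and bool(x[1]))

# Sample perturbations in the neighborhood of the queried inference and
# label each one with the black-box model.
X = rng.integers(0, 2, size=(200, len(FEATURES)))
y = np.array([kge_score(x) for x in X])

# Fit a shallow decision tree so the surrogate stays human-readable.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The tree's decision paths can be translated into natural language
# explanations for non-experts; here we simply print the learned rules.
print(export_text(surrogate, feature_names=FEATURES))
```

In this sketch, each root-to-leaf path of the printed tree corresponds to a candidate explanation (e.g., "the robot believes this because the object is in the kitchen and is graspable"), which is the kind of non-expert-readable output the abstract describes.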