Paper Title
Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems
Paper Authors
Paper Abstract
Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision-making phases of a task. However, a key challenge is that IDS systems are not perfect, and in complex real-world scenarios may produce incorrect output or fail to work altogether. The field of explainable AI planning (XAIP) has sought to develop techniques that make the decision making of sequential decision-making AI systems more explainable to end-users. Critically, prior work in applying XAIP techniques to IDS systems has assumed that the plan being proposed by the planner is always optimal, and therefore the action or plan being recommended as decision support to the user is always correct. In this work, we examine novice user interactions with a non-robust IDS system -- one that occasionally recommends the wrong action, and one that may become unavailable after users have become accustomed to its guidance. We introduce a novel explanation type, subgoal-based explanations, for planning-based IDS systems, which supplement traditional IDS output with information about the subgoal toward which the recommended action would contribute. We demonstrate that subgoal-based explanations lead to improved user task performance, improve user ability to distinguish optimal and suboptimal IDS recommendations, are preferred by users, and enable more robust user performance in the case of IDS failure.