Paper Title
The Utility of Explainable AI in Ad Hoc Human-Machine Teaming
Paper Authors
Paper Abstract
Recent advances in machine learning have led to growing interest in Explainable AI (xAI) to enable humans to gain insight into the decision-making of machine learning models. Despite this recent interest, the utility of xAI techniques has not yet been characterized in human-machine teaming. Importantly, xAI offers the promise of enhancing team situational awareness (SA) and shared mental model development, which are key characteristics of effective human-machine teams. Rapidly developing such mental models is especially critical in ad hoc human-machine teaming, where agents do not have a priori knowledge of others' decision-making strategies. In this paper, we present two novel human-subject experiments quantifying the benefits of deploying xAI techniques within a human-machine teaming scenario. First, we show that xAI techniques can support SA ($p<0.05$). Second, we examine how different SA levels induced via a collaborative AI policy abstraction affect ad hoc human-machine teaming performance. Importantly, we find that the benefits of xAI are not universal, as there is a strong dependence on the composition of the human-machine team. Novices benefit from xAI providing increased SA ($p<0.05$) but are susceptible to cognitive overhead ($p<0.05$). On the other hand, expert performance degrades with the addition of xAI-based support ($p<0.05$), indicating that the cost of paying attention to the xAI outweighs the benefits obtained from being provided additional information to enhance SA. Our results demonstrate that researchers must deliberately design and deploy the right xAI techniques in the right scenario by carefully considering human-machine team composition and how the xAI method augments SA.
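The significance claims above (e.g., $p<0.05$) come from standard hypothesis tests comparing performance across team conditions. As an illustrative sketch only, using entirely synthetic scores (not the paper's actual data or analysis), a two-sample permutation test of a between-condition difference could be computed like this:

```python
import random

def permutation_test(group_a, group_b, n_resamples=10_000, seed=0):
    """One-sided permutation test: is mean(group_a) - mean(group_b)
    larger than expected under the null of no condition effect?"""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # relabel scores at random
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if diff >= observed:
            extreme += 1
    # +1 in numerator and denominator so p is never exactly zero.
    return (extreme + 1) / (n_resamples + 1)

# Hypothetical team-performance scores for two conditions (made up
# for illustration; not measurements from the study).
with_xai = [14, 12, 15, 13, 16, 14, 15]
without_xai = [10, 9, 11, 10, 12, 9, 11]
p = permutation_test(with_xai, without_xai)
print(f"p = {p:.4f}")
```

A permutation test makes no normality assumption, which suits the small samples typical of human-subject studies; the paper itself does not specify which test it used, so this is only one plausible choice.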