Title

Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making

Authors

Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit Dhurandhar, Richard Tomsett

Abstract


Several strands of research have aimed to bridge the gap between artificial intelligence (AI) and human decision-makers in AI-assisted decision-making, where humans are the consumers of AI model predictions and the ultimate decision-makers in high-stakes applications. However, people's perception and understanding are often distorted by their cognitive biases, such as confirmation bias, anchoring bias, and availability bias, to name a few. In this work, we use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting and to mitigate their negative effects on collaborative performance. To this end, we mathematically model cognitive biases and provide a general framework through which researchers and practitioners can understand the interplay between cognitive biases and human-AI accuracy. We then focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration. We implement a time-based de-anchoring strategy and conduct our first user experiment, which validates its effectiveness in human-AI collaborative decision-making. With this result, we design a time allocation strategy for a resource-constrained setting that achieves optimal human-AI collaboration under some assumptions. We then conduct a second user experiment, which shows that our time allocation strategy with explanation can effectively de-anchor the human and improve collaborative performance when the AI model has low confidence and is incorrect.
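To make the time-allocation idea concrete, here is a minimal sketch of a confidence-based policy: tasks where the AI reports low confidence receive a larger share of a fixed deliberation budget, encouraging the human to de-anchor from the AI suggestion. The function name, the threshold, and the weighting rule are illustrative assumptions for this summary, not the paper's actual allocation strategy.

```python
def allocate_time(confidences, total_budget, low_threshold=0.7, low_weight=2.0):
    """Split a fixed time budget (e.g., seconds) across tasks.

    Tasks whose AI confidence falls below `low_threshold` are weighted
    by `low_weight`, so humans spend more time deliberating where the
    AI is most likely to be wrong. All parameters are hypothetical.
    """
    weights = [low_weight if c < low_threshold else 1.0 for c in confidences]
    total_weight = sum(weights)
    return [total_budget * w / total_weight for w in weights]

# Example: three AI predictions, the second with low confidence.
times = allocate_time([0.95, 0.55, 0.90], total_budget=60.0)
# The low-confidence task receives the largest share of the 60 seconds.
```

A real policy would also need to respect per-task minimum times and the experimental interface's constraints; this sketch only captures the core intuition that low AI confidence should buy the human more deliberation time.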
