Paper Title
A Framework for Evaluating Dashboards in Healthcare
Paper Authors
Paper Abstract
In the era of "information overload", effective information provision is essential for enabling rapid response and critical decision making. In making sense of diverse information sources, data dashboards have become an indispensable tool, providing fast, effective, adaptable, and personalized access to information for professionals and the general public alike. However, these objectives place heavy requirements on dashboards as information systems, often resulting in poor usability and ineffective design. Understanding these shortfalls is a challenge given the absence of a consistent and comprehensive approach to dashboard evaluation. In this paper, we systematically review the literature on dashboard implementation in healthcare, a domain where dashboards have been widely deployed and where there is broad interest in improving the current state of the art, and we subsequently analyse the approaches taken towards evaluation. Drawing on the consolidated dashboard literature and our own observations, we introduce a general definition of dashboards that better reflects current trends, together with a task-based classification of dashboards, both of which underpin our subsequent analysis. From a total of 81 papers, we derive seven evaluation scenarios: task performance, behaviour change, interaction workflow, perceived engagement, potential utility, algorithm performance, and system implementation. These scenarios distinguish different evaluation purposes, which we illustrate through measurements, example studies, and common challenges in evaluation study design. We provide a breakdown of each evaluation scenario and highlight some of the subtler, less well-posed questions. We conclude by outlining a number of active discussion points and a set of best practices for dashboard evaluation, relevant to the academic, clinical, and software development communities alike.