Paper title
Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
Paper authors
Paper abstract
The full acceptance of Deep Learning (DL) models in the clinical field remains low relative to the number of high-performing solutions reported in the literature. In particular, end users are reluctant to rely on the raw predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a potential way to temper the raw decisions produced by the DL black box and thus increase the interpretability and acceptability of the results for end users. In this review, we provide an overview of existing methods to quantify the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-life clinical routine. We then discuss evaluation protocols to validate the relevance of uncertainty estimates. Finally, we highlight open challenges of uncertainty quantification in the medical field.