Paper Title
The Implicit Delta Method
Paper Authors
Paper Abstract
Epistemic uncertainty quantification is a crucial part of drawing credible conclusions from predictive models, whether concerned about the prediction at a given point or any downstream evaluation that uses the model as input. When the predictive model is simple and its evaluation differentiable, this task is solved by the delta method, where we propagate the asymptotically-normal uncertainty in the predictive model through the evaluation to compute standard errors and Wald confidence intervals. However, this becomes difficult when the model and/or evaluation becomes more complex. Remedies include the bootstrap, but it can be computationally infeasible when training the model even once is costly. In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of the predictive model to automatically assess downstream uncertainty. We show that the change in the evaluation due to regularization is consistent for the asymptotic variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference. This provides both a reliable quantification of uncertainty in terms of standard errors as well as permits the construction of calibrated confidence intervals. We discuss connections to other approaches to uncertainty quantification, both Bayesian and frequentist, and demonstrate our approach empirically.
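To make the abstract's two ideas concrete, the following is a minimal toy sketch (not the paper's implementation): for data drawn from a normal model, the evaluation V(θ) = θ² of the estimated mean is assessed first by the classical delta method, and then by the finite-difference implicit-delta idea, where the average negative log-likelihood is regularized by ε·V(θ) and the resulting change in the evaluation is rescaled into a variance. In this toy case the regularized minimizer has a closed form, so no retraining loop is needed; the scaling and sign conventions here are assumptions chosen to make the toy example work, and may differ from the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(1.5, 1.0, size=n)  # data from N(mu, sigma^2)

# Predictive model: MLE of the mean. Downstream evaluation: V(theta) = theta^2.
theta_hat = x.mean()
V = lambda t: t ** 2

# Classical delta method: propagate Var(theta_hat) = sigma^2/n through V,
# giving SE(V(theta_hat)) ~= |V'(theta_hat)| * sigma_hat / sqrt(n).
se_delta = abs(2 * theta_hat) * x.std(ddof=1) / np.sqrt(n)

# Implicit delta method (sketch): minimize the average NLL plus eps * V(theta).
# For this model the regularized minimizer is available in closed form:
#   theta_eps = xbar / (1 + 2 * eps * sigma2).
eps = 1e-4
sigma2 = x.var(ddof=1)
theta_eps = theta_hat / (1 + 2 * eps * sigma2)

# Rescale the finite-difference change in the evaluation into a variance
# (sign/scaling under the conventions assumed above).
var_implicit = -(V(theta_eps) - V(theta_hat)) / (eps * n)
se_implicit = np.sqrt(max(var_implicit, 0.0))

# Wald 95% confidence interval for V(theta) from the implicit-delta SE.
ci = (V(theta_hat) - 1.96 * se_implicit, V(theta_hat) + 1.96 * se_implicit)
print(se_delta, se_implicit, ci)
```

The two standard errors agree closely here, illustrating the abstract's consistency claim in miniature: the finite-difference change in the regularized evaluation recovers the delta-method variance without differentiating V through the model.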