Paper Title

Training Uncertainty-Aware Classifiers with Conformalized Deep Learning

Paper Authors

Bat-Sheva Einbinder, Yaniv Romano, Matteo Sesia, Yanfei Zhou

Paper Abstract

Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty and estimate reliable probabilities. In particular, they tend to be overconfident. We begin to address this problem in the context of multi-class classification by developing a novel training algorithm producing models with more dependable uncertainty estimates, without sacrificing predictive power. The idea is to mitigate overconfidence by minimizing a loss function, inspired by advances in conformal inference, that quantifies model uncertainty by carefully leveraging hold-out data. Experiments with synthetic and real data demonstrate this method can lead to smaller conformal prediction sets with higher conditional coverage, after exact calibration with hold-out data, compared to state-of-the-art alternatives.
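
To make the calibration step the abstract refers to ("exact calibration with hold-out data") concrete, below is a minimal split-conformal sketch in Python. It assumes a classifier that outputs softmax probabilities and uses the simple one-minus-true-class-probability score; the function names and score choice are illustrative assumptions, not the paper's training algorithm or its specific conformity score.

```python
# A minimal, self-contained sketch of split-conformal prediction sets for
# multi-class classification. This illustrates the generic hold-out
# calibration step only; it is NOT the paper's conformal-training loss,
# and all names here are hypothetical.
import numpy as np

def calibrate(cal_probs, cal_labels, alpha=0.1):
    """Compute the conformal threshold from hold-out (calibration) data.

    cal_probs : (n, K) array of predicted class probabilities
    cal_labels: (n,) array of integer true labels
    alpha     : target miscoverage level (0.1 gives 90% marginal coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) empirical quantile.
    level = min(np.ceil((1 - alpha) * (n + 1)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_sets(test_probs, qhat):
    """A label joins the set when its nonconformity score is at most qhat."""
    return [np.flatnonzero(1.0 - p <= qhat) for p in test_probs]

# Example: calibrate on hold-out data, then form sets for new points.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=500)   # stand-in softmax outputs
cal_labels = rng.integers(0, 5, size=500)
qhat = calibrate(cal_probs, cal_labels, alpha=0.1)
sets = prediction_sets(rng.dirichlet(np.ones(5), size=3), qhat)
```

Sets built this way contain the true label with probability at least 1 - alpha on average over test points. The paper's contribution is on the training side: shaping the network with a conformal-inspired loss so that the sets produced by this kind of calibration come out smaller and closer to conditional coverage.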
