Paper Title
Few-Shot Calibration of Set Predictors via Meta-Learned Cross-Validation-Based Conformal Prediction
Paper Authors
Paper Abstract
Conventional frequentist learning is known to yield poorly calibrated models that fail to reliably quantify the uncertainty of their decisions. Bayesian learning can improve calibration, but formal guarantees apply only under restrictive assumptions about correct model specification. Conformal prediction (CP) offers a general framework for the design of set predictors with calibration guarantees that hold regardless of the underlying data-generating mechanism. However, when training data are limited, CP tends to produce large, and hence uninformative, prediction sets. This paper introduces a novel meta-learning solution that aims at reducing the prediction set size. Unlike prior work, the proposed meta-learning scheme, referred to as meta-XB, (i) builds on cross-validation-based CP, rather than the less efficient validation-based CP; and (ii) preserves formal per-task calibration guarantees, rather than less stringent task-marginal guarantees. Finally, meta-XB is extended to adaptive nonconformity scores, which are shown empirically to further enhance marginal per-input calibration.
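For context, the abstract's contrast between validation-based (split) CP and cross-validation-based CP concerns how held-out data are used to calibrate the prediction set. Below is a minimal sketch of the standard split (validation-based) CP baseline for classification, assuming precomputed nonconformity scores; the function name, its inputs, and the coverage level `alpha` are illustrative assumptions, not the paper's meta-XB procedure.

```python
import numpy as np

def split_conformal_set(cal_scores, test_scores_per_label, alpha=0.1):
    # cal_scores: nonconformity scores of n held-out calibration examples,
    # evaluated at their true labels (illustrative inputs).
    # test_scores_per_label: nonconformity score of the test input under
    # each candidate label.
    n = len(cal_scores)
    # Finite-sample-corrected quantile level used by standard split CP,
    # giving marginal coverage P(true label in set) >= 1 - alpha.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(cal_scores, level, method="higher")
    # The prediction set: every label whose score falls below the threshold.
    return [y for y, s in enumerate(test_scores_per_label) if s <= q_hat]

# Toy usage with made-up scores for a 3-class problem.
cal_scores = np.array([0.1, 0.3, 0.2, 0.5, 0.4])
test_scores = np.array([0.25, 0.6, 0.15])
print(split_conformal_set(cal_scores, test_scores, alpha=0.2))  # -> [0, 2]
```

Cross-validation-based CP, on which meta-XB builds, instead reuses each training point for both model fitting and calibration across folds, which is why the abstract describes it as more efficient than the validation-based scheme when data are scarce.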