Paper Title
Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data
Paper Authors
Paper Abstract
Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, most methods rely on hidden features output by the encoder. In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e., we can access the soft-predictions but not the internal states of the model. Our contributions include: (i) RAINPROOF, a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples that are well processed by the model and keep samples that are not, leading to weaker performance. Our results show that RAINPROOF provides OOD detection methods that are more aligned with task-specific performance metrics than traditional OOD detectors.
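To make the black-box setting concrete, the sketch below illustrates the general idea of scoring an input using only the model's soft-predictions. It is a minimal, hypothetical example, not the paper's RAINPROOF method: it flags a generated sequence as potentially OOD when the token-level predictive distributions are close to uniform (i.e., the model is highly uncertain). All function names and the aggregation choice are assumptions made for illustration.

```python
# Minimal sketch (not the paper's RAINPROOF algorithm): compute an OOD score
# from a black-box model's soft-probabilities only. Each row of `probs` is the
# softmax distribution over the vocabulary for one generated token.
import numpy as np


def token_kl_to_uniform(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """KL(p || uniform) for each row of a (seq_len, vocab_size) probability matrix."""
    vocab_size = probs.shape[-1]
    probs = np.clip(probs, eps, 1.0)
    probs = probs / probs.sum(axis=-1, keepdims=True)  # renormalise after clipping
    # KL(p || u) = sum_i p_i * log(p_i * V) = log(V) - H(p)
    return np.log(vocab_size) + np.sum(probs * np.log(probs), axis=-1)


def sequence_ood_score(probs: np.ndarray) -> float:
    """Aggregate per-token divergences; a low divergence from uniform (high
    predictive uncertainty) is treated here as a sign of a possible OOD input."""
    return -float(np.mean(token_kl_to_uniform(probs)))


# Usage: in practice `probs` would be the softmax outputs returned by the
# deployed model; here we fabricate them just to make the sketch runnable.
rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 32000))  # fake logits for a 10-token output
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)
print(sequence_ood_score(probs))
```

A threshold on such a score would then decide whether to filter an input; the paper's point is that this decision should be evaluated against task-specific performance, not only against standard OOD detection metrics.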