Paper Title

Do Bayesian Variational Autoencoders Know What They Don't Know?

Paper Authors

Misha Glazunov, Apostolis Zarras

Paper Abstract

The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks. It has previously been shown that even Deep Generative Models that allow estimating the density of the inputs may be unreliable and often tend to make over-confident predictions for OoD inputs, assigning them a higher density than the in-distribution data. This over-confidence in a single model can potentially be mitigated with Bayesian inference over the model parameters, which takes epistemic uncertainty into account. This paper investigates three approaches to Bayesian inference: stochastic gradient Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian. The inference is implemented over the weights of the deep neural networks that parameterize the likelihood of the Variational Autoencoder. We empirically evaluate the approaches against several benchmarks often used for OoD detection: estimation of the marginal likelihood using a sampled model ensemble, the typicality test, the disagreement score, and the Watanabe-Akaike Information Criterion. Finally, we introduce two simple scores that demonstrate state-of-the-art performance.
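The ensemble-based quantities named in the abstract (marginal-likelihood estimate, disagreement score, WAIC) can be sketched from a matrix of per-sample log-likelihoods. This is a minimal illustrative sketch, not the paper's implementation: it assumes we already have `log p(x_n | theta_s)` for `S` weight samples drawn by one of the Bayesian inference methods, and it uses the common textbook forms of these scores (log-mean-exp for the marginal likelihood, the across-sample standard deviation as a disagreement measure, and the variance-penalized lppd for WAIC).

```python
import numpy as np

def logmeanexp(a, axis):
    """Numerically stable log of the mean of exp(a) along an axis."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).mean(axis=axis, keepdims=True))).squeeze(axis)

def ood_scores(loglik):
    """Compute illustrative per-input OoD scores from a Bayesian ensemble.

    loglik: array of shape (S, N) holding log p(x_n | theta_s) for S sampled
    weight sets (e.g., VAE decoder likelihoods under posterior samples) and
    N inputs. Higher marginal/WAIC and lower disagreement suggest in-distribution.
    """
    # Monte Carlo estimate of the marginal likelihood: log (1/S) sum_s p(x | theta_s)
    marginal = logmeanexp(loglik, axis=0)
    # Disagreement across sampled models: spread of log-likelihoods
    disagreement = loglik.std(axis=0)
    # WAIC (per input): lppd minus the variance penalty
    waic = marginal - loglik.var(axis=0)
    return marginal, disagreement, waic
```

In this sketch, an input would be flagged as OoD when its score falls below (or, for disagreement, above) a threshold calibrated on held-out in-distribution data.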
