Paper Title
The Role of Mutual Information in Variational Classifiers
Paper Authors
Paper Abstract
Overfitting data is a well-known phenomenon related to the generation of a model that mimics too closely (or exactly) a particular instance of data, and may therefore fail to predict future observations reliably. In practice, this behaviour is controlled by various, sometimes heuristic, regularization techniques, which are motivated by the development of upper bounds on the generalization error. In this work, we study the generalization error of classifiers relying on stochastic encodings trained on the cross-entropy loss, which is often used in deep learning for classification problems. We derive bounds on the generalization error showing that there exists a regime where the generalization error is bounded by the mutual information between input features and the corresponding representations in the latent space, which are randomly generated according to the encoding distribution. Our bounds provide an information-theoretic understanding of generalization in the so-called class of variational classifiers, which are regularized by a Kullback-Leibler (KL) divergence term. These results give theoretical grounds for the highly popular KL term in variational inference methods, which has already been recognized to act effectively as a regularization penalty. We further observe connections with well-studied notions such as Variational Autoencoders, Information Dropout, Information Bottleneck, and Boltzmann Machines. Finally, we perform numerical experiments on the MNIST and CIFAR datasets and show that mutual information is indeed highly representative of the behaviour of the generalization error.
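To make the setup described in the abstract concrete, below is a minimal sketch of a variational classifier trained with a cross-entropy loss plus a KL-divergence regularizer, assuming a Gaussian encoder with a standard-normal prior. The class name `VariationalClassifier`, the layer sizes, and the weight `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalClassifier(nn.Module):
    """Classifier with a stochastic (Gaussian) encoding of the input.

    Hypothetical architecture for illustration only; the paper's exact
    encoder and classifier families are not specified here.
    """
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterized sample from the encoding distribution q(z|x).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.classifier(z), mu, logvar

def variational_loss(logits, labels, mu, logvar, beta=1e-3):
    """Cross-entropy plus KL(q(z|x) || N(0, I)) in closed form.

    Averaged over the data, the KL term to a fixed prior upper-bounds the
    mutual information I(X; Z) between inputs and latent representations,
    the quantity the paper's bounds relate to the generalization error.
    """
    ce = F.cross_entropy(logits, labels)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    return ce + beta * kl

# Toy usage on random tensors (shapes only; not a real experiment).
model = VariationalClassifier()
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
logits, mu, logvar = model(x)
loss = variational_loss(logits, y, mu, logvar)
loss.backward()
```

In this sketch, `beta` trades off the fit to the labels against how much information the stochastic encoding retains about the input, mirroring the role the abstract attributes to the KL regularizer in variational classifiers.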