Paper Title
Integrating Random Effects in Deep Neural Networks
Paper Authors
Paper Abstract
Modern approaches to supervised learning like deep neural networks (DNNs) typically implicitly assume that observed responses are statistically independent. In contrast, correlated data are prevalent in real-life large-scale applications, with typical sources of correlation including spatial, temporal and clustering structures. These correlations are either ignored by DNNs, or ad-hoc solutions are developed for specific use cases. We propose to use the mixed models framework to handle correlated data in DNNs. By treating the effects underlying the correlation structure as random effects, mixed models are able to avoid overfitted parameter estimates and ultimately yield better predictive performance. The key to combining mixed models and DNNs is using the Gaussian negative log-likelihood (NLL) as a natural loss function that is minimized with DNN machinery, including stochastic gradient descent (SGD). Since the NLL does not decompose like standard DNN loss functions, using SGD with the NLL presents some theoretical and implementation challenges, which we address. Our approach, which we call LMMNN, is demonstrated to improve performance over natural competitors in various correlation scenarios on diverse simulated and real datasets. Our focus is on a regression setting and tabular datasets, but we also show some results for classification. Our code is available at https://github.com/gsimchoni/lmmnn.
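To make the abstract's key idea concrete: in the simplest linear-mixed-model setting with a single categorical random intercept, the marginal Gaussian NLL couples all observations that share a cluster through the covariance matrix, which is why it does not decompose per-sample the way standard DNN losses do. The sketch below is only an illustration of that loss under these assumptions; it is not the authors' LMMNN implementation, and the function and variable names (gaussian_nll, f_pred, log_sig2b, log_sig2e) are ours, not taken from the linked repository.

```python
import math
import torch

def gaussian_nll(y, f_pred, Z, log_sig2b, log_sig2e):
    """Illustrative marginal Gaussian NLL for a random-intercepts mixed model.

    Assumed model: y = f(X) + Z b + e, with b ~ N(0, sigma_b^2 I) and
    e ~ N(0, sigma_e^2 I), so marginally y ~ N(f(X), V) where
    V = sigma_b^2 Z Z^T + sigma_e^2 I.

    y          : (n,) observed responses
    f_pred     : (n,) fixed-effects predictions f(X) produced by the DNN
    Z          : (n, q) one-hot design matrix mapping observations to clusters
    log_sig2b,
    log_sig2e  : log variance components, learned jointly with the DNN weights
    """
    n = y.shape[0]
    sig2b, sig2e = torch.exp(log_sig2b), torch.exp(log_sig2e)
    V = sig2b * (Z @ Z.T) + sig2e * torch.eye(n)          # marginal covariance
    resid = (y - f_pred).unsqueeze(1)                     # (n, 1) residual vector
    quad = (resid.T @ torch.linalg.solve(V, resid)).squeeze()
    return 0.5 * (torch.logdet(V) + quad + n * math.log(2 * math.pi))
```

Because V ties together every observation from the same cluster, evaluating this loss on an arbitrary mini-batch is not equivalent to evaluating it on the full data; handling that non-decomposability within SGD is one of the challenges the abstract refers to.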