Paper title
Generalised Gaussian Process Latent Variable Models (GPLVM) with Stochastic Variational Inference
Paper authors
Paper abstract
Gaussian process latent variable models (GPLVM) are a flexible and non-linear approach to dimensionality reduction, extending classical Gaussian processes to an unsupervised learning context. The Bayesian incarnation of the GPLVM [Titsias and Lawrence, 2010] uses a variational framework, where the posterior over latent variables is approximated by a well-behaved variational family, a factorized Gaussian yielding a tractable lower bound. However, the non-factorisability of the lower bound prevents truly scalable inference. In this work, we study the doubly stochastic formulation of the Bayesian GPLVM model amenable to minibatch training. We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models. Further, we demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions. We demonstrate the model's performance by benchmarking against the canonical sparse GPLVM for high-dimensional data examples.
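The "doubly stochastic" estimator the abstract refers to combines two sources of randomness: subsampling a minibatch of data points, and drawing reparameterised Monte Carlo samples of the latent variables from the factorized Gaussian posterior. A minimal NumPy sketch of one such ELBO estimate is below; the GP data-fit term is stubbed with a placeholder (the paper's actual model evaluates a sparse GP there), and all names, sizes, and the stand-in likelihood are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative): N data points, latent dimension Q.
N, Q, batch_size = 100, 2, 10

# Factorized Gaussian variational posterior q(x_n) = N(mu_n, diag(sigma_n^2)),
# one mean/scale pair per data point.
mu = rng.normal(size=(N, Q))
log_sigma = np.zeros((N, Q))

def kl_to_standard_normal(mu_n, log_sigma_n):
    """Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims."""
    var = np.exp(2.0 * log_sigma_n)
    return 0.5 * np.sum(var + mu_n**2 - 1.0 - 2.0 * log_sigma_n, axis=-1)

def expected_log_lik(x_batch):
    """Placeholder for the GP data-fit term E_q[log p(y_n | x_n)];
    the real model evaluates a sparse GP here. Stand-in only."""
    return -0.5 * np.sum(x_batch**2, axis=-1)

# Source of stochasticity 1: subsample a minibatch of data points.
idx = rng.choice(N, size=batch_size, replace=False)

# Source of stochasticity 2: reparameterised samples of the latents,
# x_n = mu_n + sigma_n * eps with eps ~ N(0, I).
eps = rng.normal(size=(batch_size, Q))
x_sample = mu[idx] + np.exp(log_sigma[idx]) * eps

# Rescale the per-point sums by N / batch_size so the minibatch estimate
# is unbiased for the full-data ELBO.
elbo_estimate = (N / batch_size) * np.sum(
    expected_log_lik(x_sample) - kl_to_standard_normal(mu[idx], log_sigma[idx])
)
```

Because both the minibatch indices and the latent samples are drawn independently, `elbo_estimate` is an unbiased estimate of the full lower bound, which is what makes stochastic-gradient (minibatch) training possible.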