Paper Title

Handling the Positive-Definite Constraint in the Bayesian Learning Rule

Authors

Wu Lin, Mark Schmidt, Mohammad Emtiyaz Khan

Abstract

The Bayesian learning rule is a natural-gradient variational inference method, which not only contains many existing learning algorithms as special cases but also enables the design of new algorithms. Unfortunately, when variational parameters lie in an open constraint set, the rule may not satisfy the constraint and requires line searches, which can slow down the algorithm. In this work, we address this issue for positive-definite constraints by proposing an improved rule that naturally handles the constraints. Our modification is obtained by using Riemannian gradient methods, and is valid when the approximation attains a block-coordinate natural parameterization (e.g., Gaussian distributions and their mixtures). We propose a principled way to derive Riemannian gradients and retractions from scratch. Our method outperforms existing methods without any significant increase in computation. Our work makes it easier to apply the rule in the presence of positive-definite constraints in parameter spaces.
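
To make the abstract's key point concrete, here is a minimal NumPy sketch (an illustration, not the authors' implementation): a plain additive natural-gradient step on a Gaussian precision matrix can leave the positive-definite cone, while a retraction-style step with a quadratic correction of the kind the paper derives for Gaussian approximations stays inside it for any step size. The specific matrices S and G, the step size beta, and the is_pd helper below are assumptions chosen for the demo.

```python
# Minimal NumPy sketch (not the authors' code) of the abstract's core idea,
# specialized to a Gaussian precision matrix S. All matrices and the step
# size below are illustrative assumptions.
import numpy as np

def is_pd(M):
    """Symmetric M is positive definite iff all eigenvalues are > 0."""
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

S = np.eye(2)                        # current precision (positive definite)
G = np.array([[-3.0, 0.5],
              [ 0.5, 1.0]])          # symmetric natural-gradient direction
beta = 0.5                           # step size (illustrative)

# Vanilla additive natural-gradient step: nothing keeps it in the PD cone.
S_plain = S + beta * G
print("plain step PD?     ", is_pd(S_plain))    # False for this G and beta

# Retraction-style step with a second-order correction,
#   S + beta*G + (beta^2 / 2) * G @ inv(S) @ G.
# Writing M = S^{-1/2} G S^{-1/2}, this equals
#   S^{1/2} (I + beta*M + (beta^2/2) M^2) S^{1/2},
# and 1 + b*lam + (b^2/2)*lam^2 = ((1 + b*lam)^2 + 1) / 2 > 0,
# so the result is positive definite for any symmetric G and any beta.
S_retract = S + beta * G + 0.5 * beta**2 * (G @ np.linalg.solve(S, G))
print("retraction step PD?", is_pd(S_retract))  # True
```

The scalar identity 1 + βλ + (β²/2)λ² = ((1 + βλ)² + 1)/2 > 0 is what keeps the corrected step positive definite without any line search, which is the speedup the abstract refers to.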
