Paper title
Regularized Rényi divergence minimization through Bregman proximal gradient algorithms
Paper authors
Paper abstract
We study the variational inference problem of minimizing a regularized Rényi divergence over an exponential family. We propose to solve this problem with a Bregman proximal gradient algorithm, and we introduce a sampling-based variant to cover the black-box setting, corresponding to a stochastic Bregman proximal gradient algorithm with a biased gradient estimator. We show that the resulting algorithms can be seen as relaxed moment-matching algorithms with an additional proximal step. Using Bregman updates instead of Euclidean ones allows us to exploit the geometry of our approximate model. With this viewpoint, we prove strong convergence guarantees for both the deterministic and the stochastic algorithms, including monotonic decrease of the objective, convergence to a stationary point or to the minimizer, and geometric convergence rates. These new theoretical insights lead to a versatile, robust, and competitive method, as illustrated by numerical experiments.
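To make the Bregman proximal gradient idea concrete, here is a minimal, self-contained sketch (not the paper's actual algorithm, which operates on exponential-family parameters and Rényi-divergence gradients): a Bregman gradient step with the negative-entropy potential on the probability simplex, whose closed form is the classical exponentiated-gradient update. The objective, step size, and problem data below are illustrative assumptions.

```python
import numpy as np

def entropic_bregman_gradient(grad, x0, step, iters):
    """Bregman (proximal) gradient descent with the negative-entropy potential.

    Each iteration solves
        x_{k+1} = argmin_{x in simplex} <grad(x_k), x> + (1/step) * KL(x || x_k),
    which has the closed form x_{k+1, i} ∝ x_{k, i} * exp(-step * grad(x_k)_i).
    """
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))
        x /= x.sum()  # renormalize back onto the probability simplex
    return x

# Toy smooth objective: f(x) = 0.5 * ||x - b||^2 restricted to the simplex.
# Since b already lies in the simplex, the minimizer is b itself.
b = np.array([0.7, 0.2, 0.1])
grad_f = lambda x: x - b
x0 = np.full(3, 1.0 / 3.0)          # start at the uniform distribution
x_star = entropic_bregman_gradient(grad_f, x0, step=1.0, iters=1000)
```

The update never leaves the simplex and needs no Euclidean projection, which is the practical advantage of matching the Bregman potential to the geometry of the feasible set, mirroring the paper's use of exponential-family geometry.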