Paper Title

Reversible Jump PDMP Samplers for Variable Selection

Paper Authors

Augustin Chevallier, Paul Fearnhead, Matthew Sutton

Paper Abstract

A new class of Markov chain Monte Carlo (MCMC) algorithms, based on simulating piecewise deterministic Markov processes (PDMPs), have recently shown great promise: they are non-reversible, can mix better than standard MCMC algorithms, and can use subsampling ideas to speed up computation in big data scenarios. However, current PDMP samplers can only sample from posterior densities that are differentiable almost everywhere, which precludes their use for model choice. Motivated by variable selection problems, we show how to develop reversible jump PDMP samplers that can jointly explore the discrete space of models and the continuous space of parameters. Our framework is general: it takes any existing PDMP sampler, and adds two types of trans-dimensional moves that allow for the addition or removal of a variable from the model. We show how the rates of these trans-dimensional moves can be calculated so that the sampler has the correct invariant distribution. Simulations show that the new samplers can mix better than standard MCMC algorithms. Our empirical results show they are also more efficient than gradient-based samplers that avoid model choice through use of continuous spike-and-slab priors which replace a point mass at zero for each parameter with a density concentrated around zero.
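To make the idea of jointly exploring the discrete model space and the continuous parameter space concrete, here is a minimal, self-contained sketch of a closely related construction, a sticky Zig-Zag-style sampler, rather than the paper's exact reversible jump algorithm. It assumes a toy target whose continuous part is a product of independent Gaussians with an atom at zero for each coordinate; the means `m`, scales `sigma`, and unfreezing rates `kappa` are hypothetical illustration values. A coordinate that hits zero sticks there (the variable is "removed" from the model) and leaves zero at rate `kappa[i]` (the variable is "added" back), mirroring the two types of trans-dimensional move described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target (hypothetical, not taken from the paper): a measure with
# an atom at zero in each coordinate,
#   mu(dx) ∝ exp(-psi(x)) * prod_i ( dx_i + (1/kappa_i) * delta_0(dx_i) ),
# where psi(x) = sum_i (x_i - m_i)^2 / (2 sigma_i^2) is a Gaussian "slab" part.
# Larger kappa_i means less mass on {x_i = 0}.
m = np.array([1.5, 0.0, -0.2])      # slab means (hypothetical values)
sigma = np.array([1.0, 1.0, 1.0])   # slab standard deviations
kappa = np.array([1.0, 1.0, 1.0])   # unfreezing ("add variable") rates
d = len(m)

def reflection_time(a, slope, rng):
    """First event time of an inhomogeneous Poisson process with rate
    max(0, a + slope*t), simulated exactly by inverting the integrated rate."""
    e = rng.exponential()
    if a >= 0.0:
        return (-a + np.sqrt(a * a + 2.0 * slope * e)) / slope
    # rate is zero until t0 = -a/slope, then grows linearly
    return -a / slope + np.sqrt(2.0 * e / slope)

# State: positions, velocities (+-1), and which coordinates are frozen at 0.
x = m.copy()
v = np.ones(d)
frozen = np.zeros(d, dtype=bool)

T = 0.0
time_frozen = np.zeros(d)   # accumulated time each coordinate spends at 0
n_events = 50_000

for _ in range(n_events):
    times = np.full(d, np.inf)
    kinds = [None] * d
    for i in range(d):
        if frozen[i]:
            times[i] = rng.exponential() / kappa[i]   # unfreeze clock
            kinds[i] = "unfreeze"
        else:
            # Zig-Zag reflection rate along the path: max(0, a + t/sigma^2)
            a = v[i] * (x[i] - m[i]) / sigma[i] ** 2
            t_refl = reflection_time(a, 1.0 / sigma[i] ** 2, rng)
            # deterministic time at which this coordinate would hit zero
            t_hit = -x[i] / v[i] if x[i] * v[i] < 0 else np.inf
            if t_hit < t_refl:
                times[i], kinds[i] = t_hit, "hit_zero"
            else:
                times[i], kinds[i] = t_refl, "reflect"
    i_star = int(np.argmin(times))
    tau = times[i_star]

    # advance the deterministic dynamics; frozen coordinates stay at 0
    x[~frozen] += tau * v[~frozen]
    time_frozen[frozen] += tau
    T += tau

    if kinds[i_star] == "reflect":
        v[i_star] = -v[i_star]
    elif kinds[i_star] == "hit_zero":   # "remove variable": stick at 0
        x[i_star] = 0.0
        frozen[i_star] = True
    else:                               # "add variable": leave 0,
        frozen[i_star] = False          # keeping the stored velocity

print("estimated P(coordinate is zero):", time_frozen / T)
```

The printout estimates the probability that each coordinate is exactly zero from the fraction of trajectory time it spends frozen; in a real analysis one would also integrate functions of the non-zero coordinates along the piecewise linear paths to get posterior summaries.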
