Paper Title

Graphical Normalizing Flows

Paper Authors

Antoine Wehenkel, Gilles Louppe

Paper Abstract

Normalizing flows model complex probability distributions by combining a base distribution with a series of bijective neural networks. State-of-the-art architectures rely on coupling and autoregressive transformations to lift up invertible functions from scalars to vectors. In this work, we revisit these transformations as probabilistic graphical models, showing they reduce to Bayesian networks with a pre-defined topology and a learnable density at each node. From this new perspective, we propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure. This model provides a promising way to inject domain knowledge into normalizing flows while preserving both the interpretability of Bayesian networks and the representation capacity of normalizing flows. We show that graphical conditioners discover relevant graph structure when we cannot hypothesize it. In addition, we analyze the effect of $\ell_1$-penalization on the recovered structure and on the quality of the resulting density estimation. Finally, we show that graphical conditioners lead to competitive white box density estimators. Our implementation is available at https://github.com/AWehenkel/DAG-NF.
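The key observation above is that coupling and autoregressive transformations are special cases of a conditioner whose inputs are restricted by the adjacency matrix of a Bayesian network: because the graph is acyclic, the Jacobian of the transformation is triangular under a topological ordering, so its log-determinant is just a sum of diagonal terms. The toy sketch below illustrates this with an affine transformation and a linear conditioner in NumPy; the function name, the linear conditioner, and the weight matrices `Ws`/`Wt` are illustrative assumptions, not the paper's implementation (which uses neural conditioners and is available at the linked repository).

```python
import numpy as np

def graphical_affine_flow_logpdf(x, A, Ws, Wt):
    """Log-density of x under a minimal 'graphical' affine flow (toy sketch).

    x : (d,) sample.
    A : (d,d) binary adjacency of a DAG; A[i, j] = 1 iff x_j is a parent of x_i.
    Ws, Wt : (d,d) weights of a toy *linear* conditioner (illustrative stand-in
             for the neural conditioners used in practice).
    """
    # The conditioner for variable i may only read the parents of x_i,
    # which is enforced by masking the weights with the adjacency matrix.
    s = (A * Ws) @ x                 # per-dimension log-scale
    t = (A * Wt) @ x                 # per-dimension shift
    z = (x - t) * np.exp(-s)         # map x to the base space

    # A has no self-loops, so dz_i/dx_i = exp(-s_i) and the Jacobian is
    # triangular under a topological order: log|det J| = -sum_i s_i.
    d = len(x)
    log_base = -0.5 * (z @ z) - 0.5 * d * np.log(2.0 * np.pi)  # N(0, I) base
    return log_base - s.sum()
```

With an empty graph and zero weights the transformation is the identity, so the model falls back to the standard-normal base density; a chain graph such as `A = [[0,0,0],[1,0,0],[0,1,0]]` recovers an autoregressive-style factorization over three variables.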
