Paper Title

Unveiling the Latent Space Geometry of Push-Forward Generative Models

Authors

Thibaut Issenhuth, Ugo Tanielian, Jérémie Mary, David Picard

Abstract

Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs). This work explores the latent space of such deep generative models. A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions. We investigate the relationship between the performance of these models and the geometry of their latent space. Building on recent developments in geometric measure theory, we prove a sufficient condition for optimality in the case where the dimension of the latent space is larger than the number of modes. Through experiments on GANs, we demonstrate the validity of our theoretical results and gain new insights into the latent space geometry of these models. Additionally, we propose a truncation method that enforces a simplicial cluster structure in the latent space and improves the performance of GANs.
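The push-forward construction described in the abstract can be sketched in a few lines: draw latents z from a standard Gaussian, optionally truncate them toward a small set of cluster directions, and map them through a continuous generator G. The toy generator and the simplex-style truncation below are illustrative assumptions of mine, not the paper's actual networks or exact truncation procedure.

```python
import numpy as np

def generator(z):
    # Toy continuous generator G: R^d -> R^2 (a stand-in for a trained
    # GAN/VAE decoder; weights are fixed random for illustration).
    w = np.random.default_rng(0).normal(size=(z.shape[1], 2))
    return np.tanh(z @ w)

def sample_push_forward(n, d, rng):
    # Push-forward sampling: draw z ~ N(0, I_d), then output G(z).
    z = rng.normal(size=(n, d))
    return generator(z)

def simplex_truncate(z, k, alpha=0.5):
    # Illustrative truncation (my guess at the idea, not the paper's
    # exact method): pull each latent toward the nearest of k scaled
    # one-hot directions, i.e. vertices of a simplex-like structure,
    # to encourage a clustered latent geometry. alpha in [0, 1] blends
    # between the raw latent (0) and its assigned vertex (1).
    d = z.shape[1]
    vertices = np.eye(d)[:k] * np.sqrt(d)            # k axis-aligned "modes"
    nearest = vertices[np.argmax(z[:, :k], axis=1)]  # nearest vertex per sample
    return (1 - alpha) * z + alpha * nearest

rng = np.random.default_rng(42)
z = rng.normal(size=(5, 8))        # 5 Gaussian latents in R^8
z_trunc = simplex_truncate(z, k=3) # k = number of target modes (< d)
x = generator(z_trunc)             # push truncated latents forward
print(x.shape)                     # (5, 2)
```

The key structural assumption mirrored here is the one from the theory: the latent dimension (d = 8) exceeds the number of modes (k = 3), so the latent space has room to host one cluster direction per mode.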
