Paper title
Non-saturating GAN training as divergence minimization
Paper authors
Paper abstract
Non-saturating generative adversarial network (GAN) training is widely used and has continued to obtain groundbreaking results. However, so far this approach has lacked strong theoretical justification, in contrast to alternatives such as f-GANs and Wasserstein GANs, which are motivated in terms of approximate divergence minimization. In this paper we show that non-saturating GAN training does in fact approximately minimize a particular f-divergence. We develop general theoretical tools to compare and classify f-divergences and use these to show that the new f-divergence is qualitatively similar to reverse KL. These results help to explain the high sample quality but poor diversity often observed empirically when using this scheme.
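To make the contrast in the abstract concrete, here is a minimal PyTorch sketch (an illustration, not code from the paper) of the two standard generator losses: the saturating minimax loss E[log(1 - D(G(z)))] and the non-saturating alternative -E[log D(G(z))], written in terms of discriminator logits for numerical stability. It does not implement the particular f-divergence the paper derives.

```python
import torch
import torch.nn.functional as F

def saturating_g_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    """Minimax generator loss E[log(1 - D(G(z)))], to be minimized.

    With D = sigmoid(logits), log(1 - D) = -softplus(logits). The gradient
    vanishes when the discriminator confidently rejects fakes (large negative
    logits), which is why this form saturates early in training.
    """
    return -F.softplus(fake_logits).mean()

def non_saturating_g_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    """Non-saturating generator loss -E[log D(G(z))], to be minimized.

    With D = sigmoid(logits), -log D = softplus(-logits), which still gives a
    strong gradient when the discriminator rejects fakes.
    """
    return F.softplus(-fake_logits).mean()

# Example: a confidently rejected fake sample (logit = -5).
logits = torch.tensor([-5.0], requires_grad=True)
non_saturating_g_loss(logits).backward()
print(logits.grad)  # gradient magnitude ~0.99; the saturating loss gives ~0.007 here
```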
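The mode-seeking behavior of reverse-KL-like divergences, which the abstract invokes to explain high sample quality but poor diversity, can be checked numerically with a toy NumPy computation (again an illustration, not from the paper): on a bimodal target, reverse KL favors a sharp single-mode model, while forward KL favors a broad mode-covering one.

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence KL(p || q) in nats, for strictly positive p, q."""
    return np.sum(p * np.log(p / q))

x = np.linspace(-6.0, 6.0, 2001)

def gaussian(mu, sigma):
    w = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return w / w.sum()  # normalize on the grid

# Bimodal "data" distribution and two candidate models:
p = 0.5 * gaussian(-2.0, 0.5) + 0.5 * gaussian(2.0, 0.5)
q_wide = gaussian(0.0, 2.5)   # covers both modes, but blurry
q_mode = gaussian(2.0, 0.5)   # sits sharply on one mode, misses the other

print("forward KL(p||q): wide =", kl(p, q_wide), " one-mode =", kl(p, q_mode))
print("reverse KL(q||p): wide =", kl(q_wide, p), " one-mode =", kl(q_mode, p))
```

Forward KL heavily penalizes the single-mode model for assigning near-zero mass to the missed mode, while reverse KL for that model is only about log 2, so a reverse-KL-like objective prefers sharp but non-diverse samples.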