Paper Title


Improving the Speed and Quality of GAN by Adversarial Training

Paper Authors

Jiachen Zhong, Xuanqing Liu, Cho-Jui Hsieh

Paper Abstract


Generative adversarial networks (GAN) have shown remarkable results in image generation tasks. High fidelity class-conditional GAN methods often rely on stabilization techniques by constraining the global Lipschitz continuity. Such regularization leads to less expressive models and slower convergence speed; other techniques, such as the large batch training, require unconventional computing power and are not widely accessible. In this paper, we develop an efficient algorithm, namely FastGAN (Free AdverSarial Training), to improve the speed and quality of GAN training based on the adversarial training technique. We benchmark our method on CIFAR10, a subset of ImageNet, and the full ImageNet datasets. We choose strong baselines such as SNGAN and SAGAN; the results demonstrate that our training algorithm can achieve better generation quality (in terms of the Inception score and Frechet Inception distance) with less overall training time. Most notably, our training algorithm brings ImageNet training to the broader public by requiring 2-4 GPUs.
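The abstract describes applying adversarial training to GAN training but does not spell out the algorithm. Below is a minimal, hypothetical sketch of the general idea: the discriminator is also trained on an adversarially perturbed copy of the real batch, produced by a single FGSM-style gradient step. This is not the FastGAN algorithm from the paper; `fgsm_perturb`, `discriminator_step`, `epsilon`, and the optimizer setup are all illustrative placeholders.

```python
# Minimal, hypothetical sketch of adversarial training inside a GAN
# discriminator update. This is NOT the FastGAN algorithm from the paper;
# all names and hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn


def fgsm_perturb(x, loss, epsilon=0.01):
    """One-step (FGSM-style) perturbation of x in the direction that increases loss."""
    grad = torch.autograd.grad(loss, x, retain_graph=True)[0]
    return (x + epsilon * grad.sign()).detach()


def discriminator_step(D, G, x_real, z, opt_D, epsilon=0.01):
    """One discriminator update on clean and adversarially perturbed real images.

    Assumes D maps a batch of images to (batch, 1) logits and G maps noise z
    to images with the same shape as x_real.
    """
    bce = nn.BCEWithLogitsLoss()
    x_real = x_real.clone().requires_grad_(True)

    # Standard discriminator loss on real and generated samples.
    x_fake = G(z).detach()
    ones = torch.ones(x_real.size(0), 1, device=x_real.device)
    zeros = torch.zeros(x_real.size(0), 1, device=x_real.device)
    loss_clean = bce(D(x_real), ones) + bce(D(x_fake), zeros)

    # Extra term: the discriminator must also classify an adversarially
    # perturbed real batch correctly, which acts as a local smoothness
    # constraint (loosely analogous to the Lipschitz regularization
    # mentioned in the abstract).
    x_adv = fgsm_perturb(x_real, loss_clean, epsilon)
    loss_adv = bce(D(x_adv), ones)

    opt_D.zero_grad()
    (loss_clean + loss_adv).backward()
    opt_D.step()
    return loss_clean.item(), loss_adv.item()
```

The "Free" in the paper's name suggests that, as in free adversarial training, the perturbations are obtained by recycling gradients already computed for the parameter update rather than by the extra gradient call used above; consult the paper for the exact procedure.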
