Paper Title
Latent Space is Feature Space: Regularization Term for GANs Training on Limited Dataset
Paper Authors
Paper Abstract
Generative Adversarial Networks (GANs) are currently widely used for unsupervised image generation. State-of-the-art GANs can generate photorealistic images at high resolution. However, they require large amounts of training data; otherwise the model is prone to generating images with similar patterns (mode collapse) and poor quality. I propose an additional structure and loss function for GANs, called LFM, trained to maximize the feature diversity between different dimensions of the latent space, avoiding mode collapse without degrading image quality. Orthogonal latent vector pairs are created, and the corresponding feature vector pairs extracted by the discriminator are compared by dot product, placing the discriminator and generator in a novel adversarial relationship. In experiments, the system was built upon DCGAN and improved the Fréchet Inception Distance (FID) when training from scratch on the CelebA dataset. The system adds only mild performance overhead and is compatible with data augmentation methods. The code is available at github.com/penway/LFM.
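The abstract describes the LFM regularizer only at a high level: build orthogonal latent vector pairs, extract their discriminator features, and compare those features with a dot product. The sketch below is one plausible reading of that description, not the paper's actual implementation; the Gram-Schmidt construction of the orthogonal partner, the feature normalization, and the function name `lfm_loss` are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def lfm_loss(generator, feature_extractor, z_dim=100, batch_size=16, device="cpu"):
    """Hypothetical sketch of an LFM-style regularizer (assumed construction).

    The generator minimizes the similarity between discriminator features of
    images generated from orthogonal latent vectors; the discriminator side
    would maximize it, giving the adversarial game described in the abstract.
    """
    # Sample a batch of latent vectors and a second batch to orthogonalize.
    z1 = torch.randn(batch_size, z_dim, device=device)
    z2 = torch.randn(batch_size, z_dim, device=device)

    # One Gram-Schmidt step: make each z2 orthogonal to its paired z1
    # (assumed way of "creating orthogonal latent vector pairs").
    proj = torch.sum(z1 * z2, dim=1, keepdim=True) / torch.sum(z1 * z1, dim=1, keepdim=True)
    z2 = z2 - proj * z1

    # Discriminator-extracted feature vectors for the two generated images.
    f1 = feature_extractor(generator(z1))
    f2 = feature_extractor(generator(z2))

    # Dot product of the (normalized) feature pair; the mean is the penalty.
    sim = torch.sum(F.normalize(f1, dim=1) * F.normalize(f2, dim=1), dim=1)
    return sim.mean()
```

In a training loop this term would be added to the standard generator loss with a weighting coefficient; in the DCGAN setting of the paper, `feature_extractor` would be the discriminator truncated before its final classification layer.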