Paper Title
Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality
Paper Authors
Paper Abstract
Diffusion models have emerged as an expressive family of generative models, rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores. We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models. We show that optimizing the degrees of freedom of GGDM samplers by maximizing sample quality scores via gradient descent leads to improved sample quality. Our optimization procedure backpropagates through the sampling process using the reparametrization trick and gradient rematerialization. DDSS achieves strong results on unconditional image generation across various datasets (e.g., FID scores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82 with 20 steps, compared to 51.1 and 14.9 with the strongest DDPM/DDIM baselines). Our method is compatible with any pre-trained diffusion model without requiring fine-tuning or re-training.
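The abstract's key mechanism is backpropagating through a stochastic sampling step via the reparametrization trick: a Gaussian draw z = mu + sigma * eps with eps ~ N(0, 1) becomes a deterministic, differentiable function of its parameters, so a quality score downstream of z yields gradients with respect to the sampler's degrees of freedom. The toy sketch below (a hypothetical standalone example, not the paper's actual sampler or quality score) checks the pathwise gradient estimator against a closed-form answer for f(z) = z², where E[f(z)] = mu² + sigma² and thus d/d(sigma) E[f(z)] = 2·sigma.

```python
import random

def pathwise_grad_sigma(mu, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of d/d(sigma) E[f(z)] for f(z) = z**2,
    where z = mu + sigma * eps is a reparametrized Gaussian sample.

    The estimator differentiates the deterministic map (mu, sigma) -> z
    rather than the random draw: df/dz * dz/dsigma = 2*z*eps.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)  # noise drawn independently of sigma
        z = mu + sigma * eps       # differentiable in mu and sigma
        total += 2.0 * z * eps     # pathwise gradient of z**2 w.r.t. sigma
    return total / n

# Closed form: E[z**2] = mu**2 + sigma**2, so the true gradient is 2*sigma.
est = pathwise_grad_sigma(mu=0.5, sigma=1.5)
print(abs(est - 3.0) < 0.05)  # estimate should land near 2*sigma = 3.0
```

In DDSS the same idea is applied at every step of the sampling chain, with gradient rematerialization keeping the memory cost of backpropagating through many steps manageable; this sketch only illustrates the single-step estimator.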