Paper Title

Conffusion: Confidence Intervals for Diffusion Models

Authors

Eliahu Horwitz, Yedid Hoshen

Abstract

Diffusion models have become the go-to method for many generative tasks, particularly for image-to-image generation tasks such as super-resolution and inpainting. Current diffusion-based methods do not provide statistical guarantees regarding the generated results, often preventing their use in high-stakes situations. To bridge this gap, we construct a confidence interval around each generated pixel such that the true value of the pixel is guaranteed to fall within the interval with a probability set by the user. Since diffusion models parametrize the data distribution, a straightforward way of constructing such intervals is by drawing multiple samples and calculating their bounds. However, this method has several drawbacks: i) slow sampling speeds, ii) suboptimal bounds, and iii) the need to train a diffusion model per task. To mitigate these shortcomings, we propose Conffusion, wherein we fine-tune a pre-trained diffusion model to predict interval bounds in a single forward pass. We show that Conffusion outperforms the baseline method while being three orders of magnitude faster.
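The sampling-based baseline described in the abstract, drawing multiple samples and calculating per-pixel bounds, can be sketched as follows. This is a minimal pure-Python illustration: the function names are hypothetical, toy Gaussian draws stand in for diffusion-model outputs, and the paper additionally calibrates the bounds so that the user-set coverage probability actually holds.

```python
import math
import random

def empirical_quantile(values, q):
    """Nearest-rank empirical quantile of a list of numbers."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, math.ceil(q * len(s)) - 1))
    return s[idx]

def sampling_interval_bounds(samples, alpha=0.1):
    """Per-pixel interval bounds from multiple generated samples.

    `samples` is a list of flattened images (lists of pixel values), all the
    same length -- a stand-in for draws from a diffusion model conditioned on
    the same degraded input.  For each pixel position we take the empirical
    alpha/2 and 1 - alpha/2 quantiles across the samples, giving a
    two-sided interval per pixel.
    """
    n_pixels = len(samples[0])
    lower = [empirical_quantile([img[p] for img in samples], alpha / 2)
             for p in range(n_pixels)]
    upper = [empirical_quantile([img[p] for img in samples], 1 - alpha / 2)
             for p in range(n_pixels)]
    return lower, upper

# toy usage: 200 "samples" of a 4-pixel image drawn around known means
random.seed(0)
means = [0.2, 0.4, 0.6, 0.8]
samples = [[random.gauss(m, 0.05) for m in means] for _ in range(200)]
lo, hi = sampling_interval_bounds(samples, alpha=0.1)
```

Note the cost this sketch makes concrete: every pixel's interval requires many full reverse-diffusion samples, which is exactly the slow path Conffusion replaces with a single forward pass that predicts the lower and upper bounds directly.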
