Title
Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance
Authors
Abstract
We propose a learning-based method for depth from focus/defocus (DFF), which takes a focal stack as input for estimating scene depth. Defocus blur is a useful cue for depth estimation. However, the size of the blur depends not only on scene depth but also on camera settings such as focus distance, focal length, and f-number. Current learning-based methods without any defocus model cannot estimate a correct depth map if the camera settings differ between training and test times. Our method takes a plane sweep volume as input to encode the constraint between scene depth, defocused images, and camera settings, and this intermediate representation enables depth estimation with different camera settings at training and test times. This camera-setting invariance can enhance the applicability of learning-based DFF methods. The experimental results also indicate that our method is robust against a synthetic-to-real domain gap and exhibits state-of-the-art performance.
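The dependence of blur size on both scene depth and camera settings can be illustrated with the standard thin-lens circle-of-confusion formula. The sketch below is not taken from the paper; the function name and parameter names are our own, and it only demonstrates why a fixed learned mapping from blur to depth breaks when focus distance, focal length, or f-number change.

```python
def coc_diameter(depth, focus_dist, focal_len, f_number):
    """Circle-of-confusion diameter on the sensor under the thin-lens model.

    c = (A * f / (d_f - f)) * |d - d_f| / d,  with aperture A = f / N,
    where d is scene depth, d_f the focus distance, f the focal length,
    and N the f-number. All distances share one unit (e.g. meters);
    the result is in that same unit.
    """
    aperture = focal_len / f_number
    return (aperture * focal_len / (focus_dist - focal_len)
            * abs(depth - focus_dist) / depth)

# A point at the focus distance is sharp; the same blur size can arise
# from different depths once the camera settings change.
c1 = coc_diameter(depth=4.0, focus_dist=2.0, focal_len=0.05, f_number=2.0)
c2 = coc_diameter(depth=4.0, focus_dist=2.0, focal_len=0.05, f_number=8.0)
```

With a 50 mm lens focused at 2 m, a point 4 m away yields roughly a 0.32 mm blur circle at f/2 but only a quarter of that at f/8, which is why a defocus model conditioned on the camera settings is needed for camera-setting invariance.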