Paper Title
HDR Environment Map Estimation for Real-Time Augmented Reality
Paper Authors
Abstract
We present a method to estimate an HDR environment map from a narrow field-of-view LDR camera image in real time. This enables perceptually appealing reflections and shading on virtual objects of any material finish, from mirror to diffuse, rendered into a real physical environment using augmented reality. Our method is based on our efficient convolutional neural network architecture, EnvMapNet, trained end-to-end with two novel losses: ProjectionLoss for the generated image, and ClusterLoss for adversarial training. Through qualitative and quantitative comparison to state-of-the-art methods, we demonstrate that our algorithm reduces the directional error of estimated light sources by more than 50%, and achieves a 3.7 times lower Fréchet Inception Distance (FID). We further showcase a mobile application that runs our neural network model in under 9 ms on an iPhone XS and renders visually coherent virtual objects in real time in previously unseen real-world environments.
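For context on the FID figure quoted above: the Fréchet Inception Distance compares Gaussian fits of Inception-network features extracted from real and generated images. The formula below is the standard definition of the metric, not a contribution of this paper:

```latex
\mathrm{FID} = \left\lVert \mu_r - \mu_g \right\rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g
  - 2 \left( \Sigma_r \Sigma_g \right)^{1/2} \right)
```

Here $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of the feature distributions for real and generated environment maps, respectively; lower values indicate that the generated images are statistically closer to real ones.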