Paper Title
NeuralPassthrough: Learned Real-Time View Synthesis for VR
Paper Authors
Paper Abstract
Virtual reality (VR) headsets provide an immersive, stereoscopic visual experience, but at the cost of blocking users from directly observing their physical environment. Passthrough techniques are intended to address this limitation by leveraging outward-facing cameras to reconstruct the images that would otherwise be seen by the user without the headset. This is inherently a real-time view synthesis challenge, since passthrough cameras cannot be physically co-located with the eyes. Existing passthrough techniques suffer from distracting reconstruction artifacts, largely due to the lack of accurate depth information (especially for near-field and disoccluded objects), and also exhibit limited image quality (e.g., being low resolution and monochromatic). In this paper, we propose the first learned passthrough method and assess its performance using a custom VR headset that contains a stereo pair of RGB cameras. Through both simulations and experiments, we demonstrate that our learned passthrough method delivers superior image quality compared to state-of-the-art methods, while meeting strict VR requirements for real-time, perspective-correct stereoscopic view synthesis over a wide field of view for desktop-connected headsets.
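To make the underlying view-synthesis problem concrete, the sketch below shows the naive geometric baseline that passthrough improves upon: given per-pixel depth, each camera pixel is back-projected to 3D, transformed by the camera-to-eye extrinsics, and re-projected into the eye's view. This is a minimal illustration only, not the paper's method; the function name `reproject_to_eye` and the parameters `K_cam`, `K_eye`, and `T_cam_to_eye` are hypothetical, and the paper's learned approach replaces this warp with a network that also handles inaccurate depth and disocclusions.

```python
# Minimal sketch of depth-based reprojection from a passthrough camera to an
# eye viewpoint, assuming pinhole intrinsics and known metric depth. In
# practice depth must be estimated, which is the hard part the paper targets.
import numpy as np

def reproject_to_eye(camera_image, depth, K_cam, K_eye, T_cam_to_eye):
    """Forward-warp a camera image to the eye viewpoint using per-pixel depth.

    camera_image: (H, W, 3) RGB image from the outward-facing camera.
    depth:        (H, W) metric depth per camera pixel (assumed known here).
    K_cam, K_eye: (3, 3) pinhole intrinsic matrices.
    T_cam_to_eye: (4, 4) rigid transform from camera to eye coordinates.
    """
    H, W = depth.shape
    # Back-project every camera pixel: X = depth * K_cam^{-1} [u, v, 1]^T
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)
    pts_cam = np.linalg.inv(K_cam) @ pix * depth.reshape(1, -1)         # (3, H*W)

    # Transform the 3D points into the eye's coordinate frame.
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])        # (4, H*W)
    pts_eye = (T_cam_to_eye @ pts_h)[:3]

    # Project into the eye image and splat colors; nearest depth wins.
    proj = K_eye @ pts_eye
    z = proj[2]
    valid = z > 1e-6
    ue = np.round(proj[0, valid] / z[valid]).astype(int)
    ve = np.round(proj[1, valid] / z[valid]).astype(int)
    colors = camera_image.reshape(-1, 3)[valid]

    eye_image = np.zeros_like(camera_image)
    zbuf = np.full((H, W), np.inf)
    inb = (ue >= 0) & (ue < W) & (ve >= 0) & (ve < H)
    for x, y, zi, c in zip(ue[inb], ve[inb], z[valid][inb], colors[inb]):
        if zi < zbuf[y, x]:  # keep the surface nearest to the eye
            zbuf[y, x] = zi
            eye_image[y, x] = c
    return eye_image  # holes remain where no camera pixel maps (disocclusions)
```

Note how the holes left by this warp correspond exactly to the disoccluded regions the abstract mentions: surfaces visible from the eye position but occluded from the camera position have no source pixels, which is one reason a purely geometric reprojection produces distracting artifacts.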