Paper Title
Enhancement of Novel View Synthesis Using Omnidirectional Image Completion
Paper Authors
Paper Abstract
In this study, we present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF). Prior studies relied on the neighborhood interpolation capability of multi-layer perceptrons to complete missing regions caused by occlusion and zooming, which led to artifacts. In the proposed method, the input image is reprojected to 360-degree RGB images at other camera positions, the missing regions of the reprojected images are completed by a 2D image generative model, and the completed images are used to train the NeRF. Because multiple completed images contain inconsistencies in 3D, we introduce a method that trains the NeRF model on a subset of completed images that covers the target scene with little overlap among the completed regions. The selection of such a subset can be formulated as a maximum weight independent set problem, which is solved by simulated annealing. Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
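The abstract formulates the selection of completed images as a maximum weight independent set (MWIS) problem solved by simulated annealing. The sketch below is only an illustration of that general idea, not the paper's actual implementation: the function name `mwis_simulated_annealing`, the use of per-image coverage scores as node weights, and the use of pairwise overlap to define conflict edges are all assumptions made for the example.

```python
import math
import random

def mwis_simulated_annealing(weights, edges, n_steps=20000,
                             t_start=1.0, t_end=1e-3, seed=0):
    """Approximate a maximum-weight independent set with simulated annealing.

    weights: dict node -> weight (e.g., a scene-coverage score of a completed image;
             the scoring itself is a hypothetical choice for this sketch)
    edges:   iterable of (u, v) pairs marking images whose completed regions
             overlap too much to be used together
    Returns the best independent set found, as a frozenset of nodes.
    """
    rng = random.Random(seed)
    nodes = list(weights)
    neighbors = {u: set() for u in nodes}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    def total(s):
        return sum(weights[u] for u in s)

    current, best = set(), set()
    for step in range(n_steps):
        # geometric cooling schedule from t_start down to t_end
        t = t_start * (t_end / t_start) ** (step / max(1, n_steps - 1))
        u = rng.choice(nodes)
        if u in current:
            candidate = current - {u}                    # drop one image
        else:
            candidate = (current - neighbors[u]) | {u}   # add it, evicting conflicts
        delta = total(candidate) - total(current)
        # Metropolis criterion: always accept improvements, sometimes accept losses
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate
            if total(current) > total(best):
                best = set(current)
    return frozenset(best)


if __name__ == "__main__":
    # Toy usage: five candidate completed images with made-up coverage scores;
    # edges connect pairs whose completed regions overlap heavily.
    weights = {"img0": 0.9, "img1": 0.7, "img2": 0.8, "img3": 0.4, "img4": 0.6}
    edges = [("img0", "img1"), ("img1", "img2"), ("img2", "img3")]
    print(mwis_simulated_annealing(weights, edges))
```

Because proposals only toggle one image (evicting its conflicting neighbors when it is added), every visited state stays independent, and the annealing temperature lets the search escape locally optimal subsets before converging.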