Paper Title

D-NeRF: Neural Radiance Fields for Dynamic Scenes

Authors

Albert Pumarola, Enric Corona, Gerard Pons-Moll, Francesc Moreno-Noguer

Abstract

Neural rendering techniques combining machine learning with geometric reasoning have arisen as one of the most promising approaches for synthesizing novel views of a scene from a sparse set of images. Among these, the Neural Radiance Field (NeRF) stands out: it trains a deep network to map 5D input coordinates (representing spatial location and viewing direction) into a volume density and view-dependent emitted radiance. However, despite achieving an unprecedented level of photorealism in the generated images, NeRF is only applicable to static scenes, where the same spatial location can be queried from different images. In this paper we introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain, making it possible to reconstruct and render novel images of objects under rigid and non-rigid motions from a single camera moving around the scene. For this purpose we consider time as an additional input to the system and split the learning process into two main stages: one that encodes the scene into a canonical space, and another that maps this canonical representation into the deformed scene at a particular time. Both mappings are learned simultaneously using fully-connected networks. Once the networks are trained, D-NeRF can render novel images, controlling both the camera view and the time variable, and thus the object movement. We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions. Code, model weights and the dynamic scenes dataset will be released.
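
The abstract describes two fully-connected networks learned jointly: a deformation mapping that takes a point and a time and returns a displacement into the canonical space, and a canonical radiance field that turns the deformed point and a viewing direction into density and colour. The sketch below illustrates this two-stage design only; it is not the authors' released code, and the class names, layer widths, depths and positional-encoding frequencies are illustrative assumptions rather than the paper's exact hyperparameters.

```python
# A minimal PyTorch sketch of the D-NeRF two-stage design (not the authors' code).
# Widths, depths and encoding frequencies below are placeholder assumptions.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs):
    """Standard NeRF-style sinusoidal encoding of each input coordinate."""
    out = [x]
    for i in range(num_freqs):
        out.append(torch.sin((2.0 ** i) * x))
        out.append(torch.cos((2.0 ** i) * x))
    return torch.cat(out, dim=-1)


class DeformationNet(nn.Module):
    """Maps a point x at time t to its displacement into the canonical space."""
    def __init__(self, num_freqs_x=10, num_freqs_t=4, width=256):
        super().__init__()
        in_dim = 3 * (1 + 2 * num_freqs_x) + 1 * (1 + 2 * num_freqs_t)
        self.num_freqs_x, self.num_freqs_t = num_freqs_x, num_freqs_t
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 3),  # displacement (dx, dy, dz)
        )

    def forward(self, x, t):
        h = torch.cat([positional_encoding(x, self.num_freqs_x),
                       positional_encoding(t, self.num_freqs_t)], dim=-1)
        return self.mlp(h)


class CanonicalNeRF(nn.Module):
    """Maps a canonical-space point and view direction to (colour, density)."""
    def __init__(self, num_freqs_x=10, num_freqs_d=4, width=256):
        super().__init__()
        in_dim = 3 * (1 + 2 * num_freqs_x) + 3 * (1 + 2 * num_freqs_d)
        self.num_freqs_x, self.num_freqs_d = num_freqs_x, num_freqs_d
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 4),  # (r, g, b, sigma)
        )

    def forward(self, x_canonical, d):
        h = torch.cat([positional_encoding(x_canonical, self.num_freqs_x),
                       positional_encoding(d, self.num_freqs_d)], dim=-1)
        out = self.mlp(h)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3:])
        return rgb, sigma


# Querying the dynamic scene: deform each sampled point back to the canonical
# frame, then evaluate the canonical radiance field there.
deform, canonical = DeformationNet(), CanonicalNeRF()
x = torch.rand(1024, 3)           # 3D points sampled along camera rays
d = torch.rand(1024, 3)           # viewing directions
t = torch.full((1024, 1), 0.5)    # normalized time of the target frame
rgb, sigma = canonical(x + deform(x, t), d)
```

A rendered frame would then composite these per-sample colours and densities along each ray with the standard NeRF volume-rendering quadrature; changing t while keeping the camera fixed animates the object, and changing the camera while keeping t fixed yields novel views of a single instant.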
