Paper Title

Streaming Radiance Fields for 3D Video Synthesis

Authors

Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Ping Tan

Abstract

We present an explicit-grid based method for efficiently reconstructing streaming radiance fields for novel view synthesis of real-world dynamic scenes. Instead of training a single model that combines all the frames, we formulate the dynamic modeling problem with an incremental learning paradigm in which a per-frame model difference is trained to complement the adaptation of a base model to the current frame. By exploiting a simple yet effective tuning strategy with narrow bands, the proposed method realizes a feasible framework for handling video sequences on the fly with high training efficiency. The storage overhead induced by using explicit grid representations can be significantly reduced through model-difference based compression. We also introduce an efficient strategy to further accelerate model optimization for each frame. Experiments on challenging video sequences demonstrate that our approach is capable of achieving a training speed of 15 seconds per frame with competitive rendering quality, which attains a $1000 \times$ speedup over the state-of-the-art implicit methods. Code is available at https://github.com/AlgoHunt/StreamRF.
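The core idea of the abstract, training only a per-frame difference on top of a base model and storing that difference sparsely, can be illustrated with a toy sketch. This is NOT the authors' implementation (see the linked repository for that); it is a minimal illustration on a synthetic voxel grid, with all array shapes, learning rates, and thresholds chosen arbitrarily for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Explicit voxel grid for the base frame (frame 0); shape is arbitrary.
base = rng.normal(size=(8, 8, 8))

# Synthetic "video": each frame changes only ~5% of voxels, mimicking
# the limited inter-frame change that makes difference-based storage cheap.
frames = [base + np.where(rng.random(base.shape) < 0.05,
                          rng.normal(scale=0.5, size=base.shape), 0.0)
          for _ in range(3)]

diffs = []
current = base.copy()
for target in frames:
    diff = np.zeros_like(current)
    # Incremental learning: optimize ONLY the difference, keeping the
    # current (base) model fixed. Plain gradient descent on 0.5*||r||^2.
    for _ in range(200):
        residual = (current + diff) - target
        diff -= 0.5 * residual
    # Sparse storage: keep only entries that actually changed
    # (a crude stand-in for the paper's compression of model differences).
    mask = np.abs(diff) > 1e-3
    diffs.append((mask, diff[mask]))
    current = current + diff  # roll the model forward to the new frame

sparsity = float(np.mean([m.mean() for m, _ in diffs]))
print(f"avg fraction of voxels stored per frame: {sparsity:.3f}")
```

Because consecutive frames share most of their content, the stored difference touches only a small fraction of voxels, which is where the storage savings over saving a full explicit grid per frame come from.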
