Paper Title
MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields
Paper Authors
Paper Abstract
Hinging on the representation power of neural networks, neural radiance fields (NeRF) have recently emerged as one of the most promising and widely applicable methods for 3D object and scene representation. However, NeRF faces challenges in practical applications, such as large-scale scenes and edge devices with a limited amount of memory, where data needs to be processed sequentially. Under such incremental learning scenarios, neural networks are known to suffer from catastrophic forgetting: they easily forget previously seen data after training on new data. We observe that previous incremental learning algorithms are limited by either low performance or memory scalability issues. We therefore develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF). MEIL-NeRF takes inspiration from NeRF itself, in that a neural network can serve as a memory that provides pixel RGB values given rays as queries. Motivated by this, our framework learns which rays to use to query NeRF to extract previously seen pixel values. The extracted pixel values are then used to train NeRF in a self-distillation manner to prevent catastrophic forgetting. As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
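The self-distillation idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy linear map stands in for NeRF's volume rendering, the query rays are drawn at random (whereas MEIL-NeRF learns which rays to generate), the frozen-teacher setup and the weighting `lam` are assumptions for exposition, and the total loss combines a photometric term on new data with a distillation term on teacher-extracted previous pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(weights, rays):
    # Toy stand-in for NeRF rendering: maps ray parameters to RGB.
    # The real model is an MLP queried via volume rendering.
    return np.clip(rays @ weights, 0.0, 1.0)

# Frozen copy of the network trained on earlier data: it acts as a
# "memory" that returns pixel colors when queried with rays.
teacher_w = rng.normal(size=(6, 3))
# Current network, continuing training on a new chunk of data.
student_w = teacher_w + 0.01 * rng.normal(size=(6, 3))

# Newly arriving data: rays with ground-truth pixel colors.
new_rays = rng.normal(size=(128, 6))
new_rgb = rng.uniform(size=(128, 3))

# Rays used to query the frozen teacher for previously seen content
# (random here; MEIL-NeRF learns a ray generator for this step).
past_rays = rng.normal(size=(128, 6))
past_rgb = render(teacher_w, past_rays)  # "extracted" previous pixels

# Combined objective: fit new data + self-distill on extracted pixels.
lam = 1.0  # assumed distillation weight
pred_new = render(student_w, new_rays)
pred_past = render(student_w, past_rays)
loss = np.mean((pred_new - new_rgb) ** 2) \
     + lam * np.mean((pred_past - past_rgb) ** 2)
```

Because the previous pixel values are regenerated from the network itself rather than stored, memory consumption stays constant as new data arrives, which is the property the abstract highlights.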