Paper Title

T4DT: Tensorizing Time for Learning Temporal 3D Visual Data

Paper Authors

Usvyatsov, Mikhail; Ballester-Ripoll, Rafael; Bashaeva, Lina; Schindler, Konrad; Ferrer, Gonzalo; Oseledets, Ivan

Paper Abstract

Unlike 2D raster images, 3D visual data processing has no single dominant representation. Different formats such as point clouds, meshes, or implicit functions each have their strengths and weaknesses. Still, grid representations such as signed distance functions also have attractive properties in 3D: in particular, they offer constant-time random access and are eminently suitable for modern machine learning. Unfortunately, the storage size of a grid grows exponentially with its dimension, so grids often exceed memory limits even at moderate resolution. This work proposes using low-rank tensor formats, including the Tucker, tensor train, and quantics tensor train decompositions, to compress time-varying 3D data. Our method iteratively computes, voxelizes, and compresses each frame's truncated signed distance function, then applies tensor rank truncation to condense all frames into a single compressed tensor that represents the entire 4D scene. We show that low-rank tensor compression offers an extremely compact way to store and query time-varying signed distance functions: it significantly reduces the memory footprint of 4D scenes while preserving their geometric quality remarkably well. Unlike existing iterative learning-based approaches such as DeepSDF and NeRF, our method uses a closed-form algorithm with theoretical guarantees.
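To make the abstract's core idea concrete, below is a minimal, illustrative sketch (not the authors' released code) of compressing a 4D (x, y, z, t) signed-distance grid with the tensor-train (TT) format via the classic TT-SVD algorithm, using only NumPy. The grid size, the toy sphere SDF, and the tolerance `eps` are assumptions made up for this demo:

```python
# Hedged sketch: TT-SVD compression of a toy 4D signed-distance grid.
import numpy as np

def tt_svd(tensor, eps=1e-4):
    """Decompose `tensor` into TT cores via sequential truncated SVDs.

    The per-step budget eps / sqrt(d - 1) * ||tensor||_F yields the
    standard guarantee ||tensor - approx||_F <= eps * ||tensor||_F.
    """
    dims = tensor.shape
    d = len(dims)
    delta = eps / np.sqrt(d - 1) * np.linalg.norm(tensor)
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        u, sv, vt = np.linalg.svd(mat, full_matrices=False)
        # tail[i] = Frobenius error if only the first i singular values are kept
        tail = np.sqrt(np.cumsum(sv[::-1] ** 2))[::-1]
        keep = np.nonzero(tail <= delta)[0]
        r = max(1, int(keep[0]) if keep.size else len(sv))
        cores.append(u[:, :r].reshape(rank, dims[k], r))
        mat = (sv[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor (for error checking)."""
    full = cores[0].reshape(cores[0].shape[1], -1)
    for core in cores[1:]:
        r, n, r2 = core.shape
        full = (full @ core.reshape(r, n * r2)).reshape(-1, r2)
    return full.reshape([c.shape[1] for c in cores])

# Toy 4D scene: a sphere whose radius grows over time (t is the 4th axis).
x = np.linspace(-1.0, 1.0, 16)
t = np.linspace(0.3, 0.7, 8)
X, Y, Z, T = np.meshgrid(x, x, x, t, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - T  # signed distance to the sphere

cores = tt_svd(sdf, eps=1e-4)
approx = tt_to_full(cores)
rel_err = np.linalg.norm(sdf - approx) / np.linalg.norm(sdf)
n_full, n_tt = sdf.size, sum(c.size for c in cores)
print(f"rel. error {rel_err:.2e}, TT params {n_tt} vs full {n_full}")
```

A quantics TT variant would additionally reshape each power-of-two axis into binary modes before running the same algorithm, and the Tucker format can be obtained analogously via a higher-order SVD; both are likewise closed-form, which is the "theoretical guarantees" contrast the abstract draws with iteratively trained models.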
