Paper Title
A Closer Look at Invariances in Self-supervised Pre-training for 3D Vision
Paper Authors
Paper Abstract
Self-supervised pre-training for 3D vision has drawn increasing research interest in recent years. In order to learn informative representations, many previous works exploit invariances of 3D features, e.g., perspective-invariance between views of the same scene, modality-invariance between depth and RGB images, and format-invariance between point clouds and voxels. Although they have achieved promising results, previous studies lack a systematic and fair comparison of these invariances. To address this issue, our work, for the first time, introduces a unified framework under which various pre-training methods can be investigated. We conduct extensive experiments and provide a closer look at the contributions of different invariances in 3D pre-training. In addition, we propose a simple but effective method that jointly pre-trains a 3D encoder and a depth map encoder using contrastive learning. Models pre-trained with our method gain a significant performance boost on downstream tasks. For instance, a pre-trained VoteNet outperforms previous methods on the SUN RGB-D and ScanNet object detection benchmarks by a clear margin.
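The cross-modal contrastive objective described in the abstract can be sketched as an InfoNCE-style loss over paired 3D and depth-map embeddings. The sketch below is illustrative only and assumes the paper's setup; the function name, embedding shapes, and temperature value are hypothetical and not taken from the paper.

```python
import numpy as np

def cross_modal_info_nce(z_3d, z_depth, temperature=0.07):
    """InfoNCE loss between paired 3D and depth embeddings.

    z_3d, z_depth: (N, D) arrays where row i of each comes from the
    same scene, so the N matched pairs are positives and the N*(N-1)
    mismatched pairs serve as negatives. Shapes and temperature are
    illustrative assumptions, not the paper's exact configuration.
    """
    # L2-normalize so the dot product is cosine similarity
    z_3d = z_3d / np.linalg.norm(z_3d, axis=1, keepdims=True)
    z_depth = z_depth / np.linalg.norm(z_depth, axis=1, keepdims=True)

    # (N, N) similarity matrix; positives lie on the diagonal
    logits = z_3d @ z_depth.T / temperature

    # log-softmax over each row, then take the diagonal (positive) terms
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_3d))
    return -log_prob[idx, idx].mean()
```

Minimizing this loss pulls each scene's 3D and depth embeddings together while pushing apart embeddings from different scenes, which is the standard mechanism by which contrastive pre-training encodes modality-invariance.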