Title

DisARM: Displacement Aware Relation Module for 3D Detection

Authors

Yao Duan, Chenyang Zhu, Yuqing Lan, Renjiao Yi, Xinwang Liu, Kai Xu

Abstract

We introduce the Displacement Aware Relation Module (DisARM), a novel neural network module for enhancing the performance of 3D object detection in point cloud scenes. The core idea of our method is that contextual information is critical for telling objects apart when the instance geometry is incomplete or featureless. We find that relations between proposals provide a good representation for describing the context. However, adopting relations between all object or patch proposals for detection is inefficient, and an imbalanced combination of local and global relations brings extra noise that can mislead training. Rather than working with all relations, we find that training with relations only between the most representative proposals, or anchors, significantly boosts detection performance. A good anchor should be semantic-aware with no ambiguity and independent of the other anchors. To find the anchors, we first perform a preliminary relation anchor module with an objectness-aware sampling approach and then devise a displacement-based module that weighs relation importance for better utilization of contextual information. This lightweight relation module leads to significantly higher accuracy of object instance detection when plugged into state-of-the-art detectors. Evaluations on public benchmarks of real-world scenes show that our method achieves state-of-the-art performance on both SUN RGB-D and ScanNet V2.
