Paper Title
Robust Collaborative 3D Object Detection in Presence of Pose Errors
Paper Authors
Paper Abstract
Collaborative 3D object detection exploits information exchange among multiple agents to enhance the accuracy of object detection in the presence of sensor impairments such as occlusion. However, in practice, pose estimation errors due to imperfect localization cause spatial message misalignment and significantly reduce the performance of collaboration. To alleviate the adverse impact of pose errors, we propose CoAlign, a novel hybrid collaboration framework that is robust to unknown pose errors. The proposed solution relies on novel agent-object pose graph modeling to enhance pose consistency among collaborating agents. Furthermore, we adopt a multi-scale data fusion strategy to aggregate intermediate features at multiple spatial resolutions. Compared with previous works, which require ground-truth pose for training supervision, our proposed CoAlign is more practical, since it does not require any ground-truth pose supervision during training and makes no specific assumptions about pose errors. Extensive evaluation of the proposed method is carried out on multiple datasets, certifying that CoAlign significantly reduces relative localization error and achieves state-of-the-art detection performance when pose errors exist. Code is made available for the use of the research community at https://github.com/yifanlu0227/CoAlign.
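
As a rough illustration of the multi-scale fusion mentioned in the abstract, the sketch below aggregates BEV feature maps from several agents at multiple spatial resolutions, assuming the features have already been warped into the ego agent's coordinate frame. This is not the authors' implementation: the module and variable names (MultiScaleFusion, per_agent_feats) are illustrative assumptions, and the simple max-over-agents aggregation stands in for CoAlign's actual fusion operator.

```python
# Minimal sketch of multi-scale intermediate-feature fusion across agents.
# Assumes each agent contributes BEV feature maps at several resolutions,
# already aligned to the ego coordinate frame. Names are illustrative only.
import torch
import torch.nn as nn


class MultiScaleFusion(nn.Module):
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        # One lightweight 1x1 conv per spatial resolution, applied after fusion.
        self.fuse = nn.ModuleList(nn.Conv2d(c, c, kernel_size=1) for c in channels)

    def forward(self, per_agent_feats):
        # per_agent_feats: list over agents; each entry is a list of feature
        # maps [(C_s, H_s, W_s) for each scale s], coarser with depth.
        fused = []
        for s, conv in enumerate(self.fuse):
            # Permutation-invariant aggregation over agents (element-wise max);
            # the paper's fusion strategy is more sophisticated than this.
            stacked = torch.stack([feats[s] for feats in per_agent_feats], dim=0)
            fused.append(conv(stacked.max(dim=0).values.unsqueeze(0)))
        return fused  # one fused map per scale, each with a batch dim of 1


# Toy usage: two agents, three scales.
feats_a = [torch.randn(64, 128, 128), torch.randn(128, 64, 64), torch.randn(256, 32, 32)]
feats_b = [torch.randn(64, 128, 128), torch.randn(128, 64, 64), torch.randn(256, 32, 32)]
out = MultiScaleFusion()([feats_a, feats_b])
print([o.shape for o in out])
```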