Paper Title

R-AGNO-RPN: A LIDAR-Camera Region Deep Network for Resolution-Agnostic Detection

Paper Authors

Ruddy Théodose, Dieumet Denis, Thierry Chateau, Vincent Frémont, Paul Checchin

Paper Abstract

Current neural network-based object detection approaches that process LiDAR point clouds are generally trained on data from a single kind of LiDAR sensor. However, their performance decreases when they are tested on data coming from a different LiDAR sensor than the one used for training, i.e., with a different point cloud resolution. In this paper, R-AGNO-RPN, a region proposal network built on the fusion of 3D point clouds and RGB images, is proposed for 3D object detection regardless of point cloud resolution. As our approach is designed to also be applied to low point cloud resolutions, the proposed method focuses on object localization instead of estimating refined boxes on reduced data. The resilience to low-resolution point clouds is obtained through image features accurately mapped to the Bird's Eye View and a specific data augmentation procedure that improves the contribution of the RGB images. To show the proposed network's ability to deal with different point cloud resolutions, experiments are conducted on data from both the KITTI 3D Object Detection and the nuScenes datasets. In addition, to assess its performance, our method is compared to PointPillars, a well-known 3D detection network. Experimental results show that even on point cloud data reduced by $80\%$ of its original points, our method is still able to deliver relevant proposal localizations.
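
The abstract's two key ingredients, decimating the point cloud to emulate a lower-resolution sensor and lifting RGB image information into the Bird's Eye View, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the grid extents, the cell size, and the assumption that `P_cam` is a composed (3, 4) projection from LiDAR coordinates to pixel coordinates (e.g. KITTI's P2 · R0_rect · Tr_velo_to_cam) are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def decimate_points(points, keep_ratio=0.2, seed=0):
    """Randomly keep a fraction of the points; keep_ratio=0.2 emulates an ~80% reduction."""
    rng = np.random.default_rng(seed)
    return points[rng.random(len(points)) < keep_ratio]

def rgb_to_bev(points, image, P_cam,
               x_range=(0.0, 70.0), y_range=(-40.0, 40.0), cell=0.1):
    """Scatter per-point RGB samples into a BEV grid (illustrative, not the paper's mapping).

    points : (N, 3) LiDAR points, x forward, y left
    image  : (H, W, 3) uint8 RGB image
    P_cam  : (3, 4) projection matrix from LiDAR coordinates to pixel coordinates
    """
    H, W, _ = image.shape
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((nx, ny, 3), dtype=np.float32)

    # Project LiDAR points onto the image plane.
    hom = np.hstack([points, np.ones((len(points), 1))])   # (N, 4) homogeneous points
    uvw = hom @ P_cam.T                                     # (N, 3)
    z = np.maximum(uvw[:, 2], 1e-6)                         # avoid division by zero
    u, v = uvw[:, 0] / z, uvw[:, 1] / z

    # Keep points in front of the camera, inside the image, and inside the BEV extents.
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H) \
            & (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) \
            & (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])

    # Write each point's RGB sample into its BEV cell (last write wins on collisions).
    ix = ((points[valid, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[valid, 1] - y_range[0]) / cell).astype(int)
    bev[ix, iy] = image[v[valid].astype(int), u[valid].astype(int)] / 255.0
    return bev

# Example: simulate a sparser scan, then build the RGB BEV channels.
# sparse = decimate_points(lidar_points, keep_ratio=0.2)
# rgb_bev = rgb_to_bev(sparse, rgb_image, P_cam)
```

Such an RGB BEV map can then be stacked with geometric BEV channels and fed to a region proposal network; the point of the sketch is only to show why image-derived features remain available even when most LiDAR points are removed.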
