Paper Title
CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for Robust 3D Object Detection
Paper Authors
Paper Abstract
Robust 3D object detection is critical for safe autonomous driving. Camera and radar sensors are synergistic as they capture complementary information and work well under different environmental conditions. Fusing camera and radar data is challenging, however, as each of the sensors lacks information along a perpendicular axis, that is, depth is unknown to camera and elevation is unknown to radar. We propose the camera-radar matching network CramNet, an efficient approach to fuse the sensor readings from camera and radar in a joint 3D space. To leverage radar range measurements for better camera depth predictions, we propose a novel ray-constrained cross-attention mechanism that resolves the ambiguity in the geometric correspondences between camera features and radar features. Our method supports training with sensor modality dropout, which leads to robust 3D object detection, even when a camera or radar sensor suddenly malfunctions on a vehicle. We demonstrate the effectiveness of our fusion approach through extensive experiments on the RADIATE dataset, one of the few large-scale datasets that provide radar radio frequency imagery. A camera-only variant of our method achieves competitive performance in monocular 3D object detection on the Waymo Open Dataset.
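The ray-constrained cross-attention described above can be illustrated with a minimal sketch: a camera pixel's feature acts as the query, and the keys/values are radar features sampled at candidate depths along that pixel's viewing ray, so attention weights effectively distribute the pixel over plausible depths. This is only an illustrative sketch under assumed shapes and a hypothetical function name, not the paper's actual implementation.

```python
import numpy as np

def ray_constrained_cross_attention(cam_feat, radar_feats_along_ray):
    """Illustrative sketch (hypothetical API, not the paper's code).

    cam_feat:              (C,)  camera feature of one pixel (query).
    radar_feats_along_ray: (K, C) radar features sampled at K candidate
                           depths along that pixel's camera ray (keys/values).

    Restricting attention to the ray turns depth estimation into a soft
    selection over the K candidates, weighted by radar evidence.
    Returns a fused (C,) feature.
    """
    C = cam_feat.shape[0]
    # Scaled dot-product scores between the query and each depth candidate.
    scores = radar_feats_along_ray @ cam_feat / np.sqrt(C)   # (K,)
    scores -= scores.max()                                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()          # softmax over depths
    return weights @ radar_feats_along_ray                   # (C,)

# Example with random features: 5 depth candidates, 8-dim features.
rng = np.random.default_rng(0)
fused = ray_constrained_cross_attention(rng.normal(size=8),
                                        rng.normal(size=(5, 8)))
```

In the full model, the attention weights over the K candidates implicitly refine the camera's depth estimate using radar range measurements, which is the ambiguity-resolution role the abstract attributes to this mechanism.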