Paper Title
SemanticBEVFusion: Rethink LiDAR-Camera Fusion in Unified Bird's-Eye View Representation for 3D Object Detection
Paper Authors
Paper Abstract
LiDAR and camera are two essential sensors for 3D object detection in autonomous driving. LiDAR provides accurate and reliable 3D geometry information, while the camera provides rich texture and color. Despite the increasing popularity of fusing these two complementary sensors, it remains challenging to effectively fuse the 3D LiDAR point cloud with 2D camera images. Recent methods focus either on point-level fusion, which paints the LiDAR point cloud with camera features in the perspective view, or on bird's-eye view (BEV)-level fusion, which unifies multi-modality features in the BEV representation. In this paper, we rethink these previous fusion strategies and analyze their information loss and influence on geometric and semantic features. We present SemanticBEVFusion to deeply fuse camera features with LiDAR features in a unified BEV representation while maintaining per-modality strengths for 3D object detection. Our method achieves state-of-the-art performance on the large-scale nuScenes dataset, especially for challenging distant objects. The code will be made publicly available.
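The abstract contrasts two fusion families. As a concrete illustration of the first, point-level fusion, below is a minimal NumPy sketch of "painting" LiDAR points with per-pixel camera features via perspective projection. It assumes a simple pinhole camera model; the function name and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of point-level fusion ("painting"), assuming a pinhole
# camera model. Names and signatures are illustrative, not from the paper.
import numpy as np

def paint_points(points_xyz, cam_features, K, T_cam_from_lidar):
    """Append camera features to LiDAR points by perspective projection.

    points_xyz:        (N, 3) LiDAR points in the LiDAR frame.
    cam_features:      (H, W, C) per-pixel camera feature map (e.g. semantics).
    K:                 (3, 3) camera intrinsic matrix.
    T_cam_from_lidar:  (4, 4) extrinsic transform from LiDAR to camera frame.
    Returns (N, 3 + C) painted points; points outside the image get zeros.
    """
    n = points_xyz.shape[0]
    h, w, c = cam_features.shape

    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    # Project onto the image plane; keep only points in front of the camera.
    uvw = (K @ pts_cam.T).T
    z = uvw[:, 2]
    valid = z > 1e-6
    u = np.zeros(n, dtype=np.int64)
    v = np.zeros(n, dtype=np.int64)
    u[valid] = np.round(uvw[valid, 0] / z[valid]).astype(np.int64)
    v[valid] = np.round(uvw[valid, 1] / z[valid]).astype(np.int64)
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Gather the per-pixel camera features for the visible points.
    painted = np.zeros((n, c), dtype=cam_features.dtype)
    painted[valid] = cam_features[v[valid], u[valid]]
    return np.hstack([points_xyz, painted])
```

BEV-level fusion, by contrast, first lifts camera features into the bird's-eye-view grid and fuses them with the LiDAR BEV features there, rather than attaching camera features to individual points as above.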