Paper Title
DXQ-Net: Differentiable LiDAR-Camera Extrinsic Calibration Using Quality-aware Flow
Paper Authors
Paper Abstract
Accurate LiDAR-camera extrinsic calibration is a precondition for many multi-sensor systems in mobile robots. Most calibration methods rely on laborious manual operations and calibration targets. When working online, a calibration method should be able to extract information from the environment to construct the cross-modal data association. Convolutional neural networks (CNNs) have powerful feature extraction ability and have been used for calibration. However, most past methods solve extrinsic calibration as a regression task, without considering the geometric constraints involved. In this paper, we propose a novel end-to-end extrinsic calibration method named DXQ-Net, which uses a differentiable pose estimation module for generalization. We formulate a probabilistic model for the LiDAR-camera calibration flow, yielding an uncertainty prediction that measures the quality of the LiDAR-camera data association. Testing experiments illustrate that our method achieves performance competitive with other methods for the translation component and state-of-the-art performance for the rotation component. Generalization experiments illustrate that the generalization performance of our method is significantly better than that of other deep learning-based methods.
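The quality-aware idea described in the abstract, down-weighting unreliable flow correspondences inside a differentiable pose solver, can be sketched with a weighted Kabsch rigid alignment. This is a simplified, illustrative stand-in for the paper's pose estimation module, not its actual implementation; all names and the toy data are assumptions.

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """Weighted Kabsch: find rigid (R, t) minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) corresponding 3D points; w: (N,) per-point confidence
    (e.g. derived from a predicted flow uncertainty, low weight = unreliable).
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)              # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    S = (src - mu_s).T @ ((dst - mu_d) * w[:, None])   # 3x3 weighted covariance
    U, _, Vt = np.linalg.svd(S)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy example: a few correspondences are corrupted, but their low weight
# (standing in for a high predicted uncertainty) keeps the pose accurate.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
dst[:5] += rng.normal(scale=0.5, size=(5, 3))          # 5 corrupted correspondences
w = np.ones(50)
w[:5] = 1e-3                                           # quality-aware down-weighting
R_est, t_est = weighted_rigid_align(src, dst, w)
```

Because every step (weighted means, matrix products, SVD) is differentiable almost everywhere, a solver of this shape can sit at the end of a network and let gradients flow from a pose loss back into the flow and uncertainty predictions, which is the property the abstract's end-to-end formulation relies on.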