Paper Title
AFT-VO: Asynchronous Fusion Transformers for Multi-View Visual Odometry Estimation
Paper Authors
Paper Abstract
Motion estimation approaches typically employ sensor fusion techniques, such as the Kalman filter, to handle individual sensor failures. More recently, deep learning-based fusion approaches have been proposed, improving performance and requiring fewer model-specific implementations. However, current deep fusion approaches often assume that sensors are synchronised, which is not always practical, especially for low-cost hardware. To address this limitation, in this work we propose AFT-VO, a novel transformer-based sensor fusion architecture for estimating VO from multiple sensors. Our framework combines predictions from asynchronous multi-view cameras and accounts for the time discrepancies of measurements coming from different sources. Our approach first employs a Mixture Density Network (MDN) to estimate the probability distribution of the 6-DoF pose for every camera in the system. A novel transformer-based fusion module, AFT-VO, is then introduced, which combines these asynchronous pose estimates along with their confidences. More specifically, we introduce Discretiser and Source Encoding techniques that enable the fusion of multi-source asynchronous signals. We evaluate our approach on the popular nuScenes and KITTI datasets. Our experiments demonstrate that multi-view fusion for VO estimation provides robust and accurate trajectories, outperforming the state of the art under both challenging weather and lighting conditions.
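The abstract's Discretiser and Source Encoding idea can be illustrated with a minimal sketch: each asynchronous measurement becomes one transformer input token carrying its pose estimate, its confidence, a one-hot encoding of which camera produced it, and a discretised time-bin encoding. All names, dimensions, and the bin size below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def discretise(t, bin_size=0.05):
    """Map a continuous timestamp (seconds) to a discrete time-bin index.
    The bin size of 50 ms is an assumed value for illustration."""
    return int(t // bin_size)

def encode_measurement(pose, confidence, source_id, t,
                       num_sources=3, num_bins=64, bin_size=0.05):
    """Build one fusion token: 6-DoF pose + scalar confidence,
    concatenated with a one-hot source encoding and a one-hot
    discretised-time encoding (hypothetical token layout)."""
    src = np.zeros(num_sources)
    src[source_id] = 1.0
    tim = np.zeros(num_bins)
    tim[discretise(t, bin_size) % num_bins] = 1.0
    return np.concatenate([pose, [confidence], src, tim])

# Three asynchronous cameras with slightly offset timestamps,
# each contributing one token to the fusion transformer.
tokens = np.stack([
    encode_measurement(np.zeros(6), 0.9, source_id=0, t=0.00),
    encode_measurement(np.zeros(6), 0.8, source_id=1, t=0.02),
    encode_measurement(np.zeros(6), 0.7, source_id=2, t=0.06),
])
print(tokens.shape)  # (3, 74): 6 pose + 1 confidence + 3 source + 64 time
```

A transformer can then attend over such tokens regardless of arrival order, since the source and time information is carried explicitly in each token rather than implied by a synchronised input layout.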