Paper Title
DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM
Paper Authors
Paper Abstract
The assumption of scene rigidity is common in visual SLAM algorithms. However, it limits their applicability in populated real-world environments. Furthermore, most scenarios, including autonomous driving, multi-robot collaboration, and augmented/virtual reality, require explicit motion information about the surroundings to aid decision making and scene understanding. In this paper we present DynaSLAM II, a visual SLAM system for stereo and RGB-D configurations that tightly integrates multi-object tracking. DynaSLAM II makes use of instance semantic segmentation and ORB features to track dynamic objects. The structure of the static scene and of the dynamic objects is optimized jointly with the trajectories of both the camera and the moving agents within a novel bundle adjustment proposal. The 3D bounding boxes of the objects are also estimated and loosely optimized within a fixed temporal window. We demonstrate that tracking dynamic objects not only provides rich clues for scene understanding but is also beneficial for camera tracking. The project code will be released upon acceptance.
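For illustration only, a tightly-coupled bundle adjustment of this kind could jointly minimize the reprojection error of static points over the camera poses and of object-anchored points over both the camera poses and the time-varying object poses. This is a minimal sketch with assumed notation, not necessarily the exact cost function of DynaSLAM II:

\min_{\{T_{cw}^{i},\, T_{wo_j}^{i},\, x_k,\, y_l\}} \;
\sum_{i,k} \rho\!\left( \left\| \pi\!\left( T_{cw}^{i}\, x_k \right) - u_{ik} \right\|_{\Sigma}^{2} \right)
\;+\;
\sum_{i,j,l} \rho\!\left( \left\| \pi\!\left( T_{cw}^{i}\, T_{wo_j}^{i}\, y_l \right) - v_{ijl} \right\|_{\Sigma}^{2} \right)

Here $T_{cw}^{i}$ denotes the camera pose at frame $i$, $T_{wo_j}^{i}$ the pose of moving object $j$ at frame $i$, $x_k$ a static 3D point in world coordinates, $y_l$ a dynamic point expressed in its object's reference frame, $\pi(\cdot)$ the stereo/RGB-D projection function, $u_{ik}$ and $v_{ijl}$ the observed keypoint measurements, and $\rho$ a robust kernel (e.g. Huber). Optimizing both residual sums in a single problem is what makes the camera tracking and the multi-object tracking tightly coupled.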