Paper Title

Unsupervised Visual Odometry and Action Integration for PointGoal Navigation in Indoor Environment

Authors

Yijun Cao, Xianshi Zhang, Fuya Luo, Chuan Lin, Yongjie Li

Abstract

PointGoal navigation in indoor environments is a fundamental task for personal robots: navigate to a specified point. Recent studies solved this PointGoal navigation task with a near-perfect success rate in photo-realistically simulated environments, under the assumptions of noiseless actuation and, most importantly, perfect localization with GPS and compass sensors. However, accurate GPS signals are difficult to obtain in real indoor environments. To improve PointGoal navigation accuracy without GPS signals, we use visual odometry (VO) and propose a novel action integration module (AIM) trained in an unsupervised manner. Specifically, the unsupervised VO computes the relative pose of the agent from the re-projection error of two adjacent frames, and path integration over these relative poses then replaces the accurate GPS signal. The pseudo-position estimated by VO is used to train the action integration module, which helps the agent update its internal perception of its location and improves the navigation success rate. The training and inference processes use only RGB, depth, collision, and self-action information. Experiments show that the proposed system achieves satisfactory results and outperforms partially supervised learning algorithms on the popular Gibson dataset.
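To make the path-integration step concrete, below is a minimal sketch (not the paper's implementation): it accumulates the per-step relative poses that an unsupervised VO module would output into a global position and heading, standing in for the GPS+Compass reading. The function name `integrate_path` and the 2D (dx, dy, dθ) pose parameterization are illustrative assumptions, not names from the paper.

```python
import numpy as np

def integrate_path(relative_poses, start=(0.0, 0.0, 0.0)):
    """Accumulate per-step relative poses (dx, dy, dtheta), expressed in the
    agent's egocentric frame, into a global (x, y, theta) trajectory.

    In the paper's setting, each relative pose would come from the
    unsupervised VO estimate between two adjacent RGB-D frames; the
    integrated pose replaces the GPS+Compass reading.
    """
    x, y, theta = start
    trajectory = [(x, y, theta)]
    for dx, dy, dtheta in relative_poses:
        # Rotate the egocentric translation into the world frame,
        # then accumulate position and heading.
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta = (theta + dtheta + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
        trajectory.append((x, y, theta))
    return np.array(trajectory)

# Example: three 0.25 m forward steps, each with a 10-degree left turn.
steps = [(0.25, 0.0, np.deg2rad(10.0))] * 3
print(integrate_path(steps))
```

Note that each VO-estimated relative pose carries some error, so the integrated position drifts over long trajectories; this is presumably why the system trains an action integration module on the VO pseudo-positions rather than trusting raw integration alone.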
