Paper Title
Object-and-Action Aware Model for Visual Language Navigation
Paper Authors
Paper Abstract
Vision-and-Language Navigation (VLN) is unique in that it requires turning relatively general natural-language instructions into robot agent actions on the basis of the visible environment. This requires extracting value from two very different types of natural-language information. The first is object description (e.g., 'table', 'door'), each serving as a cue for the agent to determine the next action by locating the mentioned item in the environment; the second is action specification (e.g., 'go straight', 'turn left'), which allows the agent to predict its next movements directly, without relying on visual perception. However, most existing methods pay little attention to distinguishing these two kinds of information from each other during instruction encoding, and mix together the matching of textual object/action encodings with the visual perception/orientation features of candidate viewpoints. In this paper, we propose an Object-and-Action Aware Model (OAAM) that processes these two forms of natural-language instruction separately. This enables each process to flexibly match object-centered or action-centered instructions to its own counterpart: visual perception or action orientation. One side effect of this solution, however, is that an object mentioned in the instructions may be observed in the direction of two or more candidate viewpoints, so the OAAM may not select the viewpoint on the shortest path as its next action. To handle this problem, we design a simple but effective path loss that penalizes trajectories deviating from the ground-truth path. Experimental results demonstrate the effectiveness of the proposed model and path loss, and the superiority of their combination, with a 50% SPL score on the R2R dataset and a 40% CLS score on the R4R dataset in unseen environments, outperforming the previous state of the art.
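To make the two-branch idea concrete, below is a minimal PyTorch sketch of how object-centered and action-centered instruction encodings could each be matched to their own counterpart features (visual appearance vs. orientation) and then adaptively combined. All class, tensor, and dimension names here are assumptions for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchScorer(nn.Module):
    """Scores candidate viewpoints with separate object and action branches.

    obj_text / act_text: instruction encodings attended by object- and
    action-related words (hypothetical inputs, shape [B, d_text]).
    cand_vis / cand_ori: visual appearance ([B, K, d_vis]) and orientation
    ([B, K, d_ori]) features of the K candidate viewpoints.
    """
    def __init__(self, d_text, d_vis, d_ori, d=256):
        super().__init__()
        self.obj_text_proj = nn.Linear(d_text, d)
        self.act_text_proj = nn.Linear(d_text, d)
        self.vis_proj = nn.Linear(d_vis, d)
        self.ori_proj = nn.Linear(d_ori, d)
        # Learned gate deciding how much to trust each branch at this step.
        self.gate = nn.Linear(d_text, 2)

    def forward(self, obj_text, act_text, cand_vis, cand_ori):
        # Object branch: match object-centered text to visual features.
        obj_score = torch.einsum(
            'bd,bkd->bk', self.obj_text_proj(obj_text), self.vis_proj(cand_vis))
        # Action branch: match action-centered text to orientation features.
        act_score = torch.einsum(
            'bd,bkd->bk', self.act_text_proj(act_text), self.ori_proj(cand_ori))
        # Adaptively combine the two branch scores per step.
        w = torch.softmax(self.gate(obj_text + act_text), dim=-1)  # [B, 2]
        logits = w[:, :1] * obj_score + w[:, 1:] * act_score
        return F.log_softmax(logits, dim=-1)  # distribution over candidates
```

The key design point the abstract emphasizes is the separation itself: each branch only ever compares like with like (object words against appearance, action words against orientation), rather than mixing both in a single matching module.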
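The path loss can similarly be sketched in a REINFORCE-style form: weight each step's chosen action by how far the visited viewpoint strays from the ground-truth path, so off-path choices are pushed down. The exact formulation in the paper may differ; the distance function and weighting below are assumptions.

```python
import torch

def path_loss(log_probs, actions, traj_nodes, gt_path, dist):
    """log_probs:  list of [K] log-prob tensors, one per step.
    actions:    list of chosen candidate indices, one per step.
    traj_nodes: viewpoint ids visited by the agent, aligned with steps.
    gt_path:    viewpoint ids on the ground-truth path.
    dist:       dist(u, v) -> geodesic distance between two viewpoints.
    """
    loss = 0.0
    for lp, a, node in zip(log_probs, actions, traj_nodes):
        # Deviation = distance from the visited node to the nearest
        # ground-truth node; zero when the agent stays on the path.
        deviation = min(dist(node, g) for g in gt_path)
        # Minimizing lp[a] * deviation lowers the probability of
        # choices that took the agent off the ground-truth path.
        loss = loss + lp[a] * deviation
    return loss / max(len(actions), 1)
```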