Paper Title
Learning Optical Flow with Adaptive Graph Reasoning
Paper Authors
Paper Abstract
Estimating per-pixel motion between video frames, known as optical flow, is a long-standing problem in video understanding and analysis. Most contemporary optical flow techniques largely focus on addressing cross-image matching with feature similarity, and few methods consider how to explicitly reason over the given scene to achieve a holistic motion understanding. In this work, taking a fresh perspective, we introduce a novel graph-based approach, called adaptive graph reasoning for optical flow (AGFlow), to emphasize the value of scene/context information in optical flow. Our key idea is to decouple context reasoning from the matching procedure, and to exploit scene information to effectively assist motion estimation by learning to reason over an adaptive graph. The proposed AGFlow can effectively exploit context information and incorporate it into the matching procedure, producing more robust and accurate results. On both the Sintel clean and final passes, our AGFlow achieves the best accuracy with EPE of 1.43 and 2.47 pixels, outperforming state-of-the-art approaches by 11.2% and 13.6%, respectively.
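To make the key idea concrete, here is a minimal NumPy sketch of the general pattern the abstract describes: projecting context features onto a small set of graph nodes, propagating information over an adjacency built adaptively from node similarity, and fusing the reasoned result back into the motion (matching) features. This is an illustrative sketch under our own assumptions, not the paper's actual AGFlow module; the function name, the random soft-assignment, and the single round of message passing are all placeholders for learned components.

```python
import numpy as np

def adaptive_graph_reasoning(context, motion, num_nodes=8, seed=0):
    """Illustrative sketch (not the paper's implementation):
    context, motion: arrays of shape (channels, H*W).
    Returns motion features augmented with graph-reasoned context."""
    rng = np.random.default_rng(seed)
    c, hw = context.shape

    # Soft-assign every pixel to a small set of graph nodes
    # (in a learned model this projection would be trained).
    assign = rng.standard_normal((num_nodes, hw))
    assign = np.exp(assign - assign.max(axis=0, keepdims=True))
    assign /= assign.sum(axis=0, keepdims=True)        # softmax over nodes

    nodes = context @ assign.T                         # (c, num_nodes)

    # Adaptive adjacency from node feature similarity, row-normalized.
    adj = nodes.T @ nodes                              # (num_nodes, num_nodes)
    adj = np.exp(adj - adj.max(axis=1, keepdims=True))
    adj /= adj.sum(axis=1, keepdims=True)

    nodes = nodes @ adj.T                              # one message-passing step

    # Project reasoned node features back to pixels and fuse residually
    # into the motion/matching features.
    reasoned = nodes @ assign                          # (c, H*W)
    return motion + reasoned
```

The decoupling the abstract refers to shows up here as the context branch being reasoned over separately (as graph nodes) before being injected back into the matching features, rather than the two being entangled in a single correlation step.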