Paper Title
Modular Action Concept Grounding in Semantic Video Prediction
Paper Authors
Paper Abstract
Recent works in video prediction have mainly focused on passive forecasting and low-level action-conditional prediction, which sidestep learning the interactions between agents and objects. We introduce the task of semantic action-conditional video prediction, which uses semantic action labels to describe those interactions and can be regarded as the inverse problem of action recognition. The challenge of this new task lies primarily in how to effectively inform the model of semantic action information. Inspired by the idea of Mixture of Experts, we embody each abstract label as a structured combination of various visual concept learners and propose a novel video prediction model, the Modular Action Concept Network (MAC). Our method is evaluated on two newly designed synthetic datasets, CLEVR-Building-Blocks and Sapien-Kitchen, and one real-world dataset called Tower-Creation. Extensive experiments demonstrate that MAC can correctly condition on the given instructions and generate the corresponding future frames without the need for bounding boxes. We further show that the trained model can generalize out of distribution, be quickly adapted to new object categories, and exploit its learned features for object detection, demonstrating progression toward higher-level cognitive abilities. More visualizations can be found at http://www.pair.toronto.edu/mac/.
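To make the Mixture-of-Experts idea in the abstract concrete, below is a minimal PyTorch sketch of how a semantic action label could gate a structured combination of visual concept learners to condition frame prediction. This is an illustrative assumption, not the authors' actual MAC architecture: the class names (`ConceptLearner`, `ModularActionConceptNet`), the softmax gating over experts, and all tensor shapes are hypothetical choices made for this sketch.

```python
# Hypothetical sketch of a Mixture-of-Experts-style composition of visual
# concept learners, loosely following the abstract's description of MAC.
# All module names, shapes, and the gating scheme are illustrative
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class ConceptLearner(nn.Module):
    """One visual concept expert: maps frame features to a concept embedding."""

    def __init__(self, feat_dim: int, concept_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, concept_dim),
            nn.ReLU(),
            nn.Linear(concept_dim, concept_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)


class ModularActionConceptNet(nn.Module):
    """Embodies each semantic action label as a gated combination of
    concept-learner outputs, which then conditions next-frame prediction."""

    def __init__(self, num_actions: int, num_concepts: int,
                 feat_dim: int, concept_dim: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [ConceptLearner(feat_dim, concept_dim) for _ in range(num_concepts)]
        )
        # Each action label selects/weights the concept experts.
        self.gate = nn.Embedding(num_actions, num_concepts)
        self.predictor = nn.Linear(feat_dim + concept_dim, feat_dim)

    def forward(self, frame_feats: torch.Tensor,
                action: torch.Tensor) -> torch.Tensor:
        # frame_feats: (B, feat_dim); action: (B,) integer action labels.
        weights = torch.softmax(self.gate(action), dim=-1)             # (B, K)
        expert_out = torch.stack(
            [expert(frame_feats) for expert in self.experts], dim=1)   # (B, K, D)
        grounded = (weights.unsqueeze(-1) * expert_out).sum(dim=1)     # (B, D)
        # Predict next-frame features conditioned on the grounded concept.
        return self.predictor(torch.cat([frame_feats, grounded], dim=-1))


# Usage: predict next-frame features for two frames with action labels 3 and 1.
model = ModularActionConceptNet(num_actions=8, num_concepts=4,
                                feat_dim=128, concept_dim=64)
feats = torch.randn(2, 128)
actions = torch.tensor([3, 1])
next_feats = model(feats, actions)  # shape: (2, 128)
```

The design point this sketch tries to capture is modularity: the per-action gate reuses a shared pool of concept experts, so a new action label only needs a new gating row rather than a new network, which is consistent with the abstract's claims about compositional generalization and quick adaptation to new categories.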